"Is there such a thing? Just a simple preset to load in curves, and voila... Or is it just a dream?"

Yes, you are dreaming if you think sliders in Lightroom will work quite that easily.
Repost of something I read about reversing color negs.
"The key step is pretty simple to do in Photoshop: sample the colour of the un-inverted negative rebate, make a new layer, fill the layer with the sampled colour, set blend mode to divide, flatten the layers, invert the image, clip RGB black & white points using warnings. Then fine colour adjustments & tonal balancing. The divide blending mode is essential - the mask is not a global colour - it's a mask that's formed inversely proportional to exposure & must be removed as such. If you do so, you're well on your way to manually matching how an optical print responds."
Buy a color enlarger and you won't have to worry about it.
The irony is that I don't understand anything after the statement, "pretty simple to do in Photoshop"!
"The key step is pretty simple to do in Photoshop: sample the colour of the un-inverted negative rebate, make a new layer, fill the layer with the sampled colour, set blend mode to divide, flatten the layers, invert the image, clip RGB black & white points using warnings. Then fine colour adjustments & tonal balancing. The divide blending mode is essential - the mask is not a global colour - it's a mask that's formed inversely proportional to exposure & must be removed as such. If you do so, you're well on your way to manually matching how an optical print responds."
If memory serves, I was part of that discussion. That can get results, but the divide blend mode isn't doing what the original author intended.
As the original author of the quoted statement, as I now understand it, the sample-and-divide can get pretty close to the effect of printing with a 3200K source and 50R correction - which is a global correction, not what I wrote earlier. I'm disinclined to re-linearise the sensitometric curves of the film on the digital side (as opposed to ensuring that the reproduction medium is adequately linear) because it may potentially destroy the characteristics of the film chosen in the first place. Increasingly I'm more interested in seeing if I can adequately model the effects of the paper curve etc.
If done with a bit of care, the methodology will run very close to a good RA4 sort of look, but it'll also make pretty clear whether or not you've crossed curves through poor film process control (though those are correctable too in the individual channels). In a moment of boredom, I combined the whole procedure into a 3D LUT for Portra 400 and an Imacon scanner & it works pretty reliably - but I'd suggest that it's not a universal solution - I'd tend to say it'll need to be tailored to each film and image capture device. The more 'universal' systems like the Frontier etc seem to perform some fairly brutal 'corrections' to force-fit the film into an understanding of colour controllable by semi-skilled operators. In theory, given the standardisation of C-41, it should be possible to design a universal inversion model that works better than that of the Fuji Frontier etc for a given light source and sensor.
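Since baking the procedure into a 3D LUT came up: for what it's worth, a .cube file is easy to generate by sampling an inversion over an RGB lattice. A hedged sketch below — the rebate colour is a made-up placeholder, and a real LUT would fold in your measured values, clip points, and any paper-curve modelling:

```python
import numpy as np

REBATE = np.array([0.85, 0.55, 0.35])  # placeholder orange-mask colour - measure your own
N = 33                                  # common .cube lattice size

def write_cube(path: str, n: int = N) -> None:
    axis = np.linspace(0.0, 1.0, n)
    with open(path, "w") as f:
        f.write(f"LUT_3D_SIZE {n}\n")
        # .cube convention: red varies fastest, then green, then blue
        for b in axis:
            for g in axis:
                for r in axis:
                    rgb = np.array([r, g, b])
                    out = 1.0 - np.clip(rgb / REBATE, 0.0, 1.0)
                    f.write(f"{out[0]:.6f} {out[1]:.6f} {out[2]:.6f}\n")

write_cube("portra400_invert.cube")
```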
The irony is that I don't understand anything after the statement, "as I now understand it".
I do know the technique I quoted gives me pretty good results.
I have those old threads bookmarked. As I am new to color film, my problem is that I do not have a frame of reference, i.e. what a particular emulsion is supposed to look like. I have the same photo scanned by my lab, on an Epson V600 with Epson software, on a Plustek 120 Pro with Silverfast, and with my camera and inverted by hand in Affinity Photo. All four scans look different!
Instead of Photoshop curves, what I want is a "reference" 16-bit linear TIFF in AdobeRGB space of a standard color target, shot by Kodak on every Portra variant. I can make my own curves from there.
@Adrian Bacon This is more or less what I am trying to do (I have an X-Rite kit). The problem with it is that if I "overfit" to such a reference I lose the unique color profile of the emulsion and end up with the same result as my digital camera. So I only align the grey patches, but in that case I don't know if my development is on target. Regarding that, you answered my cyan question earlier; I couldn't tell if the cyan was part of the charm or my temps dropped too much.
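To make "only align the grey patches" concrete, here's a rough sketch of what I mean — per-channel gains computed from the grey-scale patches alone, so the cast is neutralised without touching the rest of the emulsion's palette. The patch values are invented placeholders:

```python
import numpy as np

measured = np.array([   # inverted RGB of three grey patches (placeholders)
    [0.21, 0.19, 0.16],
    [0.48, 0.45, 0.41],
    [0.78, 0.76, 0.70],
])

# Target for each patch is its own channel mean: remove the colour cast
# but leave the tonal placement (and hence the development) visible.
targets = measured.mean(axis=1, keepdims=True)
gains = (targets / measured).mean(axis=0)  # one gain per channel

def grey_balance(img: np.ndarray) -> np.ndarray:
    return np.clip(img * gains, 0.0, 1.0)
```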
Ugh... sometimes I feel there's no way around it: doing any kind of color adjustment is visually hard when you're staring at a freshly inverted, desaturated gamma-1 image, and I'll have no choice but to dust off my C++ skills.
And speaking of that, why did you feel the need to make separate gray card exposures for each stop, versus having a single card with a gray scale (like the one often used to visualize the zone values)?
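On the gamma-1 viewing problem above: one workaround, if you do end up writing code, is to keep the working data linear but push a standard sRGB encode to the screen so the image is at least judgeable while you adjust. A minimal sketch (just the standard sRGB transfer function, nothing more):

```python
import numpy as np

def srgb_encode(linear: np.ndarray) -> np.ndarray:
    """Standard sRGB OETF: linear [0, 1] in, display-encoded [0, 1] out."""
    linear = np.clip(linear, 0.0, 1.0)
    return np.where(linear <= 0.0031308,
                    12.92 * linear,
                    1.055 * np.power(linear, 1.0 / 2.4) - 0.055)
```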