But examining the technical data sheet for Portra 400 (for example), I see that the curves are not merely offset, but also of different slopes. So simply modifying the exposures for each layer won't produce true neutrals throughout the full range of luminances.
Hi, those curves are made by using a densitometer with a "status M" response. This uses three fairly narrow spectral slices to see the film, whereas the photographic paper designed to be a match for the film has a much broader spectral response, meaning that the two don't necessarily see the film the same way.
In your recent book by Giorgianni and Madden look up "printing density" vs "status M densitometry." The 1998 version of the book shows an example where the densitometer plots are not parallel (different slopes) but the printing densities ARE parallel.
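To make that point concrete: a density is always an integral of the film's spectral transmittance weighted by whatever spectral response is doing the looking, so a narrow-band densitometer and a broad-band paper layer can report different densities, and different curve slopes, for the same dye set. A minimal sketch, with placeholder response curves rather than real Status M or paper sensitivity data:

```python
import numpy as np

def integral_density(transmittance, response, wavelengths):
    # Effective transmittance of a film patch as seen through one spectral
    # response channel (e.g. one densitometer band, or one paper layer),
    # then converted to density. All inputs are sampled at `wavelengths`.
    t_eff = np.trapz(transmittance * response, wavelengths) / np.trapz(response, wavelengths)
    return -np.log10(t_eff)

# The same patch measured through a narrow and a broad response will in
# general give different numbers, which is why the plots need not be parallel.
```

Nothing about the real sensitivities is implied here; it only shows why two different responses need not agree about the same dyes.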
I've done lots of work with Kodak pro portrait films up through Portra 160 NC, and in my experience, if you photograph a subject under studio flash, print the negs onto the matching pro portrait paper, and then view the prints under a "proper" viewing condition, you will find that the neutrals are indeed very good throughout the full range.
1. How does normal chemical printing remove the mask?
In optical printing, removing the orange cast is simply a color balance issue. The RGB components are adjusted to give the desired color balance in the print. The paper is balanced to remove the orange cast entirely, and actually does more than it needs to. The overkill is corrected by the combination of tungsten lamp color and printer or enlarger filtration. The filtration is selected by machine or manually to give the final desired balance.
I have two questions. First, should the exposures from a DSLR used as a scanner or an actual scanner match the spectral response properties of the RA-4 paper?
Second, how independent is each layer of the C-41 film when printing?
But e.g. in development I've heard the layers are *not* independent, with denser portions of each layer "stealing" developer from the other two, or something along those lines. (My understanding of the development process is something else I need to brush up on...) If similar inter-layer interactions happen during RA-4 printing, then it won't be enough to correct the spectral response of each channel individually, and more serious monkeying around is needed.
The color paper is balanced on the speed of the red layer. Thus the green is faster by the green Dmin, and the blue is faster still by the blue Dmin. Then, for safety, the paper balance adds about 50R to make sure that the filter pack, on average, does not need cyan filtration. And thus we have a red filter pack. The scanner works much the same way: it scans each layer, r, g, b, with higher "speeds" in each case, just like the paper.
PE
(Though why publish the "status M" curves at all then?)
The mask isn't just a global color that can be removed with a filter or simple color correction sliders. Below are instructions I saw posted here that work well:
"The key step is pretty simple to do in Photoshop: sample the colour of the un-inverted negative rebate, make a new layer, fill the layer with the sampled colour, set blend mode to divide, flatten the layers, invert the image, clip RGB black & white points using warnings. Then fine colour adjustments & tonal balancing. The divide blending mode is essential - the mask is not a global colour - it's a mask that's formed inversely proportional to exposure & must be removed as such. If you do so, you're well on your way to manually matching how an optical print responds.
It takes considerably more time to describe than do! Main area of trouble people tend to have is judging how far to clip the individual black/white points in each of the RGB channels in curves. Best solution I've found is to clip the black point till the rebate has a good black, and the white until just before it starts to clip in the image area. Other important thing is that black points must be set first. Far too often, the preset driven programmes are excessively aggressive with bp/wp settings when compared to manual controls. In comparison, the Fuji Frontier (for example) tends to clip 'white' to outright white, then adjust the output back to an L of 95 amongst a whole series of other oddities that rather stifle the range of many films. It makes sense in the context of that sort of minilab, but if you want something more akin to what an optical print might deliver, I've found manual clipping to be significantly better."
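If you'd rather do those steps outside Photoshop, here is a rough NumPy equivalent. It is only a sketch: it assumes a linear float RGB scan of the un-inverted negative, and the percentile clipping at the end stands in for the by-eye black/white point judgement described above.

```python
import numpy as np

def neg_to_pos(scan, rebate_rgb):
    """scan: HxWx3 float array of the un-inverted negative, values 0..1.
    rebate_rgb: sampled colour of the unexposed rebate, e.g. (0.82, 0.55, 0.30).
    """
    # "Divide" blend: divide every pixel by the rebate colour, so the mask
    # (which is formed inversely proportional to exposure) is removed per channel.
    divided = np.clip(scan / np.asarray(rebate_rgb, dtype=np.float64), 0.0, 1.0)

    # Invert negative -> positive; the rebate now comes out black.
    positive = 1.0 - divided

    # Per-channel black/white point clipping; percentiles stand in for the
    # manual judgement described above.
    out = np.empty_like(positive)
    for c in range(3):
        black = np.percentile(positive[..., c], 0.1)
        white = np.percentile(positive[..., c], 99.9)
        out[..., c] = np.clip((positive[..., c] - black) / (white - black), 0.0, 1.0)
    return out
```

The rebate colour and the percentiles here are placeholders; in practice you sample the rebate from your own scan and judge the clipping by eye, black point first, as described.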
What's going on with an optical print vs. a computer isn't the same thing. I have tried doing normal color balancing to "filter" out the orange on scanned negs and I ended up fighting to get something close to decent. When I started using the instructions I posted, it was fairly simple to get nice results. Just my experience, YMMV.
Would I then see the "true" color of the film as envisioned by Kodak engineers?
I have two questions. First, should the exposures from a DSLR used as a scanner, or from an actual scanner, match the spectral response properties of the RA-4 paper? To be precise, I mean this in the following way. RA-4 paper has different speeds for each color. I can take a DSLR scan of a C-41 neg and look at the resulting data "linearly," meaning as a count of the photons of each color captured. Adjusting the "speed" then means scaling the color channel. Using the film rebate as a neutral reference, I can multiply each channel by a certain coefficient, chosen individually for each channel, so that the rebate is neutral (black once inverted). My understanding is that this was accomplished in the analog world through color filters. After the filters were applied to RA-4, things then were (more or less) correct throughout the luminance range. So should I expect good neutrality throughout the range digitally after applying the per-channel multipliers? Posts in another thread indicated that I should not, which makes me think the camera sensor is seeing things differently than RA-4 paper is.
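To make that concrete, this is the whole of the operation I have in mind, as a minimal sketch on linear raw data (the array and the rebate crop coordinates are just placeholders):

```python
import numpy as np

# Placeholder for a demosaiced, linear scan of the negative.
raw = np.random.rand(2000, 3000, 3)

# Sample the clear film rebate somewhere in the frame (placeholder crop).
rebate = raw[50:150, 50:150].mean(axis=(0, 1))

# Per-channel coefficients chosen so the rebate comes out equal in all three
# channels, i.e. the digital analogue of the enlarger filter pack.
coeffs = rebate.max() / rebate
balanced = raw * coeffs
```

After inversion the rebate then lands on a neutral black, which is what I meant by choosing a coefficient for each channel.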
You are correct that the coefficients do exactly what filtration does. However, in the case of a DSLR, because of the way it is built, it can never replicate the sensitivity of the paper (at least not without expensive modification).
I see. This suggests that fixing the speeds with the coefficients as well as possible, then fiddling with the individual curves subjectively to get something pleasing to the eye, probably has the best work/reward ratio of all correction methods. Because of the differing spectral responses it seems difficult to do better, unless you want to use a spectrometer on an RA-4 print of a ColorChecker, or something equally annoying.
Returning somewhat to the original topic, do you know where I could find information on the "shortcuts" or "fudges" the scanners take beyond simply adjusting per-channel exposure? This might aid my subjective fiddling.
OK, let's start. Using the film Red Dmin as zero, the Green is about 0.8 and the Blue is about 1.2. Taking this to the paper, the Red speed is zero, the Green is +0.8 log E faster, and the Blue is +1.2 log E faster in this example. Add a 50R to this and the G is 1.3 faster and the B is 1.7 faster. Thus, when you print it with a tungsten bulb and a 50R, the image is "nominally" neutral and correctly balanced.
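Spelled out as arithmetic (the 0.8 / 1.2 / 0.5 figures are just the nominal ones from this example):

```python
# Log-exposure speed offsets relative to the red layer (nominal example values).
green_offset = 0.8   # film green Dmin above the red Dmin
blue_offset  = 1.2   # film blue Dmin above the red Dmin
red_filter   = 0.5   # the ~50R safety factor in the pack

print(green_offset + red_filter)   # 1.3 -> green ends up 1.3 log E faster
print(blue_offset + red_filter)    # 1.7 -> blue ends up 1.7 log E faster
```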
But you would be well served to at least read the section on negatives in the Giorgianni and Madden text.
Very few image scanners measure negatives in terms of printing densities; most measure values that are closer to Status M densities. As a result, RGB scanned values alone are not accurate predictors of printing-density metamerism. For example, areas on two different negative films might have identical printing densities, but their RGB scanned values might indicate that the two areas are different.
A simplistic correction can be made using a 1D transformation, but it is still not correct.
Why?
This is easy to do digitally. First you turn all your measurements into density values. So if your red Dmin was already at zero, no further change would be required for red, but let's say it was 0.3: then just subtract 0.3 from all red density values, subtract 0.8 for green (1.1 with the 0.3 included) and 1.2 for blue (1.5 with the 0.3 included), then convert all your measurements back from density values to pixel values, and you have a reasonably correct image. The problem that remains is that your measurements didn't actually match the density as the paper saw it, so a neutral will not remain neutral across the midtones. A simplistic correction can be made using a 1D transformation, but it is still not correct. Also, the colour space model you used does not match the dyes of your print stock.
You don't, of course, need to compute log values for density; you can work with the raw numbers instead, but it is easier to illustrate that way with reference to your example.
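For anyone who wants to try it, here is a rough sketch of that density-offset correction, assuming a linear 16-bit scan. The Dmin figures are only the example values above; you would measure your own from the rebate.

```python
import numpy as np

def subtract_mask_density(scan16, dmin=(0.3, 1.1, 1.5)):
    """scan16: linear 16-bit RGB scan of the negative (transmittance).
    dmin: per-channel minimum densities measured off the film rebate
          (these defaults are just the example figures, not real data).
    """
    t = np.clip(scan16.astype(np.float64) / 65535.0, 1e-6, 1.0)

    # Pixel values (transmittance) -> density, subtract each channel's Dmin,
    # then convert back to pixel values.
    density = -np.log10(t) - np.asarray(dmin, dtype=np.float64)
    corrected = 10.0 ** -np.clip(density, 0.0, None)

    return np.round(corrected * 65535.0).astype(np.uint16)
```

This only neutralises the mask in the (still negative) image; as noted above, it can't fix the fact that the scanner and the paper see the dyes with different spectral responses.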
What scanner is that and what software do you use?

C41 and ECN dyes are not selected to be the best for the human eye as E6 dyes are. The negative films are selected to be the best match for the sensitivity of the paper, and the paper dyes are selected to be the best match for the human eye. The scanner has a fixed sensitivity, but it must adapt for both negatives and positives, which complicates the situation.
In any event, the scanner I have has no problems with negatives or positives and I don't have to apply any corrections at all.
PE