How do scanners color correct C-41 negatives?


Derek L

Member
Joined
Jul 28, 2019
Messages
49
Location
Boston
Format
Digital
Consider taking a DSLR scan of a C-41 negative (raw, linear capture with no gamma correction). You get a negative that must be inverted.

However, it is well-known that due to the orange film base, simply inverting the RGB color channels will result in a positive with a teal cast. This is because the characteristic curves for each of the layers of the film are vertically offset. I've heard that scanners fix this by providing different exposures for the different color layers to compensate for the offset.

But examining the technical data sheet for Portra 400 (for example), I see that the curves are not merely offset, but also of different slopes. So simply modifying the exposure for each layer won't produce true neutrals throughout the full range of luminances. See page 4 here: https://imaging.kodakalaris.com/sites/prod/files/files/resources/e4050_portra_400.pdf. The curves also differ in shape in the "shadow toe."

This leaves two possibilities. One, it was never the goal of C-41 film to produce true neutrals, and the slight color cast is an intentional part of the "film look," so the resulting cast doesn't matter (assuming the basic per-channel exposure adjustment described above). Or two, scanners remove the cast by doing something more sophisticated than just modifying exposure times per color.

Which of these is true? And in either case, are details of the algorithms used by high-end scanners for color correction available anywhere? I suspect it's not much more than simple levels/curves/black-white point adjustments, but I'm curious how exactly those adjustments are being done so I can replicate them. Scaling each channel multiplicatively to simulate different per-channel exposures is trivially easy, but I'm not sure it's the whole story.

(Here I'm concerned with just matching the output of scanners, roughly. I'm aware that color can be "improved" by further tweaking and nonlinear curves fiddling, but I want to understand the fundamentals of the process first.)
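For reference, the naive baseline I'm describing is something like this rough numpy sketch (my own assumptions: the rebate is sampled from the scan, and "contrast" is just a crude stand-in for paper gamma):

```python
import numpy as np

def naive_invert(neg, rebate, contrast=1.0):
    """neg: HxWx3 linear raw values from the DSLR scan (black-subtracted).
    rebate: (3,) mean linear RGB sampled from the unexposed film rebate."""
    # Scaling each channel so the rebate reads equal in R, G and B is the
    # digital stand-in for giving each layer a different exposure.
    t = neg / rebate
    # Work in density: D = -log10(transmittance), then flip the density
    # range to get a linear positive. "contrast" crudely stands in for
    # the paper's gamma.
    d = -np.log10(np.clip(t, 1e-6, None))
    return 10.0 ** (-contrast * (d.max() - d))
```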
 

Mr Bill

Member
Joined
Aug 22, 2006
Messages
1,470
Format
Multi Format
But examining the technical data sheet for Portra 400 (for example), I see that the curves are not merely offset, but also of different slopes. So simply modifying the exposure for each layer won't produce true neutrals throughout the full range of luminances.

Hi, those curves are made by using a densitometer with a "status M" response. This uses three pretty narrow spectral slices to see the film. Whereas the photographic paper designed to be a match for the film has a much broader spectral response. Meaning that they don't necessarily see the film the same.

In your recent book by Giorgianni and Madden look up "printing density" vs "status M densitometry." The 1998 version of the book shows an example where the densitometer plots are not parallel (different slopes) but the printing densities ARE parallel.

I've done lots of work with Kodak pro portrait films up through Portra 160 NC, and in my experience, if you photograph a subject under studio flash, then print the negs onto the matching pro portrait paper, then view the prints under a "proper" viewing condition, you will find that the full range of neutrals are indeed very good.

To be clear, my experience with this optical printing is from something like 15 years ago, and it's possible that the current papers aren't a good match anymore. I dunno. With digital printing the paper exposures are extremely short - the duration of the laser scan, or whatever - whereas the traditional optical prints were made with exposure times of around one-half second and longer. So reciprocity failure might be a big issue, and I think it's clear that today's papers have to be optimized for digital exposure.
 
Joined
Aug 29, 2017
Messages
9,376
Location
New Jersey formerly NYC
Format
Multi Format
Two related questions I thought someone could answer.
1. How does normal chemical printing remove the mask?
2. How do Epson flatbed scanners like the V600, V700 and V800 remove the mask?
 
OP

Derek L

Member
Joined
Jul 28, 2019
Messages
49
Location
Boston
Format
Digital
Hi, those curves are made by using a densitometer with a "status M" response. This uses three pretty narrow spectral slices to see the film. Whereas the photographic paper designed to be a match for the film has a much broader spectral response. Meaning that they don't necessarily see the film the same.

Ah, yes, I remember this point from our previous conversation. Things make much more sense now. (Though why publish the "status M" curves at all then?)

In your recent book by Giorgianni and Madden look up "printing density" vs "status M densitometry." The 1998 version of the book shows an example where the densitometer plots are not parallel (different slopes) but the printing densities ARE parallel.

I've done lots of work with Kodak pro portrait films up through Portra 160 NC, and in my experience, if you photograph a subject under studio flash, then print the negs onto the matching pro portrait paper, then view the prints under a "proper" viewing condition, you will find that the full range of neutrals are indeed very good.

I have been busy with various things and shamefully have not yet read the book. This is a good reminder that I need to.

I have two questions. First, should the exposures from a DSLR used as a scanner, or from an actual scanner, match the spectral response properties of the RA-4 paper? To be precise, I mean this in the following way. RA-4 paper has different speeds for each color. I can take a DSLR scan of a C-41 neg and look at the resulting data "linearly," meaning as a count of the photons of each color captured. Adjusting the "speed" then means scaling the corresponding color channel. Using the film rebate as a neutral reference, I can multiply each channel by a coefficient, chosen individually for each channel, so that the rebate comes out neutral (black once inverted). My understanding is that this was accomplished in the analog world through color filters, and after the filters were applied to RA-4, things were (more or less) correct throughout the luminance range. So should I expect good neutrality throughout the range digitally after applying the per-channel multipliers? Posts in another thread indicated that I should not, which makes me think the camera sensor is seeing things differently than RA-4 paper is.
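For concreteness, this is the kind of check I have in mind, as a rough numpy sketch; the step-wedge patch values are hypothetical and would come from a scan of a gray wedge shot on the same roll:

```python
import numpy as np

def residual_cast(patches, rebate):
    """patches: (N, 3) mean linear RGB of gray step-wedge patches from the
    raw scan of the negative.  rebate: (3,) mean linear RGB of the rebate."""
    # Per-channel multipliers chosen so the rebate comes out equal in all
    # three channels (the digital analogue of per-channel exposure).
    mult = rebate.max() / rebate
    balanced = patches * mult
    # Compare R and B to G in density space; if the multipliers were enough,
    # these offsets stay near zero from toe to shoulder.
    d = -np.log10(np.clip(balanced / balanced.max(), 1e-6, None))
    return np.stack([d[:, 0] - d[:, 1], d[:, 2] - d[:, 1]], axis=1)
```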

Second, how independent is each layer of the C-41 film when printing? My current thinking is that solving the color correction problem means finding individual maps for each of the three color layers, and then once each primary is individually correct (e.g. I have a full range of neutrals) the other colors will magically fall into place as a result. ("Fall into place" meaning "emulate the analogous good RA-4 print," not necessarily be accurate to the scene-referred color.) If the layers are truly independent and additive, then this follows from first principles, I think. But in development I've heard the layers are *not* independent, with denser portions of each layer "stealing" developer from the other two, or something along those lines. (My understanding of the development process is something else I need to brush up on...) If similar inter-layer interactions happen during RA-4 printing, then it won't be enough to correct the spectral response of each channel individually, and more serious monkeying around is needed.
 
Last edited:

Mr Bill

Member
Joined
Aug 22, 2006
Messages
1,470
Format
Multi Format
1. How does normal chemical printing remove the mask?

One can use colored filters until the color of the "mask" is exactly cancelled out. (This is with respect to the color sensitivity of the photo paper.) I say "mask" because it is not exactly only the mask part. The mask proper is used to cancel out unwanted color absorbances in the film dyes. When these unwanted absorbances are combined with the mask, the net result appears to be simply a colored filter over the film.

This is not the whole story, though. Color film is "balanced" for a specific color of exposing light; if this is not just right then there is a color error in the film. So additional printing corrections need to be done to cancel these out.
 

RPC

Member
Joined
Sep 7, 2006
Messages
1,626
Format
Multi Format
1. How does normal chemical printing remove the mask?
In optical printing, removing the orange cast is simply a color balance issue. The RGB components are simply adjusted to give the desired color balance in the print. The paper is balanced to remove the orange cast entirely, and actually does more than it needs to. The overkill is corrected by the combination of tungsten lamp color and printer or enlarger filtration. The filtration is selected by machine or manually to give the final desired balance.

Theoretically, the curves of a film should be parallel, but I can't say with certainty whether a manufacturer might build an intentional bias into the film. Plots of different films vary slightly, and some appear more perfectly parallel than others, assuming proper processing. Any slight unintended deviation probably isn't noticeable.

As far as I know the paper does nothing to correct the film curves other than the RGB levels, as it shouldn't have to.
 
Last edited:

Mr Bill

Member
Joined
Aug 22, 2006
Messages
1,470
Format
Multi Format
I have two questions. First, should the exposures from a DSLR used as a scanner or an actual scanner match the spectral response properties of the RA-4 paper?

Let me start out by saying that I've never seriously worked with scanners nor do I have any experience trying to make corrective software for them. So whatever I say is just hypothetical (maybe speculation is a better word) on my part.

I would think that one way to "skin the cat" would be to mimic the paper, which means that the spectral sensitivity should match the paper. Then you would want to simulate the nonlinearities of the paper response, though I have no idea how you would go about doing that.

It is often said that it's possible to perform a transform to convert any scanner spectral sensitivity into any other, but I'm not entirely convinced. I don't know exactly where the masking dyes sit spectrally, but if a narrow-band scanner doesn't see parts of the mask I don't see how it could possibly be reconstructed.

Second, how independent is each layer of the C-41 film when printing?

I don't know; I suspect not as independent as one would hope.

But e.g. in development I've heard the layers are *not* independent, with denser portions of each layer "stealing" developer from the other two, or something along those lines. (My understanding of the development process is something else I need to brush up on...) If similar inter-layer interactions happen during RA-4 printing, then it won't be enough to correct the spectral response of each channel individually, and more serious monkeying around is needed.

I suspect the monkeying around is ultimately needed for the best quality.

I sort of wish I had gotten involved with scanners back in the day; it looks like a lot of fun.

Another option you might consider at this point is to buy one of those expensive ($300+) Xrite test targets - the ColorChecker SG, which was sold for making digital camera profiles. They come with spectral data so it's possible to calculate appearance under different light sources, etc. (You would look at small spectral steps, multiplying the light source by the spectral reflectivity of each test patch to get an effective spectral makeup. Then you would multiply this by the CIE "color matching functions" for human vision to get CIE color coordinates.) If you were to photograph the chart, then process and scan the film, you ought to be able to compare how the original "color" compares to your scanner results. This would seem to be an ideal way to test your theories on how to program the scanner output.
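Purely as a sketch of that calculation (the spectral data ships with the chart, the CIE colour-matching functions are published tables, and the wavelength grid here is just an assumption):

```python
import numpy as np

def patch_to_XYZ(reflectance, illuminant, xbar, ybar, zbar):
    """All arguments are sampled on the same wavelength grid, e.g. 380-730 nm
    in 10 nm steps (an assumption; use whatever grid the chart data uses)."""
    stimulus = illuminant * reflectance          # light reflected toward the eye
    k = 100.0 / np.sum(illuminant * ybar)        # normalize so a perfect white has Y = 100
    X = k * np.sum(stimulus * xbar)
    Y = k * np.sum(stimulus * ybar)
    Z = k * np.sum(stimulus * zbar)
    return X, Y, Z
```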

Sorry I can't give any real practical help.
 

Photo Engineer

Subscriber
Joined
Apr 19, 2005
Messages
29,018
Location
Rochester, NY
Format
Multi Format
Well, it is clear to me that none of you completely understand neg-pos color systems. Sorry guys. No offense intended here, but I made my living doing this work for 15 years.

The film appears to have an unbalanced neutral due to interimage and masking effects which correct the film densities when scanned or printed.

The color paper is balanced on the speed of the red layer. Thus the green is faster by the green Dmin and the blue is faster still by the blue Dmin. Then, for safety, the paper balance adds about 50R to make sure that the filter pack is, on average, not needing cyan filtration. And thus we have a red filter pack. The scanner works much the same way. It scans each layer r, g, b with higher "speeds" in each case, just like paper.

The film is read by, and matches closely to, Status M while the paper matches Status A or D.

PE
 
OP

Derek L

Member
Joined
Jul 28, 2019
Messages
49
Location
Boston
Format
Digital
The color paper is balanced on the speed of the red layer. Thus the green is faster by the green Dmin and the blue is faster still by the blue Dmin. Then, for safety, the paper balance adds about 50R to make sure that the filter pack is, on average, not needing cyan filtration. And thus we have a red filter pack. The scanner works much the same way. It scans each layer r, g, b with higher "speeds" in each case, just like paper.

PE

It's great to see you in here.

Just to make sure I understand, let me try to translate into a language (digital) I'm more familiar with. The digital sensor measures some number of incoming red, green, and blue photons. I understand digital ISO—which matches the effect of the film ISO by definition—to be, essentially, multiplication of the number of photons seen by some coefficient, with larger ISO numbers meaning larger coefficients. (The details are more complicated but irrelevant to the present discussion.) So suppose I scale the green and blue photon measurements by some numbers less than 1 to match the red photon measurements (where the exact coefficients for green and blue would have to be determined by experimentation). Would I then see the "true" color of the film as envisioned by Kodak engineers? Or at least mimic perfectly the different per-channel exposure times of the scanner?
 
Last edited:

RPC

Member
Joined
Sep 7, 2006
Messages
1,626
Format
Multi Format
If our answers were so far off I think it is just a matter of not exactly knowing what the OP was after. PE, perhaps you could clarify a few things.

If the film has unbalanced neutrals, how does the paper correct for this, or does it? I don't see any problem with neutrals, e.g. gray scales, in my prints. Is that what is being discussed? It is not clear how they are a problem.

I also would like to know once and for all about what I have believed all along, that the paper cancels out the orange negative color (the layer speed differences you mentioned) with some overkill, corrected primarily by filtration, as I said earlier. Is that incorrect?

Thanks for clarifying.
 

StepheKoontz

Member
Joined
Dec 4, 2018
Messages
801
Location
Doraville
Format
Medium Format
The mask isn't just a global color that can be removed with a filter or simple color correction sliders. Below are instructions I saw posted here that work well:

"The key step is pretty simple to do in Photoshop: sample the colour of the un-inverted negative rebate, make a new layer, fill the layer with the sampled colour, set blend mode to divide, flatten the layers, invert the image, clip RGB black & white points using warnings. Then fine colour adjustments & tonal balancing. The divide blending mode is essential - the mask is not a global colour - it's a mask that's formed inversely proportional to exposure & must be removed as such. If you do so, you're well on your way to manually matching how an optical print responds.

It takes considerably more time to describe than do! Main area of trouble people tend to have is judging how far to clip the individual black/ white points in each of the RGB channels in curves. Best solution I've found is to clip the black point till the rebate has a good black, and the white until just before it starts to clip in the image area. Other important thing is that black points must be set first. Far too often, the preset driven programmes are excessively aggressive with bp/ wp settings when compared to manual controls. In comparison, the Fuji Frontier (for example) tends to clip 'white' to outright white, then adjust the output back to an L of 95 amongst a whole series of other oddities that rather stifle the range of many films. It makes sense in the context of that sort of minilab, but if you want something more akin to what an optical print might deliver, I've found manual clipping to be significantly better. "
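For anyone working outside Photoshop, the same steps look roughly like this as a numpy sketch; the percentile values are placeholders of mine, not part of the quoted instructions:

```python
import numpy as np

def invert_with_divide(neg, rebate_rgb, black_pct=0.1, white_pct=0.1):
    """neg: HxWx3 un-inverted scan scaled 0-1.
    rebate_rgb: (3,) sampled colour of the film rebate."""
    # Divide blend: the mask scales with exposure, so divide it out rather
    # than subtracting one global colour.
    divided = np.clip(neg / rebate_rgb, 0.0, 1.0)
    positive = 1.0 - divided
    out = np.empty_like(positive)
    for c in range(3):
        ch = positive[..., c]
        # Black point first (the rebate should land on a solid black), then
        # the white point, clipping each channel separately.
        lo = np.percentile(ch, black_pct)
        hi = np.percentile(ch, 100.0 - white_pct)
        out[..., c] = np.clip((ch - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
    return out
```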
 

Ted Baker

Member
Joined
Sep 18, 2017
Messages
236
Location
London
Format
Medium Format
(Though why publish the "status M" curves at all then?)

My guess is that the spectral bands chosen for Status M were somewhat arbitrary, with a view towards accuracy with respect to process control etc. The measurement bands are narrower than the paper's, so they can never match exactly, so why bother, if your goal is measurement accuracy with respect to manufacturing, process control, etc.?
 

RPC

Member
Joined
Sep 7, 2006
Messages
1,626
Format
Multi Format
The mask isn't just a global color that can be removed with a filter or simple color correction sliders. Below are instructions I saw posted here that work well:

"The key step is pretty simple to do in Photoshop: sample the colour of the un-inverted negative rebate, make a new layer, fill the layer with the sampled colour, set blend mode to divide, flatten the layers, invert the image, clip RGB black & white points using warnings. Then fine colour adjustments & tonal balancing. The divide blending mode is essential - the mask is not a global colour - it's a mask that's formed inversely proportional to exposure & must be removed as such. If you do so, you're well on your way to manually matching how an optical print responds.

It takes considerably more time to describe than do! Main area of trouble people tend to have is judging how far to clip the individual black/ white points in each of the RGB channels in curves. Best solution I've found is to clip the black point till the rebate has a good black, and the white until just before it starts to clip in the image area. Other important thing is that black points must be set first. Far too often, the preset driven programmes are excessively aggressive with bp/ wp settings when compared to manual controls. In comparison, the Fuji Frontier (for example) tends to clip 'white' to outright white, then adjust the output back to an L of 95 amongst a whole series of other oddities that rather stifle the range of many films. It makes sense in the context of that sort of minilab, but if you want something more akin to what an optical print might deliver, I've found manual clipping to be significantly better. "

I don't know exactly what's going on here, but I do know that the orange color on a negative is NOT the MASK. It is the mask PLUS what it is masking, the dye impurities. From what I understand the mask is a positive orange image, and the impurities collectively form a negative orange image, and the two cancel, forming a UNIFORM orange color all over the negative, which can then be filtered out (happens when a print is made), and thus removes the impurities. If this is wrong, PE please correct.
 

StepheKoontz

Member
Joined
Dec 4, 2018
Messages
801
Location
Doraville
Format
Medium Format
forming a UNIFORM orange color all over the negative, which can then be filtered out (happens when a print is made).
What's going on with an optical print vs. a computer isn't the same thing. I have tried doing normal color balancing to "filter" out the orange on scanned negs and I ended up fighting to get something close to decent. When I started using the instructions I posted, it was fairly simple to get nice results. Just my experience, YMMV.
 

Ted Baker

Member
Joined
Sep 18, 2017
Messages
236
Location
London
Format
Medium Format
Would I then see the "true" color of the film as envisioned by Kodak engineers?

There is a way to do that; Kodak Cineon is an example. This was originally an analog-to-analog system with a digital intermediate. It was also designed so that some of the production could be pure analog, mixed in with Cineon footage, with no huge difference that cinema-goers would object to. But the steps are complex, and your understanding is away from where it needs to be. The book by Giorgianni and Madden will get you a lot closer. To model the whole system digitally from beginning to end is quite a task, and it was never done that way originally. Colour negative film was designed as a complete analogue-to-analogue closed-loop system. You can of course skip a few steps or make a few fudges and get a good enough result.

Most if not all scanning software takes a few shortcuts rather than attempting to model the entire negative/positive system.
 
Last edited:

Ted Baker

Member
Joined
Sep 18, 2017
Messages
236
Location
London
Format
Medium Format
I have two questions. First, should the exposures from a DSLR used as a scanner, or from an actual scanner, match the spectral response properties of the RA-4 paper? To be precise, I mean this in the following way. RA-4 paper has different speeds for each color. I can take a DSLR scan of a C-41 neg and look at the resulting data "linearly," meaning as a count of the photons of each color captured. Adjusting the "speed" then means scaling the corresponding color channel. Using the film rebate as a neutral reference, I can multiply each channel by a coefficient, chosen individually for each channel, so that the rebate comes out neutral (black once inverted). My understanding is that this was accomplished in the analog world through color filters, and after the filters were applied to RA-4, things were (more or less) correct throughout the luminance range. So should I expect good neutrality throughout the range digitally after applying the per-channel multipliers? Posts in another thread indicated that I should not, which makes me think the camera sensor is seeing things differently than RA-4 paper is.

You are correct that the coefficients do exactly what filtration does. However, in the case of a DSLR, because of the way it is built it can never replicate the sensitivity of the paper (at least without expensive modification). Even purpose-built scanners, which can be closer, don't typically match, because they are also designed to scan chromes (in many cases that is the primary purpose).

For example, color paper, interpositive film, and internegative film for copying interpositives all have a blind spot which allows the use of a colour safelight, one that our eyes, camera film and DSLRs can all see. Some modelling (or fudges) must be used to get around these problems.
 
Last edited:

Photo Engineer

Subscriber
Joined
Apr 19, 2005
Messages
29,018
Location
Rochester, NY
Format
Multi Format
Guys, you all take medicines for illnesses. Do you know how to make the chemicals in the medicines? Do you know the biochemical reactions of the medicines in the body? This is the equivalent. You are trying to understand complex chemical and color "reactions" with too little experience.

Ok, let's start. Using the film Red Dmin as zero, the Green is about 0.8 and the Blue is about 1.2. Taking this to the paper, the Red speed is zero, the Green is +0.8 log E faster, and the Blue is +1.2 log E faster in this example. Add a 50R to this and the G is 1.3 faster and the B is 1.7 faster. Thus, when you print it with a tungsten bulb with a 50R, the image is "nominally" neutral and correctly balanced.

If you expose the film to R light and a step scale and to Neutral light and a step scale on 2 pieces of film, you get 2 different contrast values. This is due to the color correction of the mask and the DIR couplers among other things. The way to test the system is to make these on film and then print them onto paper and compare them to other coatings. Interestingly, if you expose to a macro step scale and a micro step scale, you also get 2 different contrasts and colors.

Two photos show examples of undercut exposures. There are actually about a dozen or more in a set. The next two illustrate the change in contrast between say a 4x5 image and a 35mm image of the same subject on the same film.

PE
 

Attachments

  • 32155c rgb.jpg
  • green undercut 2.jpg
  • Edge Effects.jpg
  • Micro Contrast.jpg
OP

Derek L

Member
Joined
Jul 28, 2019
Messages
49
Location
Boston
Format
Digital
You are correct that the coefficients do exactly what filtration does. However, in the case of a DSLR, because of the way it is built it can never replicate the sensitivity of the paper (at least without expensive modification).

I see. This suggests that fixing the speeds with the coefficients as well as possible, then fiddling with the individual curves subjectively to get something pleasing to the eye, probably has the best work/reward ratio of all the correction methods. Because of the differing spectral responses it seems difficult to do better unless you want to use a spectrometer on an RA-4 print of a ColorChecker, or something equally annoying.

Returning somewhat to the original topic, do you know where I could find information on the "shortcuts" or "fudges" the scanners take beyond simply adjusting per-channel exposure? This might aid my subjective fiddling.
 

Ted Baker

Member
Joined
Sep 18, 2017
Messages
236
Location
London
Format
Medium Format
I see. This suggests that fixing the speeds with the coefficients as well as possible, then fiddling with the individual curves subjectively to get something pleasing to the eye, probably has the best work/reward ratio of all the correction methods. Because of the differing spectral responses it seems difficult to do better unless you want to use a spectrometer on an RA-4 print of a ColorChecker, or something equally annoying.

With just curves, which are by definition a 1D operation (one curve adjusts one colour), you can get reasonable results with a neutral across all your midtones, and you can do this very fast and automated. To go beyond this you must do some type of 3D transformation, i.e. for each RGB combination you get a new RGB combination. Doing this exhaustively in just 8 bits per colour would need about 16.8 million measurements, i.e. 256x256x256. So some compromise is needed.

Returning somewhat to the original topic, do you know where I could find information on the "shortcuts" or "fudges" the scanners take beyond simply adjusting per-channel exposure? This might aid my subjective fiddling.

The mask is trivial to remove; going beyond that needs some kind of matrix or 3D transformation, sometimes done with a 3D LUT, but fundamentally it is a 3D transformation. Any of the controls like hue adjustment or saturation are basically of this type.
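To illustrate what a 3D transformation of the matrix kind looks like, here is a sketch; the matrix numbers are invented for illustration and would in practice have to be fitted to measurements:

```python
import numpy as np

# Hypothetical "crosstalk" matrix: each row sums to 1 so neutrals stay neutral.
M = np.array([[ 1.10, -0.06, -0.04],
              [-0.08,  1.12, -0.04],
              [-0.02, -0.10,  1.12]])

def apply_matrix(rgb, M):
    """rgb: (..., 3) channel-balanced values after the 1D correction step."""
    # Each output channel mixes all three inputs, which is exactly what
    # three independent 1D curves cannot do.
    return np.einsum('ij,...j->...i', M, rgb)
```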

But you would be well served to at least read the section on negatives in the Giorgianni and Madden text.
 

Ted Baker

Member
Joined
Sep 18, 2017
Messages
236
Location
London
Format
Medium Format
Ok, let's start. Using the film Red Dmin as zero, the Green is about 0.8 and the Blue is about 1.2. Taking this to the paper, the Red speed is zero, the Green is +0.8 log E faster, and the Blue is +1.2 log E faster in this example. Add a 50R to this and the G is 1.3 faster and the B is 1.7 faster. Thus, when you print it with a tungsten bulb with a 50R, the image is "nominally" neutral and correctly balanced.

This is easy to do digitally: first you turn all your measurements into density values. If your red Dmin was already at zero, no further change would be required for red; but let's say it was 0.3, then you just subtract 0.3 from all red density values, for green you subtract 0.8 more (1.1 in total), and for blue 1.2 more (1.5 in total). Then convert all your measurements back from density values to pixel values and you have a reasonably correct image. The problem that remains is that your measurements didn't actually match the density as the paper saw it, so a neutral will not remain neutral across the midtones. A simplistic correction can be made using a 1D transformation but it is still not correct. Also, the colour space model you used does not match the dyes of your print stock.

You don't, of course, need to compute log values for density; you can instead work with the raw numbers, but it is easier to illustrate this way with reference to your example.
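As a rough sketch of the above, using the example numbers (in practice you would measure each channel's Dmin from the rebate, and the scan would need to be normalised against the bare light source):

```python
import numpy as np

def balance_in_density(linear, dmins=(0.3, 1.1, 1.5)):
    """linear: HxWx3 raw values normalised so 1.0 = the light source with no
    film in the path.  dmins: per-channel Dmin; these are the numbers from
    the example above (0.3, 0.3 + 0.8, 0.3 + 1.2)."""
    d = -np.log10(np.clip(linear, 1e-6, 1.0))
    # Subtracting each channel's Dmin puts the rebate at zero density in all
    # three channels, the digital analogue of paper speed plus filtration.
    d = np.clip(d - np.asarray(dmins), 0.0, None)
    return 10.0 ** (-d)     # back from density to "pixel" (transmittance) values
```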
 
Last edited:
OP

Derek L

Member
Joined
Jul 28, 2019
Messages
49
Location
Boston
Format
Digital
But you would be well served to at least read the section on negatives in the Giorgianni and Madden text.

I am happy to report that I have. There is the following interesting bit, which is not fully explained.

Very few image scanners measure negatives in terms of printing densities; most measure values that are closer to Status M densities. As a result, RGB scanned values alone are not accurate predictors of printing-density metamerism. For example, areas on two different negative films might have identical printing densities, but their RGB scanned values might indicate that the two areas are different.

Assuming the remark generalizes to DSLRs, it appears the "just multiply" correction method I outline above is too simplistic, and in fact more complicated per-channel corrections are needed to deal with the varying slopes shown by Status M densitometry, at the very least.

A simplistic correction can be made using a 1D transformation but it is still not correct.

Why? In the most naive possible model of optical printing, we're just adding the three layers independently, so it should suffice to manipulate and correct each individually to get the right result. Giorgianni and Madden do discuss crosstalk, but say this is canceled by the orange base. So I'm not quite sure why we should be introducing crosstalk digitally.

Even if we do, why doesn't it suffice to model each layer as having different sensitivities to each primary (the desired primary plus two impurities) and then solve the resulting linear algebra problem to find the final "true" RGB values (it's just a 3x3 matrix...)? Of course there's still the issue of differing spectral sensitivities between the DSLR and paper to contend with, but this seems like a marginal increase in complexity and should model the printing crosstalk fairly well. (Again, assuming the crosstalk is worth modeling and not canceled by the orange base. I'm not sure whether this is the case.)
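To be concrete about what I mean by "solve the linear algebra problem," something like the following, where the target values are assumed to be known from a reference of some kind (a chart, printing densities, etc.), which is of course the hard part:

```python
import numpy as np

def fit_3x3(scanned, target):
    """scanned: (N, 3) linear DSLR readings for N patches.
    target: (N, 3) the values those patches "should" have.
    Returns M such that M @ s approximates the target for a scanned triplet s."""
    # Least squares: find X with scanned @ X ~= target, then M = X.T
    X, _res, _rank, _sv = np.linalg.lstsq(scanned, target, rcond=None)
    return X.T

# usage: corrected = scanned @ fit_3x3(scanned, target).T
```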
 

Ted Baker

Member
Joined
Sep 18, 2017
Messages
236
Location
London
Format
Medium Format

Not a simple answer, but consider the following:

Let's start by scanning a chrome, or perhaps a frame from a cinema print that was optically printed in the nineties. Imagine that the frame has three patches on it: red, green and blue. If we scan it and get the values 50%,0%,0% and 0%,50%,0% and 0%,0%,50% respectively for those patches, what colour have we scanned? The answer is we have no idea, unless we know the primaries that those values represent, i.e. what red, green and blue are being used to create those colours. If we use my laptop, which has a screen engineered to be close to the sRGB standard, then we can indeed say what the colour is colorimetrically. Now, the primaries used in my laptop are definitely not the same as in my wide-gamut monitor, and they are not the same primaries used in the chrome or print stock I scanned. Can my laptop screen match the same colour as the film, or as my wide-gamut monitor? For a subset of those colours, yes, exactly. And for the subset it can match exactly, how do you do so? You cannot match it with a 1D transformation; for each triplet there is at least one triplet using the different primaries that will give an exact match, i.e. a 3D transformation, sometimes performed by a 3x3 matrix and sometimes by a 3D lookup table.

So you will need a 3D transform to at least model the paper.

Now back to the negative. Since it is not designed to be viewed by the human eye, there is no need for any specific colour to represent the primaries; all you need is for those primaries to be "viewable by the print stock." You could, for example, have one dye that restricted UV light, one dye that restricted visible light, and one dye that restricted infrared. If you could actually build such a system with widely separated dyes, and print stock that could see those dyes, it would work perfectly and you would also not need any masking. You would not be able to scan the film with a conventional DSLR, though. The original Technicolor actually just used three rolls of black-and-white film, so this concept has been used before.
The interaction between colour negative and positive mimics part of this concept with the red-sensitive layer. The blue and green layers of the print stock have similar spectral sensitivity to traditional camera negative, and are also close to the spectral sensitivity of the blue and green layers of your DSLR. These are the layers that also have the masking, to reduce crosstalk from adjacent dyes. The red-sensitive layer of the print stock is different from negative film, from a DSLR, and from anything designed to match the human eye. It has, for example, a blind spot which allows the use of a colour safelight, right where the peak sensitivity of the red sensels of your DSLR is, and not far from the peak sensitivity of the cones in your eye that respond to long-wavelength light. And the peak sensitivity of the red layer of the paper is where the infrared filter of your DSLR cuts out most of the light. Having the red sensitivity peak around 700nm reduces crosstalk, as the red layer cannot also have a mask, i.e. you can't have a three-colour mask.

So every triplet from your DSLR is slightly wrong. This cannot be corrected with just a 1D transformation. The 10-bit Cineon system can represent about 1.07 billion (1024^3) triplets; the 10-bit log encoding was sized to fit the dynamic range of colour negative film. There is no need, nor is it possible, to measure the correction needed for every one of those triplets, but you need more than a 1D transformation using curves to model the system.
 
Last edited:

Photo Engineer

Subscriber
Joined
Apr 19, 2005
Messages
29,018
Location
Rochester, NY
Format
Multi Format
This is easy to do digitally: first you turn all your measurements into density values. If your red Dmin was already at zero, no further change would be required for red; but let's say it was 0.3, then you just subtract 0.3 from all red density values, for green you subtract 0.8 more (1.1 in total), and for blue 1.2 more (1.5 in total). Then convert all your measurements back from density values to pixel values and you have a reasonably correct image. The problem that remains is that your measurements didn't actually match the density as the paper saw it, so a neutral will not remain neutral across the midtones. A simplistic correction can be made using a 1D transformation but it is still not correct. Also, the colour space model you used does not match the dyes of your print stock.

You don't, of course, need to compute log values for density; you can instead work with the raw numbers, but it is easier to illustrate this way with reference to your example.

C-41 and ECN dyes are not selected to be the best for the human eye, as E6 dyes are. The negative films are selected to be the best match for the sensitivity of the paper, and the paper dyes are selected to be the best match for the human eye. The scanner has a fixed sensitivity, but it must adapt for both negatives and positives, which complicates the situation.

In any event, the scanner I have has no problems with negatives or positives and I don't have to apply any corrections at all.

PE
 
Joined
Aug 29, 2017
Messages
9,376
Location
New Jersey formerly NYC
Format
Multi Format
C-41 and ECN dyes are not selected to be the best for the human eye, as E6 dyes are. The negative films are selected to be the best match for the sensitivity of the paper, and the paper dyes are selected to be the best match for the human eye. The scanner has a fixed sensitivity, but it must adapt for both negatives and positives, which complicates the situation.

In any event, the scanner I have has no problems with negatives or positives and I don't have to apply any corrections at all.

PE
What scanner is that and what software do you use?
 