I've been researching a new scanning method with a DSLR that uses three different light sources, one per channel, instead of a single uniform (even if high-quality) light pad. I started investigating this because of the shit quality of all available scanners, made worse by the C-caliber talent output from Lasersoft, yet I was never able to achieve satisfactory results with manual color inversion of DSLR scans either.
Removing the orange mask is a tiny (and the most trivial) part of the inversion, and I was always puzzled by the fact that even after mask removal the image was still blue-ish: adjusting the gray point made colors look OK in the midtones, but highlights/shadows would be fucked up anyway. Once @Adrian Bacon mentioned that film emulsions have a different gamma for each color, I pulled the datasheets for a couple of Kodak emulsions and ho-lee-fuk, indeed, each color basically has its own characteristic curve! I am now fully convinced there is no way in hell to hand-invert a color negative with crude tools like Photoshop under these circumstances; the inversion has to undo a different gamma per channel, something like the sketch below.
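For anyone who wants to play with it, here is a minimal sketch of per-channel inversion in Python. The gamma values are placeholders I made up, and `film_base` is assumed to be sampled off an unexposed strip of film; the real slopes come from the straight-line section of each characteristic curve in the datasheet:

```python
import numpy as np

# Placeholder per-layer gammas (slope of the straight-line section of each
# characteristic curve); read the real numbers off the Kodak datasheet.
GAMMA = np.array([0.62, 0.60, 0.55])  # R, G, B -- made-up values

def invert_negative(scan: np.ndarray, film_base: np.ndarray) -> np.ndarray:
    """scan: HxWx3 linear raw values; film_base: RGB of unexposed film edge."""
    # Dividing by the film base cancels the orange mask (a constant density).
    t = np.clip(scan / film_base, 1e-6, 1.0)
    # Transmittance t = 10**(-D), and on the straight-line section
    # D = gamma * log10(E) + const, so scene exposure E is proportional
    # to t ** (-1/gamma) -- with a *different* exponent per channel.
    positive = t ** (-1.0 / GAMMA)
    # One global scale factor, so per-channel balance is preserved.
    return positive / positive.max()
```

A gray-point adjustment is just one multiplier per channel; the per-channel exponent is what it cannot reproduce, which is exactly why the midtones land while the highlights and shadows drift.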
I started researching how high-end scanners do it, and it quickly became apparent that they rely on dedicated light sources, i.e. they "expose" each emulsion layer in isolation from the others. So I began googling for the "why": why not use a single high-quality light source and one shot? This blog post provided some answers, but I am still puzzled by this statement:
"... More importantly, because you’re representing those values in the camera RGB space, the color correction offset for the color leakage in the negative dyes is no longer accurate, because it was created relative to the CMY primaries in the color negative, not the RGB primaries in your camera..."
I do not get it. I have ordered the book on color management mentioned in the article, but I am still trying to make sense of the key sentence above. "Camera RGB space" makes no sense to me: cameras do not have a color space, they record linear raw data from the sensor, and you can pick any color space for the resulting JPEG/TIFF that you wish, so what is he talking about? I sort of get the channel crossover issue (which I thought the orange mask takes care of), but not the colorspace mismatch. And because I don't understand this, the rest of the article makes no sense to me either: why would combining separate monochrome R/G/B captures produce more accurate results? What's missing in a RAW file? The toy model below is my attempt to build some intuition for where the leakage comes from, but it still doesn't explain the quote to me.
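All the spectra here are made-up Gaussians standing in for the real dye absorption and sensor sensitivity curves, so treat it as a cartoon, not data. For small densities it estimates what fraction of each dye layer's density shows up in each capture:

```python
import numpy as np

wl = np.arange(400, 701, 5.0)  # wavelength grid in nm

def gauss(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Made-up but plausibly shaped spectra; the real curves live in the film
# datasheet and the camera's sensor characterization.
dye_absorb = np.stack([gauss(650, 40),    # cyan dye (absorbs red)
                       gauss(550, 40),    # magenta dye (absorbs green)
                       gauss(440, 40)])   # yellow dye (absorbs blue)
cam_sens = np.stack([gauss(600, 50),      # camera R channel
                     gauss(530, 50),      # camera G channel
                     gauss(460, 50)])     # camera B channel

def coupling_row(sens, illum):
    """Linearized fraction of each dye's density seen by one capture.

    For small densities, log(channel signal) is roughly a weighted average
    of the three dye densities, weighted by sens * illum.
    """
    w = sens * illum
    return (w @ dye_absorb.T) / w.sum()

# One-shot capture under a uniform light pad: every camera channel
# integrates across all three dyes' absorption bands.
flat = np.ones_like(wl)
M_broad = np.stack([coupling_row(s, flat) for s in cam_sens])

# Tri-color capture: three exposures, each under one narrow LED sitting
# on a dye's absorption peak.
leds = [gauss(640, 8), gauss(540, 8), gauss(450, 8)]
M_tri = np.stack([coupling_row(s, led) for s, led in zip(cam_sens, leds)])

np.set_printoptions(precision=2, suppress=True)
print("broadband coupling (rows=R,G,B; cols=C,M,Y):\n", M_broad)
print("tri-color coupling:\n", M_tri)
```

If this cartoon is right, the broadband matrix has large off-diagonal terms (each channel reads a blend of all three dye layers, and no constant mask-removal offset can unmix an image-dependent blend), while the tri-color matrix comes out nearly diagonal, so each exposure behaves like a densitometer reading of a single layer. That at least tells me what the dedicated light sources buy, but it still doesn't square the "CMY primaries vs camera RGB primaries" wording for me.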
If any of you have seriously looked into this, do you mind sharing your thoughts?