RGB scanning

Bormental

Member
Joined
Mar 1, 2020
Messages
622
Location
USA
Format
Multi Format
I've been researching a new scanning method with a DSLR camera that uses 3 different light sources, one per channel, vs one uniform (even if high quality) light pad. I started investigating this because of the shit quality of all available scanners, made worse by the C-caliber talent output from Lasersoft, yet I was unable to achieve satisfactory results with manual color inversion of DSLR scans.

Removing the orange mask is a tiny (and the most trivial) part of the inversion, and I was always puzzled by the fact that even after the mask removal, the image was always blue-ish; adjusting it to a gray point made colors look OK in the middle, but highlights/shadows would be fucked up anyway. Once @Adrian Bacon mentioned that film emulsions have a different gamma for each color, I pulled the data sheets for a couple of Kodak emulsions and ho-lee-fuk, indeed, each color basically has its own characteristic curve! I am now fully convinced there is no way in hell it's possible to hand-invert a color negative using crude tools like Photoshop under these circumstances.
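To see why no single global correction can fix this, here's a toy numpy sketch. The gamma values and base densities are invented, not from any real datasheet: dividing out the mask removes each channel's offset, but the slope difference between the curves makes the channel ratio drift with exposure, so only a per-channel 1/gamma correction lines all three up at every tone level.

```python
import numpy as np

# Illustrative per-channel gammas (slopes of the D-logE curves) and minimum
# densities -- invented numbers, not from any real Kodak datasheet.
gamma_r, gamma_b = 0.55, 0.65
dmin_r, dmin_b = 0.25, 0.90   # the orange mask: different base density per dye

log_e = np.linspace(0.0, 2.0, 5)              # scene log exposure

# Straight-line model of each dye layer: D = dmin + gamma * logE,
# and the transmittance a camera sees is 10^-D.
t_r = 10.0 ** -(dmin_r + gamma_r * log_e)
t_b = 10.0 ** -(dmin_b + gamma_b * log_e)

# Dividing out the mask removes the per-channel OFFSET...
t_r_unmasked = t_r / 10.0 ** -dmin_r
t_b_unmasked = t_b / 10.0 ** -dmin_b

# ...but the SLOPE difference remains: the channel ratio drifts with
# exposure, so a gray-point tweak can only neutralize one tone level.
ratio = t_r_unmasked / t_b_unmasked           # 10^((gamma_b-gamma_r)*logE)

# Per-channel 1/gamma linearization brings the channels back together
# at every exposure, not just one.
lin_r = t_r_unmasked ** (1.0 / gamma_r)
lin_b = t_b_unmasked ** (1.0 / gamma_b)
```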

I started researching how high-end scanners do it, and it quickly became apparent that they rely on dedicated light sources, i.e. they "expose" each emulsion layer in isolation from the others. I began googling for "why": why not use a single-shot, high-quality light source? This blog post provided some answers, but I am still puzzled by this statement:

"... More importantly, because you’re representing those values in the camera RGB space, the color correction offset for the color leakage in the negative dyes is no longer accurate, because it was created relative to the CMY primaries in the color negative, not the RGB primaries in your camera..."

I do not get it. While I have ordered the book on color management mentioned in the article, I am still trying to make sense of the key sentence above. "Camera RGB space" makes no sense to me. Cameras do not have a color space, they record linear raw data from their sensor and you can pick any colorspace for the resulting jpeg/tiff as you wish, what is he talking about here? I kind of get channel crossover issue (which should be taken care of by the orange mask), but not the colorspace mismatch. And as a result of not understanding this, the rest of the article makes no sense: why would combining separate monochrome R/G/B images produce more accurate results? What's missing in a RAW file?

If any of you have seriously looked into this, do you mind sharing your thoughts?
 

Lachlan Young

Member
Joined
Dec 2, 2005
Messages
4,850
Location
Glasgow
Format
Multi Format
Essentially all colour neg films have closely matching aim gamma or average contrast gradient (the steepness of the curves) - if the gradient differs, it's usually for good reasons related to colour reproduction.

If the mask is divided out with the correct colour (preferably in a separate layer), any instance of that colour should go to '1' - i.e. the rebate should go white (pre-inversion). The next steps are fairly simple: I add an inversion layer, then a curve layer, and (using the clipping warnings) bring in the black point until the rebate is solid / the deepest shadows are just clipping to black - and then the white point until either just before the highlights clip, or to about 150 (RGB units, I think) above the black value I set if the scene is flatter. You shouldn't need to do individual channel clipping on each of R, G and B unless there is a crossover problem you are trying to solve. At this point the image may have a slight colour cast, which, if everything is on aim, should be solvable with a curves layer and a small tweak of the mids of the relevant colour curve to neutralise it - in other words it should be a single overall colour cast, not one that has crossover. From here, it's a question of adding curves, masks etc. to taste - the biggest thing is learning how to break a colour cast apart into a set of colour corrections so that you know which colours to add/subtract.
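For anyone who wants to see those steps outside Photoshop, here is a minimal numpy sketch of the same sequence - divide by the rebate colour, invert, apply a single global black/white point. The mask colour and pixel values are invented for illustration.

```python
import numpy as np

def invert_negative(img, mask_rgb, black=0.02, white=0.98):
    """img: linear float RGB array in 0..1; mask_rgb: colour sampled from
    the unexposed rebate. Returns a rough positive."""
    unmasked = img / np.asarray(mask_rgb)    # rebate goes to ~1.0 everywhere
    pos = 1.0 - np.clip(unmasked, 0.0, 1.0)  # invert
    pos = (pos - black) / (white - black)    # single global black/white point
    return np.clip(pos, 0.0, 1.0)

# Toy two-pixel "frame": the rebate itself plus one mid-tone, both seen
# through an orange-ish mask (values invented).
mask = [0.85, 0.55, 0.30]
frame = np.array([[0.85, 0.55, 0.30],   # unexposed rebate
                  [0.40, 0.25, 0.15]])  # a mid-tone
out = invert_negative(frame, mask)
# After inversion and the black-point clip, the rebate lands at solid black.
```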
 
OP

Bormental

Member
Joined
Mar 1, 2020
Messages
622
Location
USA
Format
Multi Format
Lachlan, I have a suspicion that you're working with a RAW file from a scanner, not from a camera. Tonight I will take another look at the sample you sent me earlier, but the article I linked claims that scanners do not have the misaligned-channel-curves problem, only DSLRs do.

What you're saying above does not work for DSLR inversion. I have been trying for months. I blamed my skills, but eventually rebelled against that - I have been doing digital image editing my whole life. So I started looking, and found this article. What you're describing should work in theory with an un-inverted RAW scan, because a good scanner keeps the CMY curves parallel to each other by exposing them separately with three different LEDs. If so, why does that work? The guy says "because camera color space does not match film color space" and my head explodes reading that statement. I am no color scientist, but this goes against everything I know about color spaces. They are just symbols we use to describe reality. Camera and film are two realities, both can be described using any color space, and what do separate LEDs have to do with color spaces?
 

Adrian Bacon

Member
Joined
Oct 18, 2016
Messages
2,086
Location
Petaluma, CA.
Format
Multi Format
I am now fully convinced there is no way in hell it's possible to hand-invert a color negative using crude tools like Photoshop under these circumstances.

Exactly why I started down the path I did. Photoshop is an amazing piece of software, but not well suited to film scanning.
 

Adrian Bacon

Member
Joined
Oct 18, 2016
Messages
2,086
Location
Petaluma, CA.
Format
Multi Format
Lachlan, I have a suspicion that you're working with a RAW file from a scanner, not from a camera. Tonight I will take another look at the sample you sent me earlier, but the article I linked claims that scanners do not have the misaligned-channel-curves problem, only DSLRs do.

What you're saying above does not work for DSLR inversion. I have been trying for months. I blamed my skills, but eventually rebelled against that - I have been doing digital image editing my whole life. So I started looking, and found this article. What you're describing should work in theory with an un-inverted RAW scan, because a good scanner keeps the CMY curves parallel to each other by exposing them separately with three different LEDs. If so, why does that work? The guy says "because camera color space does not match film color space" and my head explodes reading that statement. I am no color scientist, but this goes against everything I know about color spaces. They are just symbols we use to describe reality. Camera and film are two realities, both can be described using any color space, and what do separate LEDs have to do with color spaces?

If you look at the actual raw samples of a DSLR, you'll very quickly discover that the channels do not have the same sensitivity. The green channel is way more sensitive than the red or blue channel, and the red and blue channels are usually tuned to have the same amount of sensitivity somewhere between 4000K and 6000K. If you're trying to invert a negative from a DSLR scan after it's already been turned into an RGB TIFF file, you also have the added complexity of the RGGB-to-RGB debayering process, plus the unmanaged-raw-color-to-managed-colorspace transform, plus whatever gamma encoding that colorspace uses. In short, not a lot of fun, which is why the software I wrote does all its work on the raw Bayer array samples in raw, unmanaged color. I don't debayer. I get the samples looking like they've been shot as a positive image, so I just have a set of raw Bayer samples from a digital camera. That way Adobe's camera raw engine can just do its thing, and all I have to do is make sure the scanned film looks like a positive image from a digital sensor.
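A sketch of what the sensitivity part of "working on the raw Bayer samples" can look like: per-channel gains applied straight to an RGGB mosaic, with no debayering. The gain numbers are invented; in practice they would be measured, e.g. from a shot of the bare light source or the rebate.

```python
import numpy as np

# A 4x4 RGGB mosaic (repeating 2x2 tile: R G / G B), uniform 1000 counts
# for clarity.
bayer = np.full((4, 4), 1000.0)

# Invented per-channel gains to equalize sensitivity.
gain_r, gain_g, gain_b = 2.1, 1.0, 1.8

balanced = bayer.copy()
balanced[0::2, 0::2] *= gain_r   # R photosites
balanced[0::2, 1::2] *= gain_g   # G photosites, even rows
balanced[1::2, 0::2] *= gain_g   # G photosites, odd rows
balanced[1::2, 1::2] *= gain_b   # B photosites
# No debayering happened: the result is still a mosaic, ready for a normal
# raw pipeline downstream.
```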
 
OP

Bormental

Member
Joined
Mar 1, 2020
Messages
622
Location
USA
Format
Multi Format
@Adrian Bacon I have bookmarked your explanation of simple image tools, and it's been helpful to understand what's going on. However, I do not have Simple Image Tools :smile: I am trying to get as close as possible without having to build my own simple image tools. Currently my workflow is as follows:
  • Have a CRI 95 light source with a known white balance of 5000K
  • Tether camera to a computer, shoot RAW
  • Export RAW as a 16-bit linear TIFF (gamma 1)
  • Fuji color profile is turned off. I tried the "no color correction" profile and the "Adobe RGB" profile without seeing a significant difference.
  • Try to invert via the usual mask sampling + divide and curve play.
I have been having a HARD TIME with this. Usually there's a strong blue/cyan cast and it's really hard to get rid of that evenly in shadows/midtones/highlights.

That was until Lachlan gave me the tip of NOT setting the white balance to 5000K prior to the RAW -> TIFF export. I am getting somewhat more predictable results now, but something is still off; I suspect I'm losing something during the RAW -> TIFF conversion. I am aware of having double the green pixels. I was hoping that setting the "Linear" profile for TIFF export accounts for that.
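One plausible reason that tip helps (all numbers invented for illustration): white-balance multipliers baked into the export can clip a strong channel before you ever get to invert, while the later divide-by-mask step cancels any global per-channel gain anyway, because the same gain multiplies both the pixel and the rebate sample.

```python
import numpy as np

# Invented linear values for one pixel behind the orange mask (R strongest,
# B weakest) and invented daylight white-balance multipliers.
raw = np.array([0.90, 0.60, 0.20])
wb_5000k = np.array([1.9, 1.0, 1.6])
mask = np.array([0.95, 0.65, 0.25])   # rebate colour in the same file

# Baking the WB into the export can clip the strongest channel:
exported = np.clip(raw * wb_5000k, 0.0, 1.0)   # R hits 1.0 -- data lost

# Meanwhile, dividing by the mask cancels any global per-channel gain:
same = (raw * wb_5000k) / (mask * wb_5000k)    # identical to raw / mask
```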

Also, the original blog post explanation above still does not make any sense to me:

"... More importantly, because you’re representing those values in the camera RGB space, the color correction offset for the color leakage in the negative dyes is no longer accurate, because it was created relative to the CMY primaries in the color negative, not the RGB primaries in your camera..."

How is making separate R/G/B exposures different from making one exposure sliced across a Bayer array? How does an orange mask defined in CMY suddenly become invalid in RGB?
 

Adrian Bacon

Member
Joined
Oct 18, 2016
Messages
2,086
Location
Petaluma, CA.
Format
Multi Format
@Adrian Bacon I have bookmarked your explanation of simple image tools, and it's been helpful to understand what's going on. However, I do not have Simple Image Tools :smile: I am trying to get as close as possible without having to build my own simple image tools. Currently my workflow is as follows:
  • Have a CRI 95 light source with a known white balance of 5000K
  • Tether camera to a computer, shoot RAW
  • Export RAW as a 16-bit linear TIFF (gamma 1)
  • Fuji color profile is turned off. I tried the "no color correction" profile and the "Adobe RGB" profile without seeing a significant difference.
  • Try to invert via the usual mask sampling + divide and curve play.
I have been having a HARD TIME with this. Usually there's a strong blue/cyan cast and it's really hard to get rid of that evenly in shadows/midtones/highlights.

That was until Lachlan gave me the tip of NOT setting the white balance to 5000K prior to the RAW -> TIFF export. I am getting somewhat more predictable results now, but something is still off; I suspect I'm losing something during the RAW -> TIFF conversion. I am aware of having double the green pixels. I was hoping that setting the "Linear" profile for TIFF export accounts for that.

Also, the original blog post explanation above still does not make any sense to me:

"... More importantly, because you’re representing those values in the camera RGB space, the color correction offset for the color leakage in the negative dyes is no longer accurate, because it was created relative to the CMY primaries in the color negative, not the RGB primaries in your camera..."

How is making separate R/G/B exposures different from making one exposure sliced across a Bayer array? How does an orange mask defined in CMY suddenly become invalid in RGB?

Hang on a second... you're using a Fuji camera? With an X-Trans sensor? You're getting double-whammied. That ain't a Bayer array; it has even fewer red and blue samples than a Bayer array. It's mostly green.

The camera RGB space is nothing more than the camera's native color response. The author used a poor choice of words. Just like the CIE 1931 color model represents human vision, a digital camera would have a similar thing if its color response were mapped out. You could call it a color space, but it's really just the camera's color response. The job of the raw processor is to take that and transform it to an actual colorspace that the rest of the imaging chain knows what to do with.
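That transform can be sketched as a 3x3 matrix applied to the linear camera-native values. The matrix below is invented, standing in for a fitted camera-to-colorimetric profile; the only property it's built to show is that each row sums to 1, so a sensor-neutral pixel stays neutral after the transform.

```python
import numpy as np

# Invented 3x3 matrix standing in for a fitted camera-to-colorimetric
# transform; each row sums to 1 so sensor-neutral stays neutral.
cam_to_srgb_linear = np.array([
    [ 1.60, -0.45, -0.15],
    [-0.30,  1.45, -0.15],
    [ 0.05, -0.55,  1.50],
])

cam_neutral = np.array([0.5, 0.5, 0.5])   # equal response from the sensor
out = cam_to_srgb_linear @ cam_neutral    # stays neutral after the transform
```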
 

Adrian Bacon

Member
Joined
Oct 18, 2016
Messages
2,086
Location
Petaluma, CA.
Format
Multi Format
More importantly, because you’re representing those values in the camera RGB space, the color correction offset for the color leakage in the negative dyes is no longer accurate, because it was created relative to the CMY primaries in the color negative, not the RGB primaries in your camera

Thinking about this more, that's a load of crap. If it were true, then an enlarger wouldn't work right as it uses a full spectrum light source....
 

Lachlan Young

Member
Joined
Dec 2, 2005
Messages
4,850
Location
Glasgow
Format
Multi Format
Thinking about this more, that's a load of crap. If it were true, then an enlarger wouldn't work right as it uses a full spectrum light source....

There's an important caveat here: the print-through characteristics of the film. The dyes are intended to work correctly when exposed with a 3200K illuminant (+/- an unknown amount of K) + 50R, or a set of sequential exposures through RGB filters, which when exposed correctly (as long as the illuminant is reasonably full spectrum) are going to end up back at that point. They aren't designed to be exposed uncorrected with a 5000-5500K source.
 

grat

Member
Joined
May 8, 2020
Messages
2,045
Location
Gainesville, FL
Format
Multi Format
There's an open source tool called Darktable (modeled on Lightroom, though I believe they've gone in different directions) which just released version 3.2 and includes a tool called NegaDoctor. No bonus points for guessing what it does. The tool isn't exactly a hand-holder, but the basic workflow is to set the white balance first, then set the film mask color, then either let it automatically find highlights/shadows or pick them manually yourself.

My testing so far suggests that it does a fair job of color correcting, even though I used a DSLR, auto white balance, and a questionable light source (a cheap, low-CRI LED 'tracing' board - fine for viewing negatives, but not so great for scanning color negatives).

Tutorial / demonstration here:

 

Adrian Bacon

Member
Joined
Oct 18, 2016
Messages
2,086
Location
Petaluma, CA.
Format
Multi Format
There's an important caveat here: the print-through characteristics of the film. The dyes are intended to work correctly when exposed with a 3200K illuminant (+/- an unknown amount of K) + 50R, or a set of sequential exposures through RGB filters, which when exposed correctly (as long as the illuminant is reasonably full spectrum) are going to end up back at that point. They aren't designed to be exposed uncorrected with a 5000-5500K source.

The dyes of the film or the paper? Full spectrum is full spectrum. The Kelvin number is just the power relationship between amber and blue. There's a whole other axis (green-magenta) that most people totally miss. The Kelvin scale is actually a curved line along both axes; this is why, when you see it superimposed on the CIE horseshoe, it's not straight but curved. The green/magenta has a different power ratio at each Kelvin point, the same way the amber/blue does.

All that aside, the paper is just optimized to work best with a light source at ~3200K because that was what was available at the time; it became the de facto standard and never changed. If the standard light had been full-spectrum daylight, the paper would probably have been optimized for that and worked just fine.
 

MattKing

Moderator
Joined
Apr 24, 2005
Messages
52,227
Location
Delta, BC Canada
Format
Medium Format
Clearly someone needs to design a sensor and processor and firmware combination that is designed to do two things well - extract information from a masked colour negative illuminated by a continuous spectrum source, and extract information from a colour transparency illuminated by a continuous spectrum source.
And then build it into something that will hold a variety of film formats flat, image them with very high resolution and excellent contrast, and permit easy and quick frame to frame movement.
 

mshchem

Subscriber
Joined
Nov 26, 2007
Messages
14,376
Location
Iowa City, Iowa USA
Format
Medium Format
I never have had the patience for scanning color negatives. I scan 35mm slides with a nice little Nikon Coolscan and VueScan software, and I have a cheap Canon scanner for 6x17 chromes, still using VueScan.
Matt has said it: the individual bits seem to be out there, but no one has put it all together. There are minilab scanners (they go up to 6x7), and they are still a pain.
If I want color prints, I either print analog prints or shoot digital and print with my inkjet printers.

I still find analog color printing less mind and butt numbing than trying to get film scanned and adjusted etc.
 

Lachlan Young

Member
Joined
Dec 2, 2005
Messages
4,850
Location
Glasgow
Format
Multi Format
The dyes of the film or the paper? Full spectrum is full spectrum. The Kelvin number is just the power relationship between amber and blue. There's a whole other axis (green-magenta) that most people totally miss. The Kelvin scale is actually a curved line along both axes; this is why, when you see it superimposed on the CIE horseshoe, it's not straight but curved. The green/magenta has a different power ratio at each Kelvin point, the same way the amber/blue does.

All that aside, the paper is just optimized to work best with a light source at ~3200K because that was what was available at the time; it became the de facto standard and never changed. If the standard light had been full-spectrum daylight, the paper would probably have been optimized for that and worked just fine.

The amber/blue vs green/ magenta aspect is (I'd hope) very obvious to anyone who has used a colour meter.

You might want to have a look at the spectral dye density curves for a colour neg (for exposure with 3200K & 50R) and the spectral dye density of an E-6 film (for viewing at 5000K). The negative won't present a problem if exposed through separation filters (because you'll end up back at an effective 3200K + 50R), but it looks like it is going to wander off and produce some odd results if exposed with a continuous-spectrum illuminant with a lot of blue and not much red light in it - you're potentially going to exaggerate the differences, not help them. For obvious reasons, this is not a problem with a 3xCCD, PMT or non-CFA sensor (the latter if exposed via sequential RGB LEDs etc.).

There's no reason why a colour neg film couldn't be optimised for exposure using a 5000K illuminant & suitable filtration, but the dye densities would likely need to be adjusted. Continuous-spectrum illumination + filtration is theoretically not as good as exposing through tight bandpass RGB filters, but it is a whole lot easier for the average operator to do.

 

Adrian Bacon

Member
Joined
Oct 18, 2016
Messages
2,086
Location
Petaluma, CA.
Format
Multi Format
Clearly someone needs to design a sensor and processor and firmware combination that is designed to do two things well - extract information from a masked colour negative illuminated by a continuous spectrum source, and extract information from a colour transparency illuminated by a continuous spectrum source.
And then build it into something that will hold a variety of film formats flat, image them with very high resolution and excellent contrast, and permit easy and quick frame to frame movement.

The hardware part is reasonably straightforward using off-the-shelf parts, though I doubt anybody would want to actually pay for it; they would rather substitute their own camera/lens combination, which significantly increases the complexity of supporting the hardware on the software side. Just supporting reading a fraction of the raw file formats out there, and then handling each camera's specific response to the films with a very specific light source, is a tremendous amount of work.

I've gone through this exercise already going from the Canon 80D to the 90D in my own rig, and I can say it's not a trivial amount of effort on the software side - and that's with one camera manufacturer. I have many hundreds of hours in that transition, mostly because the raw file format changed between the two, only to discover that the 90D also has a slightly different response than the 80D, again forcing me to create another generic C-41 profile specifically for it and to manually tweak things whenever somebody sends in a film I've not seen/handled yet on the 90D.

And this is all while using control strips to keep my process in spec. Having to deal with C-41 that somebody developed at home, or that came from some other lab that isn't necessarily in spec, is a whole other ball of wax. I get already-processed film sent in for scanning all the time, and that always requires manually tweaking things for every single roll, even though I already have a profile for that emulsion. It's a lot of work.
 

MattKing

Moderator
Joined
Apr 24, 2005
Messages
52,227
Location
Delta, BC Canada
Format
Medium Format
I wonder if the film manufacturers could be persuaded to put control strip like exposures into the rebate of every film?
 

Adrian Bacon

Member
Joined
Oct 18, 2016
Messages
2,086
Location
Petaluma, CA.
Format
Multi Format
I wonder if the film manufacturers could be persuaded to put control strip like exposures into the rebate of every film?

That would be nice, but how would you handle the deviations? With the control strips I've used, they put in a slip of paper with deviations that you're supposed to combine with the reference strip readings.

What would be helpful would be a correctly exposed white-balance card shot with full-spectrum 5500K light, along with +2 and -2 stop exposures of a grey card, on every roll. If you could manage it in one frame, or in between the sprocket holes, that would be immensely useful. It would give the lab a very easy white-balance reference and a contrast reference. A simple reading of that would tell you volumes about how that film was processed and how to handle it. A full control strip would be great, but isn't necessary for the purposes of scanning.
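As a sketch of how a lab could use such reference patches (all readings invented): white-balance gains fall straight out of the neutral card, and comparing the density range across the 4-stop grey-card spread against what a nominal C-41 gamma of ~0.6 predicts gives a process-deviation number.

```python
import numpy as np

# Invented linear readings (after mask division) of the proposed patches.
wb_patch = np.array([0.52, 0.40, 0.31])   # the white-balance card
grey_minus2 = 0.11                        # green channel, -2 stop grey card
grey_plus2 = 0.78                         # green channel, +2 stop grey card

# White-balance gains: scale every channel to match green.
gains = wb_patch[1] / wb_patch

# Contrast check: measured density range over the 4-stop spread vs. the
# range a nominal C-41 gamma of ~0.6 would predict.
density_range = np.log10(grey_plus2 / grey_minus2)
expected_range = 4 * np.log10(2) * 0.6
deviation = density_range - expected_range   # how far the process drifted
```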
 