
Tri-Color Narrow Band RGB DSLR scanning

It would be relatively trivial for a programmer with an image-processing package to combine the R, G, and B channels of three images. I had Gemini write one real quick in Python.

from PIL import Image
import os

def combine_channels():
    # Define the input and output filenames
    r_path = 'red.tif'
    g_path = 'green.tif'
    b_path = 'blue.tif'
    output_path = 'rgb.tif'

    # Check that all source files exist to avoid errors
    for file in [r_path, g_path, b_path]:
        if not os.path.exists(file):
            print(f"Error: Could not find {file} in the current folder.")
            return

    try:
        # Open the source images and convert to grayscale (L mode)
        # just in case they aren't already single-channel
        r = Image.open(r_path).convert('L')
        g = Image.open(g_path).convert('L')
        b = Image.open(b_path).convert('L')

        # Merge the three grayscale planes into one RGB image
        rgb_image = Image.merge('RGB', (r, g, b))

        # Save the result
        rgb_image.save(output_path)
        print(f"Success! Saved combined image as {output_path}")

    except Exception as e:
        print(f"An error occurred: {e}")

if __name__ == "__main__":
    combine_channels()
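One caveat about the script above: Pillow's 'L' mode is 8-bit, so 16-bit TIFF channel scans would be truncated on conversion. A bit-depth-preserving merge can be sketched with NumPy; the arrays below are synthetic stand-ins for channel planes that would in practice be loaded with a 16-bit-aware reader such as tifffile:

```python
import numpy as np

# Synthetic stand-ins for 16-bit channel planes. In practice these would be
# loaded from red.tif / green.tif / blue.tif with a reader that preserves
# bit depth (e.g. tifffile.imread), not Pillow's 8-bit 'L' mode.
h, w = 4, 6
r = np.full((h, w), 40000, dtype=np.uint16)
g = np.full((h, w), 20000, dtype=np.uint16)
b = np.full((h, w), 10000, dtype=np.uint16)

# Stack the planes into an (h, w, 3) interleaved RGB array, depth preserved
rgb = np.stack([r, g, b], axis=-1)
```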
 
As far as I know, Grain2Pixel plugin has support for separate R G B scans.
 
The Big Scanlight is actually quite deep compared to other lights I have seen, roughly 5 cm. I believe it was designed this way specifically to eliminate poor light mixing. Jack said that it could maybe be caused by IR light reflecting off the lens or holder, but when taking these sample images I had the camera well away from the light source, such that any reflection would be negligible.

Maybe it is a faulty device. One other user on the NLP forum told me they saw a similar effect which was reduced by lifting the film plane farther from the surface of the light. I purchased some NR glass to elevate the holder a bit, but it has yet to arrive.

I have the same issue with my Big Scanlight. Jack told me it is likely caused by internal IR reflections in the lens as well. In actual practice this isn’t an issue unless the neg is extremely underexposed, but it still bothers me. Otherwise, this light is the biggest upgrade in color image quality I have made to my system. My Ektar scans actually look like slide film now.
 
> I have the same issue with my Big Scanlight. Jack told me it is likely caused by internal IR reflections in the lens as well. In actual practice this isn’t an issue unless the neg is extremely underexposed, but it still bothers me. Otherwise, this light is the biggest upgrade in color image quality I have made to my system. My Ektar scans actually look like slide film now.

I find that it is present even in thick negatives, albeit nearly unnoticeable until you apply a correction via FFC or a radial mask and compare the difference. I have found that a radial mask is effective enough at eliminating it without introducing major color shifts, but it requires some tweaking to balance correctly. It is faster and easier than batch FFC, particularly in Lightroom.
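The batch FFC mentioned here boils down to dividing each capture by a normalized blank frame of the bare light surface. A minimal NumPy sketch, assuming a float capture and blank frame of the same shape (the function name and epsilon are my own, not from any particular tool):

```python
import numpy as np

def flat_field_correct(img, flat, eps=1e-6):
    """Divide a capture by the normalized blank-light frame.

    img, flat: arrays of the same shape; flat is a shot of the bare
    light surface showing the vignette to be removed.
    """
    flat = flat.astype(np.float64)
    gain = flat / (flat.mean() + eps)           # ~1.0 at average brightness
    return img.astype(np.float64) / np.maximum(gain, eps)
```

A scene photographed through the same vignette as the blank frame comes out uniform after division, which is exactly what the radial mask approximates by hand.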

I got some NR glass but am only able to easily elevate the film plane around 0.5 cm, which does not reduce the vignette whatsoever. I need to figure out a way to elevate it further, but that may be more trouble than it's worth.

I have also noticed an increased prevalence of magenta in the shadows vs white light that needs to be corrected for. This can be quite frustrating and throw off the color balance of the image. In some cases I cannot fully eliminate it at all.
 
How are you creating the radial mask, and does it account for the color difference?

I have not had an issue with magenta shadows. Are you using NLP, or doing manual inversion in PS?

Also, what camera and lens are you using?
 
> How are you creating the radial mask, and does it account for the color difference?

I shot a blank frame of just the light surface and adjusted size, feathering, exposure and tint to compensate for the vignette as best as possible. I have found that adding the mask after NLP conversion looks a bit better.

> I have not had an issue with magenta shadows. Are you using NLP, or doing manual inversion in PS?

NLP for almost everything. I am going to try some other automatic conversion tools and compare; notably, Capture One just released built-in negative conversion.
> Also, what camera and lens are you using?
Nikon D810 w/ 60mm 2.8D micro
 
> I shot a blank frame of just the light surface and adjusted size, feathering, exposure and tint to compensate for the vignette as best as possible. I have found that adding the mask after NLP conversion looks a bit better.

> NLP for almost everything. I am going to try some other automatic conversion tools and compare; notably, Capture One just released built-in negative conversion.

> Nikon D810 w/ 60mm 2.8D micro

I find that NLP requires quite a bit of manual white balance with the RGB light compared to flash. It does easily snap into place for me, though.

Have you tried negating the orange mask using the RGB controls on the light and then manually inverting? This has worked well for me.
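Negating the mask with the light's RGB controls has a rough software analogue: divide by the film-base colour, then invert. A minimal sketch of that idea, not any particular tool's actual pipeline (the function and the values in the usage note are illustrative only):

```python
import numpy as np

def invert_negative(linear_rgb, base_rgb):
    """Divide out the film-base (orange-mask) colour, then invert.

    linear_rgb: float array in [0, 1], shape (h, w, 3)
    base_rgb:   sampled colour of the unexposed film base
    """
    base = np.asarray(base_rgb, dtype=np.float64)
    masked_off = linear_rgb / base               # film base now maps to ~1.0
    return 1.0 - np.clip(masked_off, 0.0, 1.0)   # simple linear positive
```

A pixel equal to the sampled base (unexposed film) inverts to black, and denser areas map toward white; real converters add curves and per-channel gamma on top of this.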
 
> I find that NLP requires quite a bit of manual white balance with the RGB light compared to flash. It does easily snap into place for me, though.

Yes, there seems to always be an excessive magenta cast that is usually easily corrected with a few clicks on the tint slider. However, the shadows sometimes remain problematic in my case.
> Have you tried negating the orange mask using the RGB controls on the light and then manually inverting? This has worked well for me.
I have not fiddled too much with balancing the individual luminance on each channel, but I do find that skipping white balancing the mask in LR gives a better result. Specifically reducing the tendency for over saturation of greens and reds.

What adjustments work for you to get the mask balanced? Does this also benefit the conversion via NLP? I found that NLP basically overrides the individual adjustments and gives me the same result unless the luminance levels of a certain channel are heavily reduced. Then things get wonky.
 
> Yes, there seems to always be an excessive magenta cast that is usually easily corrected with a few clicks on the tint slider. However, the shadows sometimes remain problematic in my case.

> I have not fiddled too much with balancing the individual luminance on each channel, but I do find that skipping white balancing the mask in LR gives a better result. Specifically reducing the tendency for over saturation of greens and reds.

> What adjustments work for you to get the mask balanced? Does this also benefit the conversion via NLP? I found that NLP basically overrides the individual adjustments and gives me the same result unless the luminance levels of a certain channel are heavily reduced. Then things get wonky.
It varies by film, but for Ektar, I reduce red a little to 190, leave green all the way up, and blue goes way down to 95. This gives me a nearly perfectly gray base. I can simply invert and the color needs very little tweaking.

You’re right, though - with NLP it doesn’t really matter. It makes a bigger difference with manual inversion.
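Tuning the per-channel brightness on the light, as described above, is equivalent to scaling channels in software so that a sampled film-base patch comes out neutral. A rough sketch of that equivalence (the function name and sample values are mine, not from NLP or the light's firmware):

```python
import numpy as np

def neutralize_base(img, base_rgb):
    """Scale channels so the sampled film-base colour becomes neutral gray."""
    base = np.asarray(base_rgb, dtype=np.float64)
    gains = base.mean() / base          # per-channel multipliers
    return img.astype(np.float64) * gains
```

Reducing red and blue on the light does optically what these gains do numerically, which is why a manual inversion benefits more than NLP, whose own balancing overrides modest channel tweaks.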
 
As I have been narrowband composite scanning my color negatives for the last three years, I now have an application in the works that focuses solely on discrete narrowband R,G,B composite scanning. It supports flat-field correction, custom print-film emulation, and batch processing, and is fully color managed. If anyone is interested in testing, please let me know. macOS and Windows versions are available...
[Screenshot of the application]
 
> As I have been narrowband composite scanning my color negatives for the last three years, I now have an application in the works that focuses solely on discrete narrowband R,G,B composite scanning. It supports flat-field correction, custom print-film emulation, and batch processing, and is fully color managed. If anyone is interested in testing, please let me know. macOS and Windows versions are available...

Nice to see some progress in this area. Are you doing anything to manage the saturation that seems to be a challenge with narrow band scanning? In my experience working with raw scans from Coolscans and Pakons the color separation tends to be too high and some colors can go a bit neon like. Strong reds being the most problematic. The same problem exists with camera scanning from what I have seen.
 
> Nice to see some progress in this area. Are you doing anything to manage the saturation that seems to be a challenge with narrow band scanning? In my experience working with raw scans from Coolscans and Pakons the color separation tends to be too high and some colors can go a bit neon like. Strong reds being the most problematic. The same problem exists with camera scanning from what I have seen.

As long as the raw conversion and the subsequent channel extraction and merge are done properly, the resulting composite should by default look like a log image in Rec.709 (sRGB). Only then is saturation implicitly increased, alongside contrast, through print-film emulation and custom S-curves if desired.

So three-shot discrete narrowband capture should not leave you fighting excess saturation. If anything, the opposite is to be expected.
 
> So three-shot discrete narrowband capture should not leave you fighting excess saturation. If anything, the opposite is to be expected.

This echoes my experience.
Problems with excessive saturation in pure colors suggest clipping or shouldering-off of one of the color channels somewhere in the imaging chain.
 
> As long as the raw conversion and the subsequent channel extraction and merge are done properly, the resulting composite should by default look like a log image in Rec.709 (sRGB). Only then is saturation implicitly increased, alongside contrast, through print-film emulation and custom S-curves if desired.

> So three-shot discrete narrowband capture should not leave you fighting excess saturation. If anything, the opposite is to be expected.
I think I explained this a bit unclearly, since it’s a tricky effect to pin down.

What I’m seeing isn’t just higher overall saturation (although narrowband RGB will definitely give a more saturated result than white light). It’s more that certain hues, especially reds and oranges, end up looking unnaturally intense compared to the rest of the image.

So instead of global saturation, it feels more like the channels are “too cleanly separated,” which makes some colors stand out in a way that doesn’t look natural.

The best explanation I’ve come across is from the Filmeon project:
https://github.com/helios1138/filmeon?tab=readme-ov-file#the-color

They describe how with narrowband RGB illumination, you’re measuring something very close to the actual dye densities (“analytical density”), which sounds ideal, but in practice can lead to overly pure separation between channels. That seems to match what I’m seeing, where certain colors (especially reds) become too dominant.

With broadband light, the channels are already mixed (imperfectly), so you don’t get that same effect, but then you’re dealing with a different kind of inaccuracy.

If you scroll down, there are examples showing the same image scanned under different light sources and how adjusting the channel mixing changes the result. It’s the only inversion software I’ve seen so far that explicitly tries to deal with this, and the results look pretty convincing.

Curious if this lines up with what others are seeing with narrowband setups. I don’t have a camera scanning setup, but it matches what I see when inverting Coolscan and Pakon raw files, which both use narrowband LEDs.
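For concreteness, the channel-mixing idea described on the Filmeon page can be sketched as a 3x3 matrix that bleeds a little of each channel into the others, softening overly pure separation. The coefficients below are invented for illustration and are not Filmeon's actual values:

```python
import numpy as np

# Each row sums to 1.0, so neutral grays pass through unchanged while
# pure primaries are pulled slightly toward the other two channels.
MIX = np.array([
    [0.90, 0.05, 0.05],
    [0.05, 0.90, 0.05],
    [0.05, 0.05, 0.90],
])

def mix_channels(rgb):
    """Apply the mixing matrix to an (h, w, 3) float image in [0, 1]."""
    return np.clip(rgb.astype(np.float64) @ MIX.T, 0.0, 1.0)
```

This desaturates "too cleanly separated" primaries (a pure red becomes slightly orange-leaning) without shifting the white point, which matches the effect described for the channel-mixing sliders on that page.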
 
@Fish soup: If I understood your linked article correctly, he is referring to narrowband but single, "white-appearing spectrum" capture with all three LED channels on at the same time. I find this approach rather troublesome with regard to getting color right, and I have indeed seen wonky, oversaturated colors like those described with that approach in the past. Discrete, sequential captures under red, green and blue light have, to me, the better prospect for color predictability. In this case we are treating our camera more like a monochrome densitometer, similar to how the Frontier and ArriScan work. Fewer variables in the equation benefit color fidelity and consistency.

I've processed discrete R,G,B triplets from many community members throughout the last few years with solid results. Not once have I experienced over-saturation in the resulting composite, regardless of which camera was used for digitization (and there were many).
 
> If you scroll down, there are examples showing the same image scanned under different light sources and how adjusting the channel mixing changes the result.

If we're talking about examples like this one:
[Example comparison image from the linked Filmeon page]

Notice that this illustrates (at least as shown on the website, in 8-bit mode) the problem I mentioned in e.g. the yellow flower. There's no contrast in it, in any of the channels. I expect that in the 16-bit file they were working with this problem doesn't exist. As to saturation, I'm not convinced (at all!) that there's a problem here. Yes, the other captures look more washed out.

In any case, what this mostly implies is that there are quite fundamental questions underlying such assessments of what is desirable, or even 'correct'. You'll notice that regardless of what approach you choose, there's always a 'signature' of the process + materials percolating into the end result. However, to a large extent this can be modified in digital post processing.

In the example you linked to, I'm not sure that we're seeing the same problem. I suspect that most of the captures except the 3-color one suffer from relatively washed-out colors. Not all that much of a problem since they can be reconstructed as desired. The yellow flower is a problem, and it's present in 2 out of 4 captures, with the other ones not affected. Again, I suspect this is a problem possibly/likely related to capture and posterization due to bit-smashing back into 8-bit space for web display.
 
@mwde

It’s a bit hard to decipher. I thought he was using both methods in the demo with multiple light sources, RGB LED (white iPad screen) and a 3-color scan (using iPad P3 1000-nit color patches, with a single sensor channel per exposure). But I might be interpreting that incorrectly.


I’m pretty sure my Coolscan works the way you describe for the Arri and the Frontier. It uses a mono sensor and flashes each scanline sequentially with RGB LEDs, and I can see the effect there. That said, I don’t doubt that the effect is absent with your setup.



 