An alternative to Negative Lab Pro and Lr has to exist (C-41 reversal and orange mask removal)?!


Tom Kershaw

Subscriber
Joined
Jun 5, 2004
Messages
4,974
Location
Norfolk, United Kingdom
Format
Multi Format
Whilst I'm definitely not a fan of the subscription model for software, at the same time I recognise that paying 10 quid a month for PS + Lightroom really isn't that big a deal. At those rates you are getting 5-6 years of use for the same one-off price it would have cost you to outright buy PS and LR separately.

This is true. However the computer I use with my film scanner is nearly 10 years old and I'm using 3-6 year old software on it. Everything works more or less, and I'd prefer to be in control of any upgrades or spec changes rather than being forced into it.
 

PhilBurton

Subscriber
Joined
Oct 20, 2018
Messages
467
Location
Western USA
Format
35mm
While I have found this thread very informative, I'm not sure how productive it is for the thread to evolve into a discussion of software pricing models. Disclosure: I am a Lightroom subscriber, but I resisted for almost a year after the announcement of the subscription-only pricing model.
 
OP

Helge

Member
Joined
Jun 27, 2018
Messages
3,938
Location
Denmark
Format
Medium Format
It’s simple human psychology.

Some things we are OK with, or even prefer, arriving as small incremental advances. One of the main reasons TV series are so popular now as a general means of entertainment is that you don't feel cornered into committing to a two-hour film. Yet you often end up watching much more than two hours, precisely because you can stop whenever you like.
It's the same reason conversation flows so much more easily in the doorway on your way home from an obligatory dinner party than during the hours spent at the table.

Other things we just want to get over with, like pulling off a bandaid, so we can forget about them and enjoy life again.
Subscriptions to fitness gyms and library books are negative examples of this: the constant pressure to use the potential you paid for, or the clock ticking in the back of your mind, takes the fun and motivation out of it.

That's also the way it is with software. Often it does some pretty mundane, seemingly simple and unsexy, but very useful stuff.
We understand intellectually that someone spent a lot of time making it, so we pay up.
But other parts of our brain think about what else the money could have bought, with its immediate social currency and enjoyment.
Having to pay a bit more every, say, three or five years is, I think, simply more compatible with how the human mind works.

Another thing is that everybody today wants a trunk permanently installed in your wallet, sucking away. It's become a thing.
It very quickly gets out of control.
It's just not a payment model that suits every intangible, perpetually consumed product.
 
Last edited:

pwadoc

Member
Joined
Mar 12, 2019
Messages
98
Location
Brooklyn
Format
Multi Format
I don't post much (or really at all) on this forum, but I've been absorbing a lot from the discussions on here, and I just wanted to pop in to say: Adrian, it would be awesome if you shared a basic implementation of your process on GitHub. Or really just some of the technical implementation details, for other programmers (like myself) to work from. My sense is that a lot of the value of the work you've done is not in the application code, but rather in the data you've collected on the various emulsions. That would take someone else a lot of time and effort to duplicate.

I also have another, unrelated question about something I've discovered as I've been taking apart some (broken) scanners. Every high-end scanner I've been able to poke around in (Coolscan, Noritsu, Frontier) uses RGB LEDs at very specific wavelengths. Motion picture film scanners and drum scanners use the same strategy, except they use dichroic mirrors and filters to separate the RGB wavelengths rather than LEDs. I've read some speculation that this is related to the spectral sensitivity of RA-4 paper, and is also a strategy for completely omitting the orange mask in the resulting scan, since the spectra the scanners capture omit the orange wavelengths entirely. From what I've seen, they all use a blue at around 440-460 nm, a green at around 525-540 nm, and a red at 640-660 nm. I've been playing around with some LED arrays in those wavelengths, and the results are... interesting. I wonder if this is an avenue worth exploring. Maybe instead of using software to correct a colorimetric approach to DSLR scanning, we can save time by using a densitometric approach, which is what I understand the RGB sampling method to be.
 

Adrian Bacon

Subscriber
Joined
Oct 18, 2016
Messages
2,086
Location
Petaluma, CA.
Format
Multi Format
I don't post much (or really at all) on this forum, but I've been absorbing a lot from the discussions on here, and I just wanted to pop in to say: Adrian, it would be awesome if you shared a basic implementation of your process on GitHub. Or really just some of the technical implementation details, for other programmers (like myself) to work from. My sense is that a lot of the value of the work you've done is not in the application code, but rather in the data you've collected on the various emulsions. That would take someone else a lot of time and effort to duplicate.

I also have another, unrelated question about something I've discovered as I've been taking apart some (broken) scanners. Every high-end scanner I've been able to poke around in (Coolscan, Noritsu, Frontier) uses RGB LEDs at very specific wavelengths. Motion picture film scanners and drum scanners use the same strategy, except they use dichroic mirrors and filters to separate the RGB wavelengths rather than LEDs. I've read some speculation that this is related to the spectral sensitivity of RA-4 paper, and is also a strategy for completely omitting the orange mask in the resulting scan, since the spectra the scanners capture omit the orange wavelengths entirely. From what I've seen, they all use a blue at around 440-460 nm, a green at around 525-540 nm, and a red at 640-660 nm. I've been playing around with some LED arrays in those wavelengths, and the results are... interesting. I wonder if this is an avenue worth exploring. Maybe instead of using software to correct a colorimetric approach to DSLR scanning, we can save time by using a densitometric approach, which is what I understand the RGB sampling method to be.

Thank you, I've been considering it.

re: scanner LEDs, that's actually more to do with conforming the scan to a color space than anything else. Whether the orange mask is there or not has more to do with the strength of each LED relative to the others. Using specific known wavelengths as the light source just makes it easier to do the raw scanner-sample RGB to XYZ transform, so that it can then be transformed to its destination color space (i.e. sRGB for JPEG). XYZ is the universal connection space that everything gets converted to before being transformed to a specific smaller color space. Even DSLRs have a raw RGB to XYZ matrix that is tuned to the specific CFA characteristics of that particular camera. The DNG spec lays it all out, including how it's used, and if you convert a raw file to DNG you can use exiftool to inspect the entire contents of all the metadata (including said matrix) in the file. It's very enlightening.
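As a minimal numpy sketch of the chain described here (raw RGB through the XYZ connection space to a destination space): the camera matrix below is a made-up stand-in, not any real camera's DNG ColorMatrix; the XYZ-to-sRGB matrix is the standard published one.

```python
import numpy as np

# Hypothetical camera-RGB -> XYZ matrix, of the kind a DNG stores in its
# ColorMatrix tags. Values here are invented for illustration only.
CAM_TO_XYZ = np.array([
    [0.41, 0.36, 0.18],
    [0.21, 0.72, 0.07],
    [0.02, 0.12, 0.95],
])

# Standard XYZ (D65) -> linear sRGB matrix.
XYZ_TO_SRGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

def camera_rgb_to_srgb(rgb):
    """Map a linear camera-RGB triplet through XYZ to linear sRGB."""
    xyz = CAM_TO_XYZ @ np.asarray(rgb, dtype=float)
    return XYZ_TO_SRGB @ xyz
```

On a real file, exiftool can dump the actual matrices, e.g. `exiftool -ColorMatrix1 -ColorMatrix2 file.dng`.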
 

pwadoc

Member
Joined
Mar 12, 2019
Messages
98
Location
Brooklyn
Format
Multi Format
Thank you, I've been considering it.

re: scanner LEDs, that's actually more to do with conforming the scan to a color space than anything else. Whether the orange mask is there or not has more to do with the strength of each LED relative to the others. Using specific known wavelengths as the light source just makes it easier to do the raw scanner-sample RGB to XYZ transform, so that it can then be transformed to its destination color space (i.e. sRGB for JPEG). XYZ is the universal connection space that everything gets converted to before being transformed to a specific smaller color space. Even DSLRs have a raw RGB to XYZ matrix that is tuned to the specific CFA characteristics of that particular camera. The DNG spec lays it all out, including how it's used, and if you convert a raw file to DNG you can use exiftool to inspect the entire contents of all the metadata (including said matrix) in the file. It's very enlightening.

Ah, interesting. I think some of this is actually starting to come together for me. I've been reading the Giorgianni and Madden book about digital color management, specifically their appendix regarding the orange mask in color negative film, and it makes a lot of what you've been outlining clear. I've read a lot about how the color coupler dyes vary inversely with the density of the magenta and cyan dyes, which leads some to conclude that in order to negate those dyes you need some sort of non-linear mask. But those coupler dyes are intended to cancel out the cross-talk from the unwanted absorption in the magenta and cyan layers, which means the overall effect of the coupler dyes plus the cross-talk is a linear amount of extra magenta-yellow (from the cyan layer) and yellow (from the magenta layer), which can be corrected by adjusting the gain of those channels relative to the yellow channel. Though I guess you do the reverse of this, adjusting the gain of the blue and green channels relative to the red channel in the inverted image.

I'm still trying to wrap my mind around what this means when I take density measurements in specific wavelengths of red, green and blue. What I've observed is that if I balance the RGB output of my light so all three channels are equivalent, or if I take three exposures, linearize the gamma on all three, and composite them in Photoshop, I get an image that looks like the negative with the orange mask removed. The border is very close to a neutral grey, and inverting the image gives me what is very close to a linear-gamma representation of the positive.
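The balancing described above can be sketched as a per-channel gain taken from the film-base border, after which the mask drops out and a simple density inversion yields a linear positive. A hypothetical numpy version (function and variable names are mine, not from any of the tools discussed):

```python
import numpy as np

def remove_mask_and_invert(neg, base_rgb):
    """Per-channel gain balance against the film-base colour, then invert.

    neg      -- linear-light RGB negative image, floats in (0, 1]
    base_rgb -- RGB of an unexposed border patch (i.e. the orange mask)
    """
    neg = np.asarray(neg, dtype=float)
    base = np.asarray(base_rgb, dtype=float)
    balanced = neg / base                            # border becomes neutral (1, 1, 1)
    positive = 1.0 / np.clip(balanced, 1e-6, None)   # invert transmittance (negate density)
    return positive / positive.max()                 # normalize back into [0, 1]
```

Applied to a frame whose border matches `base_rgb`, the border comes out as a uniform neutral, matching the neutral-grey border observed above.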

[edit], I've attached the diagram that helped crystallize how the color coupler dyes and channel crosstalk in color negative film work in hopes that it will help others.
 

Attachments

  • IMG_20200301_130313.png
    IMG_20200301_130313.png
    913 KB · Views: 214
Last edited:

Adrian Bacon

Subscriber
Joined
Oct 18, 2016
Messages
2,086
Location
Petaluma, CA.
Format
Multi Format
I'm still trying to wrap my mind around what this means when I take density measurements in specific wavelengths of red, green and blue. What I've observed is that if I balance the RGB output of my light so all three channels are equivalent, or if I take three exposures, linearize the gamma on all three, and composite them in Photoshop, I get an image that looks like the negative with the orange mask removed. The border is very close to a neutral grey, and inverting the image gives me what is very close to a linear-gamma representation of the positive.

you're a big chunk of the way there. If you're doing that with samples that are already in a color-managed environment (i.e. Photoshop or LR), then you *can* get pretty acceptable results, but you will be limited to the size of the color space you're working in and to whatever manipulations the tooling gives you. This is why I wrote some code to do my work. What it's doing isn't actually that technically sophisticated, and much of it is sample code from the internet that I used as a reference when coding my own implementation, but tools like Photoshop just don't provide a simple way to perform these operations in a batch against a collection of images you've just scanned.
 

pwadoc

Member
Joined
Mar 12, 2019
Messages
98
Location
Brooklyn
Format
Multi Format
you're a big chunk of the way there. If you're doing that with samples that are already in a color-managed environment (i.e. Photoshop or LR), then you *can* get pretty acceptable results, but you will be limited to the size of the color space you're working in and to whatever manipulations the tooling gives you. This is why I wrote some code to do my work. What it's doing isn't actually that technically sophisticated, and much of it is sample code from the internet that I used as a reference when coding my own implementation, but tools like Photoshop just don't provide a simple way to perform these operations in a batch against a collection of images you've just scanned.

That's good to hear! I feel like I'm on the verge of understanding how this all works. I found a few RAW libraries that offer Python wrappers (I usually prototype in Python before I jump into C if I can), and my sense is that all I really need is to be able to iterate over the RAW pixel data and read/write the RGB values to perform the color operations I need to do.
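For reference, the raw pixel data is a single-plane Bayer mosaic rather than RGB triplets (with rawpy you would get it from `rawpy.imread(path).raw_image`). A small numpy mock of addressing the per-channel sites, assuming an RGGB pattern (patterns and origins vary by camera):

```python
import numpy as np

# Mock 4x4 Bayer mosaic standing in for a raw file's sensor data.
mosaic = np.arange(16, dtype=np.uint16).reshape(4, 4)

# Slices selecting the colour sites of an RGGB pattern (an assumption).
red   = mosaic[0::2, 0::2]
green = np.concatenate([mosaic[0::2, 1::2].ravel(), mosaic[1::2, 0::2].ravel()])
blue  = mosaic[1::2, 1::2]

# Per-channel operations (e.g. gains for mask removal) act on those slices:
balanced = mosaic.astype(float)
balanced[0::2, 0::2] *= 1.2   # hypothetical red gain
balanced[1::2, 1::2] *= 0.8   # hypothetical blue gain
```

The same slicing works on a full-size mosaic, so the colour operations never require debayering first.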
 

Adrian Bacon

Subscriber
Joined
Oct 18, 2016
Messages
2,086
Location
Petaluma, CA.
Format
Multi Format
That's good to hear! I feel like I'm on the verge of understanding how this all works. I found a few RAW libraries that offer Python wrappers (I usually prototype in Python before I jump into C if I can), and my sense is that all I really need is to be able to iterate over the RAW pixel data and read/write the RGB values to perform the color operations I need to do.

In simplified terms, that's pretty much it. However, if you do it directly on the raw data, you'll take on the added complexity of transforming the raw pixels so that they conform to a color space. For my implementation I chose ProPhoto, but you can technically use any space, assuming it's large enough to contain everything.
 

pwadoc

Member
Joined
Mar 12, 2019
Messages
98
Location
Brooklyn
Format
Multi Format
In simplified terms, that's pretty much it. However, if you do it directly on the raw data, you'll take on the added complexity of transforming the raw pixels so that they conform to a color space. For my implementation I chose ProPhoto, but you can technically use any space, assuming it's large enough to contain everything.

Ah ok, that makes sense. For your method, you invert and linearize the color channels on the RAW data before debayering and before mapping to a color space.
 

Adrian Bacon

Subscriber
Joined
Oct 18, 2016
Messages
2,086
Location
Petaluma, CA.
Format
Multi Format
Ah ok, that makes sense. For your method, you invert and linearize the color channels on the RAW data before debayering and before mapping to a color space.

I don't debayer it. The pipeline is: linearize and invert the raw samples, white-balance middle grey, map to a color space, then write the samples out into a DNG as Bayered CFA data with the appropriate metadata so that Adobe Camera Raw/Lightroom does the right thing.

You can debayer it if you want, but then you're writing a full RGB triplet for each pixel, which makes the files a lot bigger.
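A rough numpy sketch of those steps, operating directly on the Bayer CFA data (the RGGB pattern and the black/white levels are assumptions of mine; writing the result back out as a CFA DNG with the right metadata is a separate step not shown here):

```python
import numpy as np

def invert_cfa(cfa, black, white, grey_gains):
    """Linearize, invert, and grey-balance Bayer CFA data (RGGB assumed).

    cfa        -- 2D mosaic of raw sensor counts
    black/white-- sensor black and white levels
    grey_gains -- (R, G, B) gains that make a middle-grey patch neutral
    """
    x = (cfa.astype(float) - black) / (white - black)   # linearize to [0, 1]
    x = 1.0 - np.clip(x, 0.0, 1.0)                      # invert the negative
    g = np.ones_like(x)
    g[0::2, 0::2] = grey_gains[0]   # R sites
    g[0::2, 1::2] = grey_gains[1]   # G sites
    g[1::2, 0::2] = grey_gains[1]   # G sites
    g[1::2, 1::2] = grey_gains[2]   # B sites
    return np.clip(x * g, 0.0, 1.0)
```

Because the data stays mosaiced throughout, the output is still one sample per pixel, which is what keeps the resulting files small.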
 

removedacct2

Member
Joined
May 26, 2018
Messages
366
I've been reading the Giorgianni and Madden book about digital color management, specifically their appendix regarding the orange mask in color negative film.

I have not read that book, but just a word about the orange mask and scanners: did you know that some people scan colour negatives AS slides, which gives a slightly blue-tinted positive? The idea is that in slide mode a scanner doesn't apply any correction but does a full as-is scan, which may retain more colour richness after you remove the blue tint. I scanned like this for a while, but I didn't notice much gain and found it not worth the time spent correcting the blue tint.

In practice I have now been doing a lot of testing: Negative Lab Pro vs ColorPerfect vs CNMY Photoshop scripts + Frontier/Noritsu LUTs vs my default until now, Gimp, with and without sampling of the orange mask. In the end sampling is not only unnecessary but sometimes not even simple, because the mask isn't always even (exposure in difficult situations: against the sun, strong lateral sun, flares, etc.). That's also why lists of emulsions aren't so useful; besides, manufacturers modify their recipes over time without always documenting it.
 
Last edited:

Marameo

[...] Motion picture film scanners and drum scanners use the same strategy, except they use dichroic mirrors and filters to separate the RGB wavelengths rather than LEDs.

I have always wondered whether motion picture film scanners just scan raw or also invert the image. I assume this is about the DI (digital intermediate) stage.
 

pwadoc

Member
Joined
Mar 12, 2019
Messages
98
Location
Brooklyn
Format
Multi Format
I have always wondered whether motion picture film scanners just scan raw or also invert the image. I assume this is about the DI (digital intermediate) stage.

The schematic of the motion picture scanner I studied ran the film on a reel and illuminated each frame with a halogen light. The light that passed through the film was split through a prism and then further filtered via dichroic mirrors, so you ended up with three narrow-band samples of the image in red, green and blue. Each was exposed onto a sensor which, unlike a digital camera's, has no Bayer filter, so it records a greyscale image of the filtered channel. Those greyscale images were then composited as channel masks in software to produce a scan of the negative.
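Compositing the three narrow-band greyscale channel images back into one RGB frame is then just a channel stack, e.g. in numpy (mock data standing in for the three scans):

```python
import numpy as np

# Three narrow-band greyscale exposures of the same frame; a monochrome
# sensor behind R/G/B separation yields exactly this kind of data.
r = np.full((2, 2), 0.9)
g = np.full((2, 2), 0.5)
b = np.full((2, 2), 0.2)

# Stack the planes along a third axis to get an (H, W, 3) RGB image.
rgb = np.dstack([r, g, b])
```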
 

removedacct2

Member
Joined
May 26, 2018
Messages
366
Look, I'm really asking a rather simple question at the outset, and might I add, being quite clear about it.
I'm not especially looking for cheap.
Lr just doesn't appear to be for me, and I don't want a monthly sub for it. Not for ten dollars, if that is even really possible, nor for more.
I'd gladly pay double the price of NLP for a good standalone orange-mask-removal-and-reversal program that does a good job.

I just hoped to be able to survey the landscape a little quicker with the help of you guys.

I don't really see why you bring in the whole format discussion and whether 35mm is high enough resolution or not for this or that (it is :smile:).
I’m scanning any and all formats.

There are well documented reasons why people are migrating from scanners of various kinds to macro/DSLR setups. Quality being one of them.

I bring this up because I mentioned in this thread, at the end of February 2019, that I was considering buying Negative Lab Pro, which I did in March, so I have been using it a lot for the last 9 months. Not exclusively, as I still use Gimp.
Before buying NLP I tried ColorPerfect and a couple of simpler procedures for Photoshop. In fact I was too hasty with ColorPerfect: I didn't read carefully through the docs scattered around the website (it lacks a well-structured tutorial in a single place), but I came back to it recently after I became fed up with NLP.

This relates to your question in all aspects: automation from negative to final positive, comparison of different solutions, licensing/pricing scheme, and software footprint on the operating system.
  • licensing/pricing: I too am against the Adobe subscription model. But in order to try NLP and ColorPerfect I had to run LR and PS anyway. I have been a Unix user forever, so I had to play with Wine emulation on Linux and FreeBSD, or run the Adobe stuff in a Win10 instance inside VirtualBox. Even those of us living on Unix are always exposed to Windows software and to users we have to communicate with, so over the years I have run many Windows programs at least once just to see what they do, which means grabbing either a trial or a repacked/cracked version. I had to try many LR versions; in the end LR CC6 (2015) installs and runs in Wine on Linux and FreeBSD (the latter requires a custom build of Wine, not the stock one from the packages). For Photoshop, CS6 is the version that works. There are a couple of glitches but nothing serious, and anyway the point is just to run the plugins, NLP and CP. If you keep things like this, you don't need a Windows license, just an LR6 or PS one, if you want to be legal. Remember, I speak from the amateur POV. That said, I do buy the commercial software that I use, so the goal isn't to be illegal but to be practical and efficient. But now, with Adobe's subscription scam, I don't know if it is still possible to pay a one-time license for the older versions.
  • footprint on the operating system: of course it's totally silly to pull in the overbloated Adobe machinery just to do negative-to-positive conversion ...
  • comparisons: besides Gimp, LR+NLP and PS+CP, I have played with a PS plugin called CNMY, with the recent one called Grain2Pixel, and with Filmlabapp for desktop
  • automation: NONE gives a fully satisfactory final positive; sometimes it's okay, but most of the time you need to check and tweak. It's worse if you mainly want to invert DSLR/mirrorless scans, because cropping may be needed unless your camera's shooting ratios can be adapted
Now, a real-case illustration:

Here is a 3600 dpi 48-bit scan of a 6x6 picture I took the other day with a Bronica S2; the film is Lomography 400. I scanned to DNG with the Epson V700, as per the Negative Lab Pro instructions, in order to process it with NLP too. For Gimp, the DNG can be converted to TIFF with dcraw, or Gimp simply calls Darktable/RawTherapee:

https://yadi.sk/d/bN4R8tgk04wVWg (~350Mb)

In Gimp a batch mode is available, and the workflow goes like this:
- do a 16-bit linear scan of the negative to TIFF in Vuescan
- File > Batch image manipulation, and set two procedures: gimp-drawable-invert and gimp-drawable-levels-stretch
- load all the negatives you want and run the manipulation
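For those without Gimp at hand, the two batch procedures amount to roughly this in numpy terms (a sketch of my own: the invert assumes values normalized to [0, 1], and the levels-stretch is a per-channel min/max normalization):

```python
import numpy as np

def invert_and_stretch(img):
    """Numpy equivalent of the two Gimp batch steps:
    gimp-drawable-invert, then gimp-drawable-levels-stretch
    (per-channel min/max normalization), for floats in [0, 1]."""
    inv = 1.0 - np.asarray(img, dtype=float)
    lo = inv.min(axis=(0, 1), keepdims=True)
    hi = inv.max(axis=(0, 1), keepdims=True)
    # Guard against flat channels (hi == lo) to avoid dividing by zero.
    return (inv - lo) / np.where(hi > lo, hi - lo, 1.0)
```

Note that a blind per-channel stretch is exactly what produces the blue cast shown below: it balances to the extremes of each channel rather than to the film base.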

from the linked negative the Gimp manipulation gives this:

raw0008-gimp_non_mod.jpg




now, I run the DNG through Negative Lab Pro and default settings give this:

raw0002-NLP-pos_non_mod.jpg




Both renderings are WRONG:
Gimp shows a strong blue cast. When the sky is cloudy, snow gets a slight blueish tone, but on the Gimp positive the blue is strong on the wood and the stone. And it's too luminous; the shot is blown.

Negative Lab Pro gets rid of the blue, but
1) too much: some of the timber there is very old, and old timber with worn-out tar turns grey, so some slight blueish tones should remain somewhere
2) the red of the bin to the right has become very dark
3) the sky and snow are a bit blown
But first of all, it's just too damn bright. When I took that picture it was quite dark: not only a thick grey snow sky, but it was also the end of the daylight; I metered the Lomography 400 at ISO 800 and could only use slow speeds. So the luminous white of the snow and the sky are wrong, and the dark brown of the tarred houses pops out too much.

This is common with NLP. Oh yes, it produces "nice" pictures. Throw at it a dark foggy winter day in Helsingør when Hamlet is out on the castle walls, and it becomes a day so bright and luminous that the ghost of Hamlet's father would never wander. Shakespeare out of business with NLP...

Now, this is the default, unmodified output of ColorPerfect. The luminosity is almost right, the sky shows the gradients of grey in the clouds, the snow has a realistic colour for the given sky, and footprints and paths are better seen in the snow; the tarred timber of the houses needs a bit of work on the brown and blueish tones, but it's much closer to where I was when I took the shot.
So to get a correct positive I will only need to work a little on the ColorPerfect rendering.

raw0008-CP_non_mod.jpg



I often got similarly wonky, harsh colours and luminosity/overexposure with NLP, so I went back and read the ColorPerfect docs closely; they hadn't convinced me last year, but I realised I had simply overlooked a couple of key points. Once I went through the whole doc I got it, and ColorPerfect became the way to go.

Still, I had to pull in Photoshop in order to run CP... well, in fact no: their docs have instructions for another graphics processor, PhotoLine. I didn't know it, but it has been in development since the late Atari ST days in the mid 90s, and after that for Win and Mac. PhotoLine is a one-time 59€ license (for multiple computers), with major version upgrades at 29€, though these aren't needed for ColorPerfect. PhotoLine runs PS plugins written to the full Adobe API. PhotoLine win64 is a 32 MB download! It is very compact and efficient, with none of the bloated Windows bells and whistles, and it runs flawlessly under Wine emulation (Linux and FreeBSD).

In the end I found the efficient, light, snappy solution: a one-time 59€ for PhotoLine and $67 for ColorPerfect.

By the way, PhotoLine has some nice colour, quality-tweaking and sharpening tools, and, like Gimp, a reasonable colour-inversion function. In PhotoLine the default output of this negative is:

raw0008-PL_non_mod.jpg
 
Last edited:

Alain Deloc

Member
Joined
Sep 16, 2018
Messages
123
Location
Bucharest
Format
Multi Format
You can also try Grain2Pixel

You will never get good inversions on frames like that. I am Grain2Pixel creator and I can tell you that there isn't a single frame algorithm capable to invert an image like you posted with amazing results. Simply put, because there isn't enough color information to make a good color balance. All color balance algorithms are relying on colors in order to set the gamma values on each channel and also the input and output levels/curves values on each channel. Without enough color, the color balance will be fooled and the results will start to be only an approximation.
One way of fixing that is to use a carefully crafted profile covering that combination of film + sensor + light. That implies shooting a frame of that stock with a colour chart at 5600 K, developing it in fresh C-41, scanning it, then manually creating a set of adjustments on a calibrated screen to match the colour chart to reality. Then your snowy cottage will invert just fine. This procedure is a little involved, it works only for film developed in identical conditions, and you will have to craft profiles for most of the films you use.

Another way is to use external colour information from within the same roll. Assuming your roll contains various shots with enough colour diversity, you can use information from other frames to colour-balance the snowy scene. This is a new feature I implemented in Grain2Pixel, currently in beta, called Roll Analysis. It takes a little longer because it first has to parse the entire roll and gather colour information from each frame, but the results are much richer and more stable than single-frame inversions. Once the roll analysis is done, it is reused for every frame on the roll. This method works only if the selected frames have enough colour information; if the roll consists of 36 exposures of the same scene (and that is a possible scenario), I would recommend manual conversion.
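For readers curious what a single-frame balance looks like in code, here is a deliberately naive sketch (my illustration only, not Grain2Pixel's actual algorithm): invert the negative, then stretch each channel between its own percentile endpoints. On a frame without colour diversity those per-channel endpoints are unreliable, which is exactly the failure mode described above.

```python
import numpy as np

def invert_single_frame(neg, low_pct=0.1, high_pct=99.9):
    """Naive single-frame inversion: invert a 0..1 float image, then
    stretch each channel between its own percentile black/white points.
    Low colour diversity makes these per-channel endpoints unreliable."""
    pos = 1.0 - neg
    out = np.empty_like(pos)
    for c in range(3):
        lo = np.percentile(pos[..., c], low_pct)
        hi = np.percentile(pos[..., c], high_pct)
        out[..., c] = np.clip((pos[..., c] - lo) / (hi - lo + 1e-8), 0.0, 1.0)
    return out

rng = np.random.default_rng(0)
frame = rng.random((64, 64, 3))   # stand-in for a colour-rich scan
positive = invert_single_frame(frame)
```

On a near-monochrome frame (a snow scene, say), `lo` and `hi` nearly coincide per channel and the stretch amplifies whatever colour cast is present, instead of neutralising it.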

I hope that helps,
Cheers!
 
Well, he thinks $9.99/month is a scam vs. $930 for a fixed license of both. $930 was approximately the previous asking price of PS and Lr together, which was good for 2-3 years before an update. At $9.99 per month it takes 7.75 years of usage to reach $930, again my conservative estimate of the boxed cost of both PS and LR. Adobe ain't no saint, but they did vastly expand the base of people who could legally use their products, vastly increase their revenue (this is a goal of for-profit companies), and eliminate piracy, which was widespread at the time, overnight. While it's true that lots of things have shifted to subscription these days, the Adobe move was good for consumers, good for Adobe, and actually honest about what a software license really is. What a scam! /sarcasm
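The break-even arithmetic above checks out with the quoted figures ($930 boxed, $9.99/month, both the poster's own estimates):

```python
# Break-even between a one-off license and the subscription,
# using the figures quoted above (both are estimates).
boxed_price = 930.0      # estimated one-off cost of PS + LR
monthly = 9.99           # subscription price quoted above
months = boxed_price / monthly
years = months / 12      # roughly 7.75 years to break even
```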

I’ve been an NLP user for a while, but I’ve recently been tipped off about Neg Master, which looks promising as well. I’m going to try it when I have the time.
 

warden

You can also try Grain2Pixel


You will never get good inversions on frames like that. I am Grain2Pixel creator and I can tell you that there isn't a single frame algorithm capable to invert an image like you posted with amazing results. [...]

That was fascinating, thanks!
 

warden

Adobe ain’t no saint but they did vastly expand the base of people who could legally use their products [...] the Adobe move was good for consumers, good for Adobe, and actually honest about what a software license really is.
The subscription model makes a lot of sense for me for the reasons you have cited in this thread. Recently I gifted one of my teenage sons a subscription to the full suite of Adobe products and he is absolutely loving them and learning so much. He is well and truly dug in.
 

pwadoc

The gamma-per-channel part doesn't sound right. Somehow RA4 paper manages to do just fine without adjusting its contrast for each layer for every shot.

RA4 paper doesn't do any analysis or contrast adjustment. Think of each step in the process as a color space transformation. The RA4 printing process is designed around the color spaces of RA4 paper and color negative film: you're transforming from one set of CMY dyes to another set of CMY dyes. No digital sensor ever designed or built (to my knowledge) operates in a color space that is even close to the color spaces involved in RA4 printing. The primaries (CMY for printing, RGB for digital) are entirely dissimilar, and to get it even remotely close you need reference points that tell you which RGB coordinate in the digital space a given CMY coordinate in the negative color space corresponds to. The less color data you have, the less accurate that transformation will be.
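To make the "reference points between primaries" idea concrete, here is a familiar example of such a mapping: the standard linear-sRGB to CIE XYZ (D65) matrix, a fixed, measured transform between two well-characterised spaces. The negative-to-positive case needs an analogous, film-and-sensor-specific mapping, and that is exactly the reference data a bare scan doesn't carry.

```python
import numpy as np

# Standard linear-sRGB -> CIE XYZ (D65) matrix: a published, measured
# mapping between two sets of primaries. No such published matrix
# exists for an arbitrary camera sensor vs. a film's CMY dye set.
SRGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

def srgb_linear_to_xyz(rgb):
    """Apply the 3x3 color space transform to a linear RGB triple."""
    return np.asarray(rgb, dtype=float) @ SRGB_TO_XYZ.T

white = srgb_linear_to_xyz([1.0, 1.0, 1.0])  # lands on the D65 white point
```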

The specific problem you run into is that the orange mask in the film does not have an even density: it varies inversely with the amount of cyan and magenta dye in the film. What this means in practical terms is that, if you white balance for the mid-tones, your shadows and highlights will have either too much or too little red and green, so the inverted colors can't all be correct at once. That's why you need a gamma curve per channel. To do this optimally in a digital space, you would need a stepped grey scale from black to white, which would allow you to take multiple white balance measurements. This is part of what NLP does: you can look at the curves it applies and literally see that it adds multiple points along the curve where it is calculating a white balance. Very few photos offer a perfect stepped grey scale, so the color analysis can get pretty tricky. I'd actually love to hear more about this from Alain if he'd be willing to comment on how it all works.
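A minimal sketch of the per-channel-gamma idea (my illustration, not NLP's actual algorithm): given one measured grey reference per channel, choose a gamma for each channel so the reference lands on mid-grey. Real tools fit curves through several such points along a stepped scale rather than one.

```python
import numpy as np

def per_channel_gamma(img, grey_ref, target=0.5):
    """Choose gamma g per channel so grey_ref ** g == target,
    then apply it to a whole 0..1 float image."""
    grey_ref = np.clip(np.asarray(grey_ref, dtype=float), 1e-6, 1 - 1e-6)
    g = np.log(target) / np.log(grey_ref)   # one gamma per channel
    return np.clip(img, 0.0, 1.0) ** g

# A grey reference that scanned slightly warm is pulled back to neutral:
balanced = per_channel_gamma(np.array([[0.55, 0.50, 0.45]]),
                             [0.55, 0.50, 0.45])
```

A gain alone could neutralise that one grey point too, but only the gamma form keeps greys neutral across the tonal scale when the cast varies with density, as it does under the orange mask.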

If you wanted to mimic what RA4 paper is doing, you would use the approach that most dedicated film scanners use: take three greyscale images with colored lights so that you can capture the information in one dye layer at a time, and composite them back together later. If you use that method, then adjusting for the color mask only requires a gain adjustment to the red and green channels, rather than a gamma adjustment.
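By contrast, with narrowband trichrome captures the mask contributes (to first order) a constant density per channel, so the correction reduces to a pure gain. A sketch, assuming 0..1 linear data and a `mask_rgb` sampled from unexposed film rebate (both my assumptions for illustration):

```python
import numpy as np

def remove_mask_gain(neg_linear, mask_rgb):
    """Divide each channel by the mask colour sampled from clear film
    rebate: a pure per-channel gain, no gamma needed in this regime."""
    return neg_linear / np.asarray(mask_rgb, dtype=float)

# Unexposed rebate normalises to neutral white after the gain:
rebate = np.array([[0.80, 0.55, 0.30]])       # the orange cast
neutral = remove_mask_gain(rebate, [0.80, 0.55, 0.30])
```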

Unrelated note, but NLP also has a mode that allows you to invert an entire roll based on the analysis of a single frame. Also, RawTherapee has added a negative inversion tool that I've heard good things about, though I haven't had a chance to try it because the current version has some terrible performance problems on Mac related to an issue in a common library it uses to draw the interface.
 

PhilBurton

Well, he thinks $9.99/month is a scam vs $930 for a fixed license of both. [...] the Adobe move was good for consumers, good for Adobe, and actually honest about what a software license really is. What a scam! /sarcasm
What he said.
 

removedacct2

But NLP gives you choice between different rendering modes. Which one you're using here? You can select "Lab Soft" (similar to Colorperfect example here) or even Flat / Linear, which gives you the most neutral look which you can fine-tune later.

yes, but two points:

1) What are the assumptions behind the renderings offered, based on what logic? Inversion should be computed relative to the colorimetric intrinsics of the picture; try to be hi-fi. Interpretations are more for B&W negatives.
2) Very often, when the default LAB Standard differs much from reality, the variants are not much better. Not always; many times NLP renders an OK positive, but many other times I found it wonky relative to the default computed by ColorPerfect.

ColorPerfect doesn't offer a menu of possible white balances, undocumented pre-established renderings and scanner looks (Noritsu, Frontier) like NLP; it produces ONE rendition based on the film emulsion, and offers a set of colorimetric parameters down to fine-tuning by zones.

Anyway, since you mention the other renderings of NLP, here are, for the negative posted earlier, left to right: NLP LAB Soft, Linear and Flat. They are still not there in parts of the snow and with the red bin on the right. "Flat" in this case would indeed be a good base for a few tweaks, but CP (ColorPerfect) provides a better one when I view the image at higher resolution.

Maihaugen-NLP-_LABsoft_Linear_Flat.jpg




I spend four or five months surrounded by snow, and from November to February the days are short, roughly 11:00-14:30 to 10:00-16:00, and shorter the further north I go for a weekend. The sun is also low on the horizon, with very long twilight; it doesn't illuminate from overhead but from the side, which means that with a clear blue sky it easily gets in your way. This can be a PITA, or it can be taken advantage of.

I bought NLP at the end of March, and I'm ditching it now, not only because of recent pictures, but also because I reprocessed a bunch of pictures taken before I bought it, which I had processed by hand in Gimp.

Here are 3 scans in DNG format as per NLP requirements, Epson V700 @ 2600 dpi, 48-bit, ~140 MB each; the film is Ektar 100:
https://yadi.sk/i/uBvUIQnfupdYnw
https://yadi.sk/i/cz4O4IQEklWi4g
https://yadi.sk/i/gTaL7ZLQLuLwNw

Now the positives: obtained with ColorPerfect set to Kodak Ektar 100 on the left, and with the NLP default (LAB Standard) with "WB setting" set to "Kodak" on the right.

1st negative:
The best match is on the left (CP). The blue must be corrected (blue added to restore the deep blue sky), but otherwise the illumination of the square, the dark golden reflection of the dome on the left, and the light/shadow and darkened tone of the paint on the lower building are right, BECAUSE I took the shot late in the evening (well, it was ~14:00, but the sun was going down). All of that is missing from the NLP positive on the right.

Vologda_1.jpg



2nd negative:
See the evening light cast by the setting sun? It needs some correction, but that's what I was seeing when I took the shot. Again, on the right it's just the wrong season, wrong exposure, wrong luminosity:

Vologda_2.jpg



3rd negative:
Again the yellow/blue and cyan/red balances must be corrected, and the sky, as in the previous shots, is messed up, but the fix is trivial. You can see what it is: the sun is on the horizon behind the kremlin, casting light on the towers as it gets dark. It's golden hour.
With NLP the golden hour is gone.

Vologda_3.jpg
 

pwadoc

I felt this "colorspace" talk sounded familiar, I think we had a discussion about this recently. Ran a search and found it. @pwadoc, if you are referring to that blog post, ignore it. The author doesn't know what he's talking about.

I know about that blog post because I wrote it :D. If I didn't convince you there, then I'm guessing there's nothing else I can say that will clarify. I'm not sure I really follow your arguments. Do you think color spaces are a concept I just invented? I can assure you they are not. Anyway, if your method produces results that you're happy with, I can't argue with that, but there is a reason why all of these negative inversion applications exist, and it's not because anyone is trying to sell you a bill of goods.
 

pwadoc

yes, but two points:

1) what are the assumptions for the renderings offered, upon what logic? Because inversion should be computed relatively to the colorometric intrinsics of the picture. Try to be hi-fi. Interpretations are more for BW negatives.
2) very often when the default LAB Standard differs much from reality, the variants are not much better. Not always, many times NLP render an ok positive, but many other times I found it wonky relatively to the default computed by ColorPerfect.

I apologize if you've mentioned this before, but what light source are you using to scan these? Some of the issues you're experiencing look similar to problems I encountered that turned out to be caused by my light having gaps at R9 and R12 while still scoring a high overall CRI.
 

grat

Well, he thinks $9.99/month is a scam vs $930 for a fixed license of both. [...] the Adobe move was good for consumers, good for Adobe, and actually honest about what a software license really is. What a scam! /sarcasm

I have a copy of Photoshop CS2 from... a really long time ago (2005), which I paid around $150 for (a discount through some organization or a special offer, I don't remember).

I can still use it. I've got the zip file somewhere that I made from the install media, I've got the activation key, and the last time I tried (admittedly, some time ago), it ran just fine. I might even have a Lightroom key from around the same time, because I was legally given a license for an early version of Lightroom. I might even have been an early-release user, I don't recall.

$930 is too much to pay for both when I can get a comparable editing tool for $50 (the current price of Affinity Photo) and a free copy of Darktable. Paying rental is a good deal (especially for Adobe, because they're no longer reliant on irregular one-off purchases for their revenue stream) right up until Adobe decides to change their licensing model again (they've done it multiple times already), and then I've got all these .PSD documents which might work in another piece of software, or might not.

I've seen way too many software and hardware projects that relied on "Da Cloud" (or its predecessor) get shut down and turn into useless bits of paper or hardware.

Rental is fine if you don't care about being able to access your work next month.
 