Whilst I'm definitely not a fan of the subscription model for software, at the same time I recognise that paying 10 quid a month for PS + Lightroom really isn't that big a deal. At those rates you are getting 5-6 years of use for the same one-off price it would have cost you to outright buy PS and LR separately.
I don't post much (or really at all) on this forum, but I've been absorbing a lot from the discussions on here, and I just wanted to pop in to say: Adrian, it would be awesome if you shared a basic implementation of your process on GitHub. Or really just some of the technical implementation details, for other programmers (like myself) to work from. My sense is that a lot of the value of the work you've done is not in the application code, but rather the data you've collected on the various emulsions. That would take someone else a lot of time and effort to duplicate.
I also have another, unrelated question about something I've discovered as I've been taking apart some (broken) scanners. Every high-end scanner I've been able to poke around in (Coolscan, Noritsu, Frontier) uses RGB LEDs at very specific wavelengths. Motion picture film scanners and drum scanners use the same strategy, except they use dichroic mirrors and filters to separate the RGB wavelengths rather than LEDs. I've read some speculation that this is related to the spectral sensitivity of RA4 paper, and is also a strategy for completely omitting the orange mask in the resulting scan, since the spectra the scanners capture omit the orange wavelengths entirely. From what I've seen, they all use a blue at around 440-460nm, a green at around 525-540nm, and a red from 640-660nm. I've been playing around with some LED arrays in those wavelengths, and the results are... interesting. I wonder if this is an avenue worth exploring. Maybe instead of using software to correct a colorimetric approach to DSLR scanning, we can save time by using a densitometric approach, which is what I understand the RGB sampling method to be.
thank you, I’ve been considering it.
re: scanner LEDs, that’s actually more to do with conforming it to a color space than anything else. Whether the orange mask is there or not has more to do with the strength of each LED relative to the others. Using specific known wavelengths as the light source just makes it easier to do the raw scanner sample RGB to XYZ transform so that it can then be transformed to its destination color space (i.e. sRGB if JPEG). XYZ is the universal connection space that everything gets converted to before being transformed to a specific smaller color space. Even DSLRs have a raw RGB to XYZ matrix that is tuned to the CFA characteristics of that particular camera. The DNG spec lists it all out and how it’s used, and if you convert a raw file to DNG, you can use exiftool to inspect the entire contents of all the metadata (including said matrix) in the file. It’s very enlightening.
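To make the matrix chain concrete, here's a rough numpy sketch of the idea. The camera matrix values below are made up for illustration only; note that a real DNG's ColorMatrix tags actually store the opposite direction (XYZ to camera), so you'd invert them first.

```python
# Minimal sketch of the chain described above: camera raw RGB -> XYZ -> sRGB.
# CAM_TO_XYZ is a made-up illustrative matrix, not from any real camera.
import numpy as np

CAM_TO_XYZ = np.array([          # hypothetical per-camera matrix
    [0.41, 0.36, 0.18],
    [0.21, 0.72, 0.07],
    [0.02, 0.12, 0.95],
])

XYZ_TO_SRGB = np.array([         # standard XYZ -> linear sRGB (D65) matrix
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

def raw_to_srgb(raw_rgb):
    """raw_rgb: (..., 3) linear, white-balanced camera RGB in 0..1."""
    xyz = raw_rgb @ CAM_TO_XYZ.T
    srgb_linear = np.clip(xyz @ XYZ_TO_SRGB.T, 0, 1)
    return srgb_linear ** (1 / 2.2)   # crude gamma; real sRGB uses a piecewise curve
```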
I'm still trying to wrap my mind around what this means when I take a density measurement in specific wavelengths of red, green and blue, because here's what I've observed: if I balance out the RGB output of my light so that the channels are all equivalent, or if I take three exposures, linearize the gamma on all three, and then composite them in Photoshop, I get an image that looks like the negative with the orange mask removed. The border is very close to a neutral grey, and inverting the image gives me something very close to a linear gamma representation of the positive.
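For what it's worth, here's the same balancing idea expressed numerically, as a toy sketch (assuming you already have a linear RGB array and a sampled crop of the unexposed film border):

```python
# Toy sketch: normalize each linear channel by the film-base (border) value so the
# mask becomes neutral, then do a straight invert like Photoshop's Invert command.
import numpy as np

def neutralize_and_invert(linear_rgb, border_patch):
    """linear_rgb: (H, W, 3) linear-gamma scan of the negative, values in 0..1.
    border_patch: (h, w, 3) crop of the unexposed border from the same scan."""
    base = border_patch.reshape(-1, 3).mean(axis=0)   # per-channel mask colour
    balanced = np.clip(linear_rgb / base, 0, 1)       # border becomes ~neutral grey
    positive = 1.0 - balanced                         # straight invert
    return positive
```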
you’re a big chunk of the way there. If you’re doing that with samples that are already in a color managed environment (i.e. Photoshop or LR), then you *can* get pretty acceptable results, but you will be limited to the size of the color space you’re working in, and to whatever manipulations the tooling gives you to work with. This is why I wrote some code to do my work. What it’s doing isn’t actually that technically sophisticated, and much of it is based on sample code from the internet that I used as a reference when coding my own implementation, but tools like Photoshop just don’t provide a simple way to perform those operations in a batch against a collection of images you’ve just scanned.
That's good to hear! I feel like I'm on the verge of understanding how this all works. I found a few RAW libraries that offer Python wrappers (I usually prototype in Python before I jump into C if I can), and my sense is that all I really need is to be able to iterate over the RAW pixel data and read/write the RGB values to perform the color operations I need to do.
In simplified terms, that’s pretty much it; however, if you do it directly on the raw data, you’ll have to handle the added complexity of transforming the raw pixels so that they conform to a color space. For my implementation I chose ProPhoto, but you can technically use any space, assuming it’s large enough to contain everything.
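If it helps, a minimal sketch of just that "demosaic into a large linear space" step, using rawpy (one of the Python RAW wrappers mentioned above), might look like this. This is only an illustration of the idea, not my actual code, and the file name is a placeholder:

```python
# Sketch: demosaic a camera scan into linear 16-bit ProPhoto with rawpy, then hand
# the array to whatever inversion/balancing code you like.
import rawpy
import numpy as np

with rawpy.imread("negative_0001.dng") as raw:        # placeholder file name
    rgb = raw.postprocess(
        output_color=rawpy.ColorSpace.ProPhoto,       # large working space, as discussed
        gamma=(1, 1),                                  # keep the output linear
        no_auto_bright=True,                           # don't let libraw rescale exposure
        output_bps=16,
    )

linear = rgb.astype(np.float64) / 65535.0              # (H, W, 3), linear ProPhoto in 0..1
```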
Ah ok, that makes sense. For your method, you invert and linearize the color channels on the RAW data before debayering and before mapping to a color space.
I've been reading the Giorgianni and Madden book about digital color management, specifically their appendix regarding the orange mask in color negative film.
[...] Motion picture film scanners and drum scanners use the same strategy, except they use dichroic mirrors and filters to separate the RGB wavelengths rather than LEDs.
I have always wondered whether motion picture film scanners just do a raw scan or also invert the image. I assume this is about the DI (digital intermediate) stage.
Look, I’m really asking a rather simple question at the outset, and might I add, being quite clear about it.
I’m not looking for cheap especially.
Lr just doesn’t appear to be for me, and I don’t want a monthly sub on it. Not for ten dollars, if that is really even possible, or for more.
I’d gladly pay double the price of NLP for a good standalone orange-mask-removal and reversal program that does a good job.
I just hoped to be able to survey the landscape a little quicker with the help of you guys.
I don’t really see why you bring in the whole format discussion and whether 35mm is high enough resolution or not for this or that (it is).
I’m scanning any and all formats.
There are well documented reasons why people are migrating from scanners of various kinds to macro/DSLR setups. Quality being one of them.
I bring this up because I mentioned in this thread, around the end of February 2019, that I was considering buying Negative Lab Pro, which I did in March, so I have been using it a lot over the last 9 months. Not exclusively, as I still use Gimp.
Before buying NLP I did try ColorPerfect and a couple of simpler procedures for Photoshop. In fact I was too hasty with ColorPerfect: I didn't read carefully through the docs scattered around the website (it lacks a well-structured tutorial in a single place), but I came back to it recently after I became fed up with NLP.
this relates to your question in all aspects: automation from negative to final positive, comparison of the different solutions, the licensing/pricing scheme, and the software footprint on the operating system
now, a real case illustration:
- licensing/pricing: I too am against the Adobe subscription model. But in order to try NLP and ColorPerfect I had to run LR and PS anyway. The point is that I have been a Unix user forever, so I had to play with Wine emulation on Linux and FreeBSD, or run the Adobe stuff in Win10 inside a VirtualBox instance. Even if we live in Unix we are always exposed to Windows software and users we have to communicate with, so over the years I have run many Windows programs at least once just to see what they do, which means either grabbing a trial or a repacked/cracked version. I had to try many LR versions, and finally LR CC6 (2015) installs and runs in Wine on Linux and FreeBSD (there it requires a custom build of Wine, not the stock one from the packages). As for Photoshop, CS6 is the one that works. There are a couple of glitches but nothing serious, and anyway the point is just to run the plugins, NLP and CP. If you keep things like this, you don't need a Windows license, just an LR6 or PS one, if you want to be legal. Remember I speak from the amateur point of view... That said, I do buy the commercial software that I use, so the goal isn't to be illegal but to be practical and efficient. But now, with Adobe's subscription scam, I don't know if it is still possible to pay a one-time license for the older versions.
- footprint of the software on the operating system: of course it's totally silly to pull in the bloated Adobe machinery just to do negative-to-positive conversion ...
- comparisons: besides Gimp, LR+NLP and PS+CP, I have played with a PS plugin called CNMY, with the recent one called Grain2Pixel, and with Filmlabapp for desktop
- automation: NONE gives a fully satisfactory final positive; sometimes it's okay but most of the time you need to check and tweak. It's worse if you mainly want to invert DSLR/mirrorless scans, because cropping may be needed unless your camera's shooting ratios can be adapted
here is a 3600 dpi 48-bit scan of a 6x6 picture I took the other day with a Bronica S2; the film is Lomography 400. I scanned to DNG with the Epson V700 as per the Negative Lab Pro instructions, in order to process it with NLP too. For Gimp, the DNG can be converted to TIFF with dcraw, or Gimp just calls Darktable/RawTherapee:
https://yadi.sk/d/bN4R8tgk04wVWg (~350Mb)
in Gimp a batch mode is available and the workflow goes like this (a rough Python-Fu sketch of the same steps follows the list):
- do a 16-bit linear scan in TIFF of the negative in Vuescan
- File > Batch image manipulation and set two procedures: gimp-drawable-invert and gimp-drawable-levels-stretch.
- load all the negatives you want and run the manipulation
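if you prefer scripting over the BIMP dialog, a rough Python-Fu equivalent of those two steps (GIMP 2.10; the paths below are placeholders) would be something like:

```python
# Rough Python-Fu sketch of the same two-step batch: invert each linear TIFF,
# stretch its levels, and save a positive. File paths are placeholders.
import glob
from gimp import pdb

for path in glob.glob("/scans/*.tif"):
    image = pdb.gimp_file_load(path, path)
    drawable = pdb.gimp_image_get_active_drawable(image)
    pdb.gimp_drawable_invert(drawable, False)          # gimp-drawable-invert
    pdb.gimp_drawable_levels_stretch(drawable)         # gimp-drawable-levels-stretch
    out = path.replace(".tif", "_positive.tif")
    pdb.gimp_image_flatten(image)
    pdb.file_tiff_save(image, pdb.gimp_image_get_active_drawable(image), out, out, 0)
    pdb.gimp_image_delete(image)
```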
from the linked negative the Gimp manipulation gives this:
View attachment 263699
now, I run the DNG through Negative Lab Pro and default settings give this:
View attachment 263700
both renderings are WRONG:
Gimp shows a strong blue cast. When the sky is cloudy, snow gets a slightly bluish tone, but on the Gimp positive the blue is strong on the wood and the stone. And it's too luminous; the shot is blown.
Negative Lab Pro gets rid of the blue, but
1) too much: some of the timber there is very old, and old timber with worn-out tar turns grey, so some slightly bluish tones should still be there
2) the red of the bin to the right has become very dark
3) the sky and snow are a bit blown
but above all, it's just too damn bright. When I took that picture it was quite dark: not only a thick grey snow sky, but also the end of the daylight. I metered the Lomography 400 at ISO 800 and still had to use slow speeds. So the luminous white of the snow and the sky there is wrong, and the dark brown of the tarred houses pops out too much.
this is common with NLP. Oh yes it produces "nice" pictures. Throw at it a dark foggy winter day in Helsingør when Hamlet is out on the castle walls, and it becomes a bright luminous day where the ghost of Hamlet's father will not wander. Shakespeare out of business with NLP...
now, this is the default, unmodified output of ColorPerfect. The luminosity is about right, the sky shows the gradations of grey in the clouds, the snow has a realistic colour for the given sky, footprints and paths are better seen in the snow, and the tarred timber of the houses needs a bit of work on the brown and bluish tones, but it's much closer to where I was when I took the shot.
So in order to get a correct positive I only need to work a little on the ColorPerfect rendering.
View attachment 263701
I often get similarly wonky, harsh colours and luminosity/overexposure with NLP, so I went back and read the ColorPerfect docs closely (they didn't convince me last year) and realised I had just overlooked/missed a couple of key points. Once I went through the whole doc I got it, and ColorPerfect became the way to go.
Still, I had to pull in Photoshop in order to run CP... well, in fact no: their docs have instructions for another graphics editor, PhotoLine. I didn't know it, but the developer has been working on it since the last Atari ST days in the mid 90's, and after that for Windows and Mac. PhotoLine is a one-time 59€ license (for several computers), with major version upgrades at 29€, though those aren't needed just for ColorPerfect. PhotoLine runs PS plugins written against the full Adobe API. The PhotoLine win64 build is a 32 MB download! It is very compact and efficient, with no bloated Windows bells and whistles, and it runs flawlessly under Wine emulation (Linux and FreeBSD).
in the end I found an efficient, light, snappy solution: a one-time 59€ for PhotoLine and $67 for ColorPerfect.
btw, PhotoLine has some nice colour, quality-tweaking and sharpening tools, and also, like Gimp, a reasonable colour inversion function. In PhotoLine the default output of this negative is:
View attachment 263706
You can also try Grain2Pixel
You will never get good inversions on frames like that. I am the creator of Grain2Pixel, and I can tell you that there isn't a single-frame algorithm capable of inverting an image like the one you posted with amazing results. Simply put, there isn't enough color information to make a good color balance. All color balance algorithms rely on the colors present in the frame in order to set the gamma values on each channel, and also the input and output levels/curve values on each channel. Without enough color, the color balance will be fooled and the result will only be an approximation.
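To illustrate what "relying on colors" means, here is a bare-bones single-frame balance of the generic kind (a simple percentile auto-levels for illustration only, not Grain2Pixel's actual algorithm). The black and white points come from the frame's own histograms, so a frame that is almost all snow and grey sky gives the algorithm nothing to anchor to:

```python
# Generic single-frame inversion + per-channel auto-levels (NOT Grain2Pixel's code).
# Per-channel black/white points come from the frame's own histograms, which is why
# a low-colour-diversity frame (snow, fog) throws the balance off.
import numpy as np

def invert_and_autobalance(neg, low_pct=0.5, high_pct=99.5):
    """neg: (H, W, 3) linear scan of the negative, 0..1 floats."""
    pos = 1.0 - neg
    out = np.empty_like(pos)
    for c in range(3):
        lo = np.percentile(pos[..., c], low_pct)    # per-channel black point
        hi = np.percentile(pos[..., c], high_pct)   # per-channel white point
        out[..., c] = np.clip((pos[..., c] - lo) / (hi - lo + 1e-8), 0, 1)
    return out
```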
One way of fixing that is to use a carefully crafted profile which covers that combination of film + sensor + light. That implies you shoot a frame of that stock with a color chart at 5600K, develop it in fresh C-41, then scan it, then manually create a set of adjustments on a calibrated screen to match the color chart to reality. Then your snowy cottage will invert just perfectly. This procedure is a little difficult, it works only with films developed in identical conditions, and you will have to craft profiles for most of the films you are using.
Another way is to use external color information from within the same roll. Assuming that your roll contains various shots with enough color diversity, you can use information from the other frames to color balance the snowy scene. This is a new feature I implemented in Grain2Pixel and it's in beta; it's called Roll Analysis. It takes a little longer because it has to parse the entire roll first and grab color information from each frame, but the results are much richer and more stable compared with single-frame inversions. Once the roll analysis is done, it's reused for any frame from the roll. This method works only if the frames selected for analysis have enough color information. If the roll consists of just 36 exposures of the same scene (and that is a possible scenario), then I would recommend manual conversion.
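And the roll-wide idea in the same toy terms (again just an illustration of pooling statistics across frames, not the actual Roll Analysis code):

```python
# Toy illustration of roll-wide balancing: pool per-channel statistics over all frames,
# then apply the same points to every frame, including the low-colour snow scene.
import numpy as np

def roll_points(frames, low_pct=0.5, high_pct=99.5):
    """frames: list of (H, W, 3) inverted positives from the same roll."""
    stacked = np.concatenate([f.reshape(-1, 3) for f in frames], axis=0)
    lo = np.percentile(stacked, low_pct, axis=0)     # roll-wide black points
    hi = np.percentile(stacked, high_pct, axis=0)    # roll-wide white points
    return lo, hi

def apply_points(pos, lo, hi):
    return np.clip((pos - lo) / (hi - lo + 1e-8), 0, 1)
```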
I hope that helps,
Cheers!
The subscription model makes a lot of sense for me for the reasons you have cited in this thread. Recently I gifted one of my teenage sons a subscription to the full suite of Adobe products and he is absolutely loving them and learning so much. He is well and truly dug in.
The gamma-per-channel part doesn't sound right. Somehow RA4 paper manages to do just fine without adjusting its contrast for each layer for every shot.
What he said. Well, he thinks $9.99/month is a scam vs $930 for a fixed license of both. $930 was approximately the previous asking price of both PS and LR, which was good for 2-3 years before an update. $9.99 per month is 7.75 years of usage before you've reached $930, again my conservative estimate of the boxed cost of both PS and LR. Adobe ain't no saint, but they did vastly expand the base of people who could legally use their products, vastly increase their revenue (this is a goal of for-profit companies), and eliminate piracy, which was widespread at the time, overnight. While it's true that lots of things have shifted to subscription these days, the Adobe move was good for consumers, good for Adobe, and actually honest about what a software license really is. What a scam! /sarcasm
But NLP gives you a choice between different rendering modes. Which one are you using here? You can select "Lab Soft" (similar to the ColorPerfect example here) or even Flat / Linear, which gives you the most neutral look that you can fine-tune later.
yes, but two points:
1) what are the assumptions behind the renderings offered, based on what logic? Because the inversion should be computed relative to the colorimetric intrinsics of the picture. Try to be hi-fi. Interpretations are more for B&W negatives.
2) very often when the default LAB Standard differs a lot from reality, the variants are not much better. Not always; many times NLP renders an OK positive, but many other times I found it wonky relative to the default computed by ColorPerfect.