He certainly would agree with me that the final color = emulsion color * paper color * CMY filters. This can be compressed down to: colors are set during printing. I am now going to keep my lips tight about scanning, as I remember getting in trouble for that in the past.
Printing was very carefully standardized to be unchanging from one batch of work to the next.
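For what it's worth, the multiplicative idea can be shown as a toy per-channel calculation; all the numbers below are invented, purely to illustrate the "final color = emulsion * paper * CMY filters" relationship, not measured values.

```python
import numpy as np

# Per-channel (R, G, B) attenuation factors; every value here is made up.
emulsion = np.array([0.45, 0.30, 0.20])  # transmittance of the developed negative
filters  = np.array([0.80, 0.65, 0.55])  # enlarger CMY filter pack
paper    = np.array([0.95, 0.90, 0.85])  # paper response per channel

# Each stage multiplies the previous one, channel by channel.
final = emulsion * filters * paper
print(final)
```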
I can assure you that any chain photographer who approached lighting, exposure and filtration with a "the lab can always fix it in post" mindset wouldn't have lasted long.
I am not 100% sure I am following.
I didn't know that. Another piece of the puzzle, if you like. I do/will be digitizing color negs with a Sony A7R4/FE 90 macro lens using a 5600K high-CRI (fwiw) light source. I have never made a camera profile, but perhaps I should? More relevantly, if ACR/PS has a "Linear mode", perhaps a DCP is not required?

You may be missing one more thing though: the gamma set by a DCP profile, if using a camera to digitize. Some RAW converters allow you to switch to Linear Mode - that's a real 1.0 capture gamma across all channels. This allows you to see the film image exactly the way the sensor saw it, without camera curves applied to it.
My understanding of Adrian's algorithm is that it's a fitting algorithm, i.e. he's effectively applying a custom LUT to get true grey across multiple exposures, and it's easier to do on a working image which is linear and uses a huge color space. In other words, it's an implementation detail that may not be relevant to you.
Really, what I was looking for was to understand the principles behind the method. So, for example: correct the gamma curves as a matter of principle, and maybe do so before inverting to avoid color casts, as is advocated. Assuming this to be the case, can this be translated into Photoshop, or perhaps done by using a provided Matlab script?
As a colour printer, we actually "fired" one of the professional photographer customers we had been doing work for, because of his negatives (he was seeking to save money by developing them himself - crossovers that were unbelievable!).
I will agree that the "target window" presented by a fully in control lab provides for a reasonable range of adjustment.
But that range is really specific and well defined and quite narrow in an operation that needs to achieve high quality at high speed and with limited costs.
I got the impression that the several exposures were done by way of confirmation that he had got the profile right - no casts showing up irrespective of EV range. I sure could be wrong there, but if it is a confirmation thing, then his algorithm applies to subsequent single images with some degree of confidence. IOW, at least potentially, the principle behind the algorithm could be adapted and adopted.

Well... But in Photoshop you have a single image open, whereas his algorithm needs to consume several shots (separate exposures) of grey cards to perform linearization and compute a film profile.
He built it for completely automatic batch scanning in a mini-lab environment, very different context. You would have to get creative to reproduce this for a Photoshop-based workflow, maybe use a target with multiple shades of grey to build a profile first in Matlab, and then apply it to a different image?
Besides, linearization is not the only way to match gamma between emulsion layers. The method you described above, i.e. simply matching R+B gamma towards G, will work just fine - at least it works well enough for me, hehe. Film density response is not linear (look at the characteristic curves - they're on a logarithmic scale), so the linearization step is just an implementation detail of one particular algorithm, not a general requirement for color inversion.
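As a rough sketch of how a small grey-patch target could be turned into a per-channel profile outside of Matlab - fitting R and B so they track G - something like the following would do. This is only an interpretation of the idea above, not Adrian's actual algorithm, and the patch values are invented.

```python
import numpy as np

# Linear RGB samples of grey patches from a negative scan, one row per patch
# (first row is film base + fog). All values are hypothetical.
patches = np.array([
    [0.82, 0.61, 0.48],
    [0.55, 0.38, 0.27],
    [0.34, 0.22, 0.15],
    [0.19, 0.12, 0.08],
    [0.10, 0.06, 0.04],
])

# Fit, in log space, a straight line mapping R and B onto G:
# log(G) ~ a*log(X) + b  ->  X_matched = exp(b) * X**a
logp = np.log(patches)
coeffs = {}
for ch in (0, 2):                       # 0 = R, 2 = B
    a, b = np.polyfit(logp[:, ch], logp[:, 1], 1)
    coeffs[ch] = (a, b)

def match_to_green(img_linear):
    """Apply the fitted power curves so R and B share G's effective gamma."""
    out = img_linear.copy()
    for ch, (a, b) in coeffs.items():
        out[..., ch] = np.exp(b) * np.clip(img_linear[..., ch], 1e-6, None) ** a
    return out
```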
Thanks, I hear ya.

Also, consider color casts coming from an imperfect development process (temp, time), expired film, the digitization light source, the camera's own color science, etc. In my experience those are much more random and annoying to deal with than simply matching emulsion layer gamma.
much better results for various reasons, which can simply be summed up as "garbage in --> garbage out".
For color negatives, I always start in Lightroom, but that is necessary because I use the Negative Lab Pro plugin (NLP). If there is any significant amount of dust, I do use Photoshop's vastly superior clone/heal/dust tools, but otherwise, I often use Lightroom+NLP, exclusively for my color negatives.
Before getting the NLP plugin I did try some manual color conversions in Photoshop, with variable results - some conversions I was more-or-less happy with, others not. But I found it to be a frustrating process. In the long run, I decided the cost of buying NLP - and the time I spent learning how to use NLP - has probably paid off, for me. While NLP is rarely an instant process, I now spend far less time on each image, and I am happy with the results. Examples <here>
Of course for b&w and color slide film, the process is much more straightforward and I am happy to do the whole process in Photoshop, no NLP needed.
I still remain curious about "multipliers". Can anyone actually explain what they are in the context of "apply gain/multipliers to each channel until the film base plus fog is the same exposure, which will render it as light grey to white"?
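One way to read "multipliers" is simply as per-channel white-balance gains computed from the clear film rebate (base + fog). A minimal sketch, assuming linear data and an invented base reading:

```python
import numpy as np

# Linear RGB sampled from the clear rebate (base + fog); numbers are made up.
base = np.array([0.82, 0.61, 0.48])     # the orange mask transmits red the most

# Gain ("multiplier") per channel so the rebate reads the same in R, G and B,
# pinned here to the strongest channel.
gains = base.max() / base               # roughly [1.00, 1.34, 1.71]

def apply_multipliers(img_linear):
    """After this, the film base renders as neutral light grey to white."""
    return np.clip(img_linear * gains, 0.0, 1.0)
```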
ACR doesn't. All of the bundled camera profiles that are installed with it add a tone reproduction curve to the linear RAW data. You can get closer to what a linear curve should look like by installing Adobe DNG Profile Editor, opening a DNG from the camera you use (you can save out a DNG from ACR) and changing the tone curve from Camera RAW Default to Linear.
The straight line here is Linear, the red line is ACR Default:
[Attachment 305550: tone curve plot, Linear vs ACR Default]
However, I don't think it's truly linear. Most digital cameras underexpose by a few stops to give more highlight headroom. What I mean is this: a digital sensor should max out at 100% reflectance, which is 2 and a bit stops above middle grey, but it doesn't. Adobe's "Linear" curve is not really a straight line; there is a highlight rolloff that allows an extra stop or two of information before clipping. What this means for negative inversion, even when using the Adobe "Linear" profile within ACR, is that depending on how the RAW file was exposed (including altering exposure in ACR), what will become the shadow areas of the image after inversion lose contrast.
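If ACR's bundled curves are the obstacle, one workaround - not mentioned in this thread, so treat it as an assumption - is to decode the RAW file outside ACR with rawpy (a LibRaw wrapper), which can produce a genuinely straight-line rendering:

```python
import numpy as np
import rawpy

# Decode a camera scan of a negative with no tone curve: gamma=(1, 1) is a true
# straight line, no_auto_bright avoids a hidden exposure shift, and
# use_camera_wb=False leaves the channels unscaled. The filename is hypothetical.
with rawpy.imread("negative_scan.ARW") as raw:
    linear = raw.postprocess(
        gamma=(1, 1),
        no_auto_bright=True,
        use_camera_wb=False,
        output_bps=16,
    )

# 16-bit integers -> floats in 0..1 for the inversion maths that follows.
linear = linear.astype(np.float32) / 65535.0
```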
I think that if you can get your RAW file behaving in a truly linear fashion then a simple WB operation will achieve this. I have to say that after some brief tinkering with this method I nevertheless ran into the problem, already discussed here, of a grey chart bracketed sequence suffering from colour casts at different exposure levels. I compared the results to a Noritsu scan and the colour casts are different (less pronounced and a different colour on the Noritsu).
However, I think Adrian's method of completely neutralising a colour cast across the range is not for me. I might be wrong but I would expect a negative to produce an image with some degree of crossover.
So basically, the following method is very attractive for workflow reasons and gives acceptable results for some users, but I haven't found it to deal adequately with the problem of crossover:
- Get RAW file into linear space (ACR/LR can't do this adequately)
- WB to remove orange mask and colour balance
Is that saying that totally eliminating color casts prevents a film from having its 'look'?
I have both NLP and NegMaster. NLP runs in LR and is much faster and easier to run a batch file on, while I slightly prefer the results in Negmaster which is slower and runs in PS.
I'll make 2 files, and develop the first with NLP. I think of that as my contact sheet, allowing me to see which ones I want to spend the time on in NegMaster.
I downloaded a trial version of this. It seems to work ok. It is very basic.
It's a very interesting question and I would say that totally neutralising colour casts at different exposure levels does not prevent a film from having its look as the hue, saturation and lightness of individual colours will vary from film to film, and with exposure for a given film.
Imagine a ColorChecker. If we somehow neutralise the grey patches on a scan, perhaps using an RGB curve, the colour patches will retain the character that's inherent to that film as realised by the scanning and post-processing method. From my understanding, Adrian took this one step further, calibrating the colour patches to their colorimetric values in reality, giving (in his words) an ACR digital rendering. Now that is removing a huge component of the film look in my view. I know that he then averaged the calibrations for many film stocks to create a "digital paper" for use on all films and that some level of variance remains. But that is its own concoction and perhaps it's still adding a "digital" look.
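For what it's worth, the kind of calibration described - pulling the film's ColorChecker patches towards their reference values - could be sketched as a least-squares 3x3 matrix. This is only a guess at the general approach, not Adrian's actual method, and the patch data below are placeholders.

```python
import numpy as np

# Linear RGB of the 24 ColorChecker patches as rendered by the inverted scan,
# and their published reference values. Both arrays are placeholders here;
# real data would be sampled from the scan and taken from a reference table.
film_rgb = np.random.rand(24, 3)
ref_rgb  = np.random.rand(24, 3)

# 3x3 matrix minimising || film_rgb @ M - ref_rgb || in the least-squares sense.
M, *_ = np.linalg.lstsq(film_rgb, ref_rgb, rcond=None)

def calibrate(img_linear):
    """Apply the correction; this is exactly what strips the film's own palette."""
    h, w, _ = img_linear.shape
    return (img_linear.reshape(-1, 3) @ M).reshape(h, w, 3)
```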
It's very hard to know what the objective reality of a negative should be, in terms of colour. I read with interest another thread here where someone was asking why the RGB curves on a data sheet don't line up (implying that a colour cast is inherent in the medium), whereas others appeared to be stating that colour casts are removed during the printing process. Well, last week I made a colour-balanced (for middle grey) RA4 print of a greyscale step wedge shot at 5000K on Fuji Crystal and Kodak Endura paper, and I can see that the prints have colour casts in the shadows and highlights, and a different one for each paper.
Not intending to throw shade on @LolaColor. Using this post as an example.
Reading this thread, it would be easy to see this as navel-gazing or pixel-peeping, unless you are a professional commercial or portrait photographer.
I agree, and one side of that coin is that you don't want any distortions, and the other is that you don't want any "improvements".

If I had chosen a film look, I would want it reproduced from the recording (the film).
I am about to embark on a thousand-plus old color negative digitizations. I guess what has motivated this thread is exploring what might be offered as best practice. I would hate to get past 1000 and discover there is a much better way to do it! I want to avoid unwanted distortions. I concede that much of this will end up being 'academic' in nature, but it is still fun to learn.
If you are using a phone, and your phone is an older model without RAW support, then you are limited to 8-bit JPEGs (256 levels per channel). That alone makes this a non-starter for me, but I'm sure they sell a lot of copies.
So, that would mean both right exposure and white balance, correct?

For each reproduction method my aim is to make patch F16 on the chart middle grey.
[Attachment 305614]
The conversions were done in PS on images exported from Darktable.
I turned off white balance correction in Darktable, then exported the files as linear ProPhoto; these were gamma adjusted in each channel (Levels) in Photoshop to balance F16 for colour and brightness.
In Darktable I basically turned everything off; the most important thing to turn off was white balance (actually deactivating the white balance module). The remaining settings looked like this:
[Attachment 305615: Darktable module settings]
In Photoshop my workflow was this:
- Levels adjustment layer to balance the red and blue channels to the value of the green channel for patch F16
- Invert adjustment layer (everything will seem too bright due to the linear gamma)
- Levels adjustment layer to bring down the brightness of F16 to middle grey
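The same three steps can be written out in numpy terms as a sketch only - the image, the patch coordinates and the 0.18 "middle grey" target are assumptions, and Photoshop's own maths may differ in detail:

```python
import numpy as np

def invert_negative(img, y, x, target=0.18):
    """img: linear RGB float image of the negative in 0..1; (y, x) marks the
    reference grey patch (F16 on the chart). target is linear middle grey."""
    patch = img[y, x]

    # Step 1: Levels-style gamma on R and B so the patch matches its G value.
    out = img.copy()
    for ch in (0, 2):
        g = np.log(patch[1]) / np.log(patch[ch])
        out[..., ch] = np.clip(out[..., ch], 1e-6, 1.0) ** g

    # Step 2: invert, as a Photoshop Invert adjustment layer would (1 - x).
    out = 1.0 - out

    # Step 3: one more gamma so the now-neutral patch lands on middle grey.
    g = np.log(target) / np.log(1.0 - patch[1])
    return np.clip(out, 1e-6, 1.0) ** g
```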
It is a good thing that you think it is "fun to learn" - because, in the process of scanning 1000 negatives, it is unavoidable that your concept of what makes an excellent scan will evolve. Expect hardware and software to evolve as well. By the time you get to 1000, there WILL be a better way to do it! I have rescanned some of my slides a third or fourth time to take advantage of evolving technology and my improved ability to use it.
I would recommend that you not scan all 1000 negatives in a short period of time. If you can "test" your scanning method by using your first 100 scans for end-use projects, then you will get feedback that will help improve your scanning process going forward. By 'end-use projects' I mean: do what you would normally do with digital images - make some prints, make a photo book, upload to online galleries, make digital slideshows - whatever interests you. Only by working with your files will you discover what needs to be improved.