Color Negative Manual Conversion Advice


MattKing

Moderator
Joined
Apr 24, 2005
Messages
52,025
Location
Delta, BC Canada
Format
Medium Format
He certainly would agree with me on the fact that the final color = emulsion color * paper color * CMY filters. This can certainly be compressed down to "colors are set during printing." I am now going to keep my lips tight about scanning, as I remember getting in trouble for that in the past :smile:

Well you are not going to get in trouble talking about scanning in this part of Photrio.
And in controlled shooting environments in those days, the colour was controlled at the shooting end with light and filtration - printing was very carefully standardized to be unchanging from one batch of work to the next.
There are still some who work that way, even if it is now a rarity.
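For what it's worth, the product in that quoted formula is literal at the enlarger: the exposure that reaches the paper is the lamp output times the transmittances of the filter pack and the negative, which is the same as saying their densities add; the paper's own characteristic curves then turn that exposure into the print colour. A sketch of just that relationship (notation mine, not from the quoted post):

```latex
% Exposure reaching the colour paper through the CMY pack and the negative:
% transmittances multiply, densities add.
E(\lambda) = L(\lambda)\, T_{\mathrm{filters}}(\lambda)\, T_{\mathrm{neg}}(\lambda)
           = L(\lambda)\, 10^{-\left[ D_{\mathrm{filters}}(\lambda) + D_{\mathrm{neg}}(\lambda) \right]}
```

Which is why a filter-pack change shifts the balance of the whole frame at once rather than acting on one tonal region.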
 

McDiesel

Member
Joined
Mar 24, 2022
Messages
322
Location
USA
Format
Analog
printing was very carefully standardized to be unchanging from one batch of work to the next.

I found his posts where he discussed it; the standardization was done for specific films (the Portras). Which again does not contradict what I said: you have to tune several variables to get the print you want, and the tuning happens after shooting. [1]

In another old thread about scanning, Adrian demonstrated his approach of making all films look like Portra 400, which again aligns nicely with treating color negative film only as a starting foundation to create any look you want. Basically, a flexible "analog RAW", open for tweaking.

[1] Of course you're right about light & filters, but those are always available for all films and all mediums.
 

MattKing

Moderator
Joined
Apr 24, 2005
Messages
52,025
Location
Delta, BC Canada
Format
Medium Format
I can assure you that any chain photographer who approached lighting, exposure and filtration with a "the lab can always fix it in post" mindset wouldn't have lasted long.
As a colour printer, we actually "fired" one of the professional photographer customers we had been doing work for, because of his negatives (he was seeking to save money by developing them himself - crossovers that were unbelievable!).
I will agree that the "target window" presented by a fully in control lab provides for a reasonable range of adjustment.
But that range is really specific and well defined and quite narrow in an operation that needs to achieve high quality at high speed and with limited costs.
 

McDiesel

Member
Joined
Mar 24, 2022
Messages
322
Location
USA
Format
Analog
I can assure you that any chain photographer who approached lighting, exposure and filtration with a "the lab can always fix it in post" mindset wouldn't have lasted long.

Oh I see. Well of course, that's not the right mindset. I now see where you're coming from.
 
OP

Lewipix

Member
Joined
May 9, 2022
Messages
30
Location
Australia
Format
35mm
I am not 100% sure I am following,

I feel 100% that way but I suspect my 100% is greater than your 100% lol :laugh:

You may be missing one more thing though: the gamma set by a DCP profile, if using a camera to digitize. Some RAW converters allow you to switch to Linear Mode - that's real 1.0 capture gamma across all channels. This allows you to see the film image exactly the same way the sensor saw it, without camera curves applied to it.
I didn't know that. Another piece of the puzzle, if you like. I do/will be digitizing color negs with a Sony A7R4/FE 90 macro lens using a 5600K high-CRI (fwiw) light source. I have never made a camera profile, but perhaps I should? More relevantly, if ACR/PS has a "Linear mode", perhaps a DCP is not required?

All that said, if I am understanding correctly, there still exists the basic problem that the gammas of the color neg channels are not equal; they are supposed to be, but seldom are (I am told). That gamma channel mismatch still needs to be corrected.

My understanding of Adrian's algorithm is that it's a fitting algorithm, i.e. he's effectively applying a custom LUT to get true grey across multiple exposures, and it's easier to do on a working image which is linear and uses a huge color space; in other words it's an implementation detail that may not be relevant to you.

I cannot follow completely as I don't fully understand his methodology, but I think that's probably a great way of putting it -> some of his implementation may not be relevant if it relates to his method and only to his method. Really, what I was looking for was to understand the principles behind the method. So, for example, correct the gamma curves as a matter of principle, and maybe do so before inverting to avoid color casts, as is advocated. Assuming this to be the case, can this be translated into Photoshop, or perhaps done by using a provided Matlab script? However, some of the steps he and others take may possibly only relate to the implementation of a particular method.
 

McDiesel

Member
Joined
Mar 24, 2022
Messages
322
Location
USA
Format
Analog
Really, what I was looking for was to understand the principles behind the method. So, for example, correct the gamma curves as a matter of principle, and maybe do so before inverting to avoid color casts, as is advocated. Assuming this to be the case, can this be translated into Photoshop, or perhaps done by using a provided Matlab script?

Well... But in Photoshop you have a single image open, meanwhile his algorithm needs to consume several shots (separate exposures) of grey cards to perform linearization and compute a film profile. He built it for completely automatic batch scanning in a mini-lab environment, very different context. You would have to get creative to reproduce this for a Photoshop-based workflow, maybe use a target with multiple shades of grey to build a profile first in Matlab, and then apply it to a different image?

Besides, linearization is not the only way to match gamma between emulsion layers. The method you described above, i.e. simply matching R+B gamma towards G, will work just fine, at least it works well enough for me hehe. Film density response is not linear (look at characteristic curves, they're on logarithmic scale), so the linearization step is just an implementation detail of one particular algorithm, not a general requirement for color inversion.

Also, consider color casts coming from imperfect development process (temp, time), expired film, digitization light source, camera's own color science, etc. In my experience those are much more random and annoying to deal with than simply matching emulsion layer gamma.
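As a minimal sketch of that "match R and B gamma towards G" idea (a generic illustration, not anyone's exact implementation): fit a per-channel exponent and gain so that two neutral references, say film base + fog and a mid-grey patch, land exactly on the green channel's values, then apply the resulting power law to the whole channel. It assumes linear scan data; the function names and the two-reference choice are assumptions for the example.

```python
import numpy as np

def fit_channel_to_green(c_hi, c_lo, g_hi, g_lo):
    """Solve a*log(c) + b = log(g) at two neutral samples.
    Returns the exponent a and gain for one channel."""
    a = np.log(g_hi / g_lo) / np.log(c_hi / c_lo)   # gamma ratio vs. green
    gain = g_hi / c_hi ** a                         # makes the first sample land on green
    return a, gain

def match_to_green(rgb, neutral_hi, neutral_lo):
    """rgb: float array (..., 3) of linear scan data.
    neutral_hi / neutral_lo: measured (R, G, B) of two neutral references,
    e.g. film base + fog and a mid-grey patch."""
    out = np.asarray(rgb, dtype=float).copy()
    for ch in (0, 2):                               # red and blue; green is the reference
        a, gain = fit_channel_to_green(neutral_hi[ch], neutral_lo[ch],
                                       neutral_hi[1], neutral_lo[1])
        out[..., ch] = gain * out[..., ch] ** a
    return out
```

After this, the red and blue channels agree with green at both references, which is the cast-changing-with-exposure problem being discussed above; any residual crossover between those points is the film's own curve shape.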
 

flavio81

Member
Joined
Oct 24, 2014
Messages
5,059
Location
Lima, Peru
Format
Medium Format
I can assure you that any chain photographer who approached lighting, exposure and filtration with a "the lab can always fix it in post" mindset wouldn't have lasted long.
As a colour printer, we actually "fired" one of the professional photographer customers we had been doing work for, because of his negatives (he was seeking to save money by developing them himself - crossovers that were unbelievable!).
I will agree that the "target window" presented by a fully in control lab provides for a reasonable range of adjustment.
But that range is really specific and well defined and quite narrow in an operation that needs to achieve high quality at high speed and with limited costs.

Agree... Back in the "good old days of film" (for me, the early 2000s), when I worked as a concert photographer, often finding tungsten lighting and such, there were some people who said that I shouldn't bother with color correcting filters, that everything could be corrected when printing.

Well, I found out that the difference between doing that versus fitting a real 80 A/B/C color correcting filter was like night and day; using the filter gives much better results for various reasons which can simply be summed up as "garbage in --> garbage out".

Thus I learned to use the blue filters, and also to add a color correcting filter to my flash to match the stage lighting colors.

Start with a good capture on film and, only then, everything else has a chance to be right.
 
OP

Lewipix

Member
Joined
May 9, 2022
Messages
30
Location
Australia
Format
35mm
Well... But in Photoshop you have a single image open, meanwhile his algorithm needs to consume several shots (separate exposures) of grey cards to perform linearization and compute a film profile.
I got the impression that the several exposures were done by way of confirmation that he had got the profile right - no casts showing up irrespective of EV range. I sure could be wrong there, but if it was a confirmation thing, his algorithm then applies to subsequent single images with some degree of confidence. IOW, at least potentially, the principle behind the algorithm could be adapted and adopted.

He built it for completely automatic batch scanning in a mini-lab environment, very different context. You would have to get creative to reproduce this for a Photoshop-based workflow, maybe use a target with multiple shades of grey to build a profile first in Matlab, and then apply it to a different image?

Yes I see that. All too difficult. Without really understanding what was involved, again, my reasoning was to get to the principles that might apply elsewhere....and if someone had already indeed done this, I could simply copy the workflow. In theory, I think there is nothing wrong with this, but as is becoming increasingly apparent, the theory probably does not apply here...put another way, if it disagrees with experiment, it is wrong. That's okay.
Besides, linearization is not the only way to match gamma between emulsion layers. The method you described above, i.e. simply matching R+B gamma towards G, will work just fine, at least it works well enough for me hehe. Film density response is not linear (look at characteristic curves, they're on logarithmic scale), so the linearization step is just an implementation detail of one particular algorithm, not a general requirement for color inversion.

My idea was to access (if possible) a method that might provide a mathematical gamma correction of the RGB channels, potentially by clicking or using some tool. This would obviate the need to 'eyeball' it and potentially be more efficient, faster and more (mathematically) 'accurate'. It would be a starting point, not necessarily replacing subjective evaluation. I do also fully appreciate this raises other interesting points - what is accuracy, what is fidelity, as compared to what reference and to whose reality (or perception of it), or even preference/artistic intent.
Also, consider color casts coming from imperfect development process (temp, time), expired film, digitization light source, camera's own color science, etc. In my experience those are much more random and annoying to deal with than simply matching emulsion layer gamma.
Thanks, I hear ya.
 
OP

Lewipix

Member
Joined
May 9, 2022
Messages
30
Location
Australia
Format
35mm
much better results for various reasons which can simply be summed up as "garbage in --> garbage out".

I think GIGO is a universal law applying to most situations. In audio/music (and I suspect photography) something like high resolution is a good thing to the extent it makes a perceptible difference, but if the source is garbage you end up with highly resolved garbage. Not really what you want!

"To the extent that it is makes a perceptible difference" opens a whole different can of worms
 

PhilBurton

Subscriber
Joined
Oct 20, 2018
Messages
467
Location
Western USA
Format
35mm
For color negatives I always start in Lightroom; that is necessary because I use the Negative Lab Pro plugin (NLP). If there is any significant amount of dust, I do use Photoshop's vastly superior clone/heal/dust tools, but otherwise I often use Lightroom+NLP exclusively for my color negatives.

Before getting the NLP plugin I did try some manual color conversions in Photoshop, with variable results - some conversions I was more-or-less happy with, others not. But I found it to be a frustrating process. In the long run, I decided the cost of buying NLP - and the time I spent learning how to use NLP - has probably paid off, for me. While NLP is rarely an instant process, I now spend far less time on each image, and I am happy with the results. Examples <here>

Of course for b&w and color slide film, the process is much more straightforward and I am happy to do the whole process in Photoshop, no NLP needed.

Thanks.
 
OP

Lewipix

Member
Joined
May 9, 2022
Messages
30
Location
Australia
Format
35mm
I still remain curious about "multipliers". Can anyone actually explain what they are in the context of "apply gain/multipliers to each channel until the film base plus fog is the same exposure, which will render it as light grey to white"? I have speculated a little about this already, but it would be nice to actually understand it.
 

LolaColor

Member
Joined
Dec 1, 2018
Messages
43
Location
Ireland
Format
35mm
perhaps ACR/PS has a "Linear mode"
ACR doesn't. All of the bundled camera profiles that are installed with it add a tone reproduction curve to the linear RAW data. You can get closer to what a linear curve should look like by installing Adobe DNG Profile Editor, opening a DNG from the camera you use (you can save out a DNG from ACR) and changing the tone curve from Camera Raw Default to Linear.

The straight line here is Linear, the red line is ACR Default:
06.jpg


However, I don't think it's truly linear. Most digital cameras underexpose by a few stops to give more highlight headroom. What I mean is this: a digital sensor should max out at 100% reflectance, which is 2 and a bit stops above middle grey, but it doesn't. Adobe's "Linear" curve is not really a straight line. There is a highlight rolloff that allows an extra stop or two of information before clipping. What this means for negative inversion, even when using the Adobe "Linear" profile within ACR, is that depending on how the RAW file was exposed (including altering exposure in ACR), the areas that will become the shadows of the image after inversion lose contrast.
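To put a rough number on that effect, here is a small sketch (Python/NumPy). The shoulder() curve is a made-up soft rolloff standing in for a not-quite-linear profile, not Adobe's actual curve; the point is only that whatever separation the shoulder removes from the brightest scan values is also removed from the inverted image's shadows, because a division-based inversion preserves separation in stops.

```python
import numpy as np

# Hypothetical soft shoulder standing in for a "linear" profile that still
# rolls off its top stop or so before clipping (illustration only).
def shoulder(x, knee=0.7):
    x = np.asarray(x, dtype=float)
    return np.where(x <= knee, x, knee + (1 - knee) * np.tanh((x - knee) / (1 - knee)))

# Two bright patches in the raw scan. Thin negative areas (scene shadows)
# scan bright and become the positive's shadows once inverted.
patches = np.array([0.80, 0.90])

sep_true = np.log2(patches[1] / patches[0])                      # stops, true straight line
sep_roll = np.log2(shoulder(patches)[1] / shoulder(patches)[0])  # stops, after the shoulder

print(f"separation, truly linear: {sep_true:.3f} stops")
print(f"separation, with rolloff: {sep_roll:.3f} stops  (flatter shadows after inversion)")
```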

I still remain curious about "multipliers". Can anyone actually explain what they are in the context of "apply gain/multipliers to each channel until the film base plus fog is the same exposure, which will render it as light grey to white"

I think that if you can get your RAW file behaving in a truly linear fashion then a simple WB operation will achieve this. I have to say that after some brief tinkering with this method I nevertheless ran into the problem, already discussed here, of a grey chart bracketed sequence suffering from colour casts at different exposure levels. I compared the results to a Noritsu scan and the colour casts are different (less pronounced and a different colour on the Noritsu).
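In code terms, that WB-as-multipliers step is nothing more than measuring the clear film rebate (base + fog) per channel and scaling each channel so the rebate lands on one near-white value. A minimal sketch on linear data; the base readings and the 0.92 target below are invented numbers for illustration:

```python
import numpy as np

def base_multipliers(base_rgb, target=0.92):
    """Per-channel gains that push film base + fog to a common near-white value."""
    return target / np.asarray(base_rgb, dtype=float)

def apply_multipliers(rgb, gains):
    return np.clip(np.asarray(rgb, dtype=float) * gains, 0.0, 1.0)

# An orange-masked base scans brighter in red than in blue, e.g.:
gains = base_multipliers([0.78, 0.52, 0.31])
# gains ~= [1.18, 1.77, 2.97]; applied to the whole frame this is the same
# thing as white balancing on the rebate before inverting.
```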

However, I think Adrian's method of completely neutralising a colour cast across the range is not for me. I might be wrong but I would expect a negative to produce an image with some degree of crossover. So basically, the following method is very attractive for workflow reasons and gives acceptable results for some users, but I haven't found it to deal adequately with the problem of crossover:

- Get RAW file into linear space (ACR/LR can't do this adequately)
- WB to remove orange mask and colour balance
 
OP

Lewipix

Member
Joined
May 9, 2022
Messages
30
Location
Australia
Format
35mm
ACR doesn't. All of the bundled camera profiles that are installed with it add a tone reproduction curve to the linear RAW data. You can get closer to what a linear curve should look like by installing Adobe DNG Profile Editor, opening a DNG from the camera you use (you can save out a DNG from ACR) and changing the tone curve from Camera Raw Default to Linear.

The straight line here is Linear, the red line is ACR Default:
View attachment 305550

Okay thanks, good to know
However, I don't think it's truly linear. Most digital cameras underexpose by a few stops to give more highlight headroom. What I mean is this: a digital sensor should max out at 100% reflectance, which is 2 and a bit stops above middle grey, but it doesn't. Adobe's "Linear" curve is not really a straight line. There is a highlight rolloff that allows an extra stop or two of information before clipping. What this means for negative inversion, even when using the Adobe "Linear" profile within ACR, is that depending on how the RAW file was exposed (including altering exposure in ACR), the areas that will become the shadows of the image after inversion lose contrast.

It sounds, then, as if this consequential impact on the shadow areas is 'baked into' the equation.

The automated packages, or at least the dedicated software doing the conversion, might presumably compensate for this in some way? If so, it would add to any argument in favor of using a 'neutral' conversion from a package like Negmaster as your starting point for further manual adjustments, rather than starting from scratch. Negmaster seems to have something of a reputation for neutrality, as compared to others offering a more finished look that might 'pop' right out of the can.

I think that if you can get your RAW file behaving in a truly linear fashion then a simple WB operation will achieve this. I have to say that after some brief tinkering with this method I nevertheless ran into the problem, already discussed here, of a grey chart bracketed sequence suffering from colour casts at different exposure levels. I compared the results to a Noritsu scan and the colour casts are different (less pronounced and a different colour on the Noritsu).

However, I think Adrian's method of completely neutralising a colour cast across the range is not for me. I might be wrong but I would expect a negative to produce an image with some degree of crossover.

I am probably not interpreting this correctly, but is that saying that totally eliminating color casts prevents a film from having its 'look'? Some may (or may not) want that look. An analogy might be audio/tube lovers wanting the warm 'distortion' that certain harmonics bring to the sound quality.

So basically, the following method is very attractive for workflow reasons and gives acceptable results for some users, but I haven't found it to deal adequately with the problem of crossover:

- Get RAW file into linear space (ACR/LR can't do this adequately)
- WB to remove orange mask and colour balance
 
Last edited:

bags27

Member
Joined
Jul 5, 2020
Messages
555
Location
USA
Format
Medium Format
I have both NLP and NegMaster. NLP runs in LR and is much faster and easier to run a batch file on, while I slightly prefer the results in Negmaster which is slower and runs in PS.

I'll make 2 files, and develop the first with NLP. I think of that as my contact sheet, allowing me to see which ones I want to spend the time on in NegMaster.
 

LolaColor

Member
Joined
Dec 1, 2018
Messages
43
Location
Ireland
Format
35mm
is that saying that totally eliminating color casts prevents a film from having its 'look'
It's a very interesting question and I would say that totally neutralising colour casts at different exposure levels does not prevent a film from having its look as the hue, saturation and lightness of individual colours will vary from film to film, and with exposure for a given film.

Imagine a ColorChecker. If we somehow neutralise the grey patches on a scan, perhaps using an RGB curve, the colour patches will retain the character that's inherent to that film as realised by the scanning and post-processing method. From my understanding, Adrian took this one step further, calibrating the colour patches to their colorimetric values in reality, giving (in his words) an ACR digital rendering. Now that is removing a huge component of the film look, in my view. I know that he then averaged the calibrations for many film stocks to create a "digital paper" for use on all films, and that some level of variance remains. But that is its own concoction and perhaps it's still adding a "digital" look.
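For readers wondering what the "calibrating the colour patches to their colorimetric values" step looks like in practice, the generic version (not Adrian's actual code) is usually just a least-squares fit of a 3x3 matrix from the measured patch values to the chart's published reference values, applied after the greys have been neutralised:

```python
import numpy as np

def fit_colour_matrix(measured, reference):
    """Least-squares 3x3 matrix taking measured patch RGBs to reference RGBs.
    measured, reference: (N, 3) arrays of linear RGB for the same N patches."""
    M, *_ = np.linalg.lstsq(measured, reference, rcond=None)
    return M

# Usage sketch: sample the 24 ColorChecker patches from the neutralised scan
# into `measured`, put the chart's published linear-RGB values in `reference`,
# then correct the whole frame with: corrected = pixels @ fit_colour_matrix(measured, reference)
```

Whether you want that is exactly the question being discussed: the matrix pulls the scan towards colorimetric "reality" and away from the film's own palette.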

It's very hard to know what the objective reality of a negative should be, in terms of colour. I read with interest another thread here where someone was asking why the RGB curves on a data sheet don't line up (implying that a colour cast is inherent in the medium) whereas others appeared to be stating that colour casts are removed during the printing process. Well, last week I made a colour balanced (for middle grey) RA4 print of a greyscale step wedge shot in 5000K on Fuji Crystal and Kodak Endura paper and I can see that the prints have colour casts in the shadows and highlights, and a different one for each paper :smile:
 

PhilBurton

Subscriber
Joined
Oct 20, 2018
Messages
467
Location
Western USA
Format
35mm
I have both NLP and NegMaster. NLP runs in LR and is much faster and easier to run a batch file on, while I slightly prefer the results in Negmaster which is slower and runs in PS.

I'll make 2 files, and develop the first with NLP. I think of that as my contact sheet, allowing me to see which ones I want to spend the time on in NegMaster.

What is it about NegMaster results that you prefer compared with NLP?

I guess I hadn't paid enough attention to the NegMaster website. I didn't realize that it runs only in Photoshop.
 

PhilBurton

Subscriber
Joined
Oct 20, 2018
Messages
467
Location
Western USA
Format
35mm
It's a very interesting question and I would say that totally neutralising colour casts at different exposure levels does not prevent a film from having its look as the hue, saturation and lightness of individual colours will vary from film to film, and with exposure for a given film.

Imagine a ColorChecker. If we somehow neutralise the grey patches on a scan, perhaps using an RGB curve, the colour patches will retain the character thats inherent to that film as realised by the scanning and post-processing method. From my understanding, Adrian took this one step further, calibrating the colour patches to their colorimetric values in reality, giving (in his words) an ACR digital rendering. Now that is removing a huge component of the film look in my view. I know that he then averaged the calibrations for many film stocks to create a "digital paper" for use on all films and that some level of variance remains. But that is its own concoction and perhaps it's still adding a "digital" look.

It's very hard to know what the objective reality of a negative should be, in terms of colour. I read with interest another thread here where someone was asking why the RGB curves on a data sheet don't line up (implying that a colour cast is inherent in the medium) whereas others appeared to be stating that colour casts are removed during the printing process. Well, last week I made a colour balanced (for middle grey) RA4 print of a greyscale step wedge shot in 5000K on Fuji Crystal and Kodak Endura paper and I can see that the prints have colour casts in the shadows and highlights, and a different one for each paper :smile:

Not intending to throw shade on @LolaColor. Using this post as an example.

Reading this thread, it's easy to see someone navel-gazing or pixel-peeping, unless you are a professional commercial or portrait photographer.
 
OP

Lewipix

Member
Joined
May 9, 2022
Messages
30
Location
Australia
Format
35mm
It's a very interesting question and I would say that totally neutralising colour casts at different exposure levels does not prevent a film from having its look as the hue, saturation and lightness of individual colours will vary from film to film, and with exposure for a given film.

Imagine a ColorChecker. If we somehow neutralise the grey patches on a scan, perhaps using an RGB curve, the colour patches will retain the character thats inherent to that film as realised by the scanning and post-processing method. From my understanding, Adrian took this one step further, calibrating the colour patches to their colorimetric values in reality, giving (in his words) an ACR digital rendering. Now that is removing a huge component of the film look in my view. I know that he then averaged the calibrations for many film stocks to create a "digital paper" for use on all films and that some level of variance remains. But that is its own concoction and perhaps it's still adding a "digital" look.

It's very hard to know what the objective reality of a negative should be, in terms of colour. I read with interest another thread here where someone was asking why the RGB curves on a data sheet don't line up (implying that a colour cast is inherent in the medium) whereas others appeared to be stating that colour casts are removed during the printing process. Well, last week I made a colour balanced (for middle grey) RA4 print of a greyscale step wedge shot in 5000K on Fuji Crystal and Kodak Endura paper and I can see that the prints have colour casts in the shadows and highlights, and a different one for each paper :smile:

Not intending to throw shade on @LolaColor. Using this post as an example.

Reading this thread, it's easy to see someone navel-gazing or pixel-peeping, unless you are a professional commercial or portrait photographer.

@LolaColor Okay and thanks. I agree it is interesting how one may want the rendering of a reproduction. As touched on previously, in audio circles people talk a lot about "transparency", like a clean window onto something. Using the word "something" is intentionally vague. Some describe that the audio equipment 'gets out of the way' of the music; there is no stamp or signature or characteristic 'sound'...including no "HiFi" sound (almost an insult these days). Some might say life-like, as in the instruments/artists are "in the room with you", so "palpable"...and infamously, "more there, there". Others will say even better, "like you are in the room/venue with the musician". So what is fidelity anyway? A 'reproduction' system that reproduces the recording, or some reference to reality?

Arguably, a HiFi system can only reproduce what it is fed from the recording. That is its reality. Anything else is distortion, even if the recording is not true to life. It's an interesting philosophical point. Perhaps just my long-winded way of saying my preference in audio is that the recording faithfully captures reality and the reproduction system is true to the recording.

While I like my audio (music) to sound 'real' I am less interested in literal representations of images or paintings. In photography I am the 'artist', a bit like the musician who can interpret the music any way they want. To that end, if I had chosen a film look, I would want it reproduced from the recording (the film).

@PhilBurton I suppose you are right in that some might start pixel peeping, looking for distortions like color casts in shadows or whatever. Then again, some say mp3 audio files sound just the same as 24-bit/96 kHz audio files. Well, not to my ears anyway. Would I pick up a slight color cast in the shadows of an image? Almost certainly not.

I am about to embark on digitizing a thousand-plus old color negatives. I guess what has motivated this thread is exploring what might be offered as best practice. I would hate to get to over 1000 and discover there is a much better way to do it! I want to avoid unwanted distortions. I concede that much of this will end up being 'academic' in nature, but it is still fun to learn.
 
Last edited:

LolaColor

Member
Joined
Dec 1, 2018
Messages
43
Location
Ireland
Format
35mm
if I had chosen a film look, I would want it reproduced from the recording (the film)
I agree and one side of that coin is that you don't want any distortions and the other is that you don't want any "improvements".

Now, I've only become interested in DSLR scanning very recently, as I was hanging out at a lab doing some darkroom prints and asked them to scan a frame for me. But it's a frame that's going to give me a lot of information, as it's of a colour chart and I can cross-reference it against other things like a Noritsu scan of the same frame and an RA4 print. For each reproduction method my aim is to make patch F16 on the chart middle grey. Any method I've tried so far for the DSLR scan has given me a colour cast that seems funky and inaccurate. It's some kind of variation of this, where the highlights are too blue and the shadows too warm:


07.jpg

Also look at how in column 15 the yellows go wonky. This evening I've been tinkering and I may have found a method that gives results that look right to me. In this conversion there is still a colour cast but it's pretty close to what I've observed from the Noritsu scan. And the overall colour reproduction looks right to me:
08.jpg

The top image is the kind of image I often get when using ACR, but in fact both of these conversions were done in PS on images exported from Darktable. In the top image I white balanced on F16 and in the bottom image I turned off white balance correction in Darktable. Both images were then exported as linear ProPhoto files and were gamma adjusted in each channel (Levels) in PhotoShop to balance F16 for colour and brightness.

In Darktable I basically turned everything off; the most important thing to turn off was White Balance (actually deactivating the white balance module). The remaining settings looked like this:

09.jpg


In Photoshop my workflow was this:

Levels adjustment layer to balance the red and blue channels to the value of the green channel for patch F16
Invert adjustment layer (everything will seem too bright due to the linear gamma)
Levels adjustment layer to bring down the brightness of F16 to middle grey

If I do the same thing with the file that was manually white balanced on patch F16 in the RAW converter I get the wonky colours and colour cast.
If I do the same thing with the file that was left on default "as shot" white balance in the RAW converter I get what looks like the correct colours and a different, stronger colour cast.
If I do the same thing with a file that was exported to regular ProPhoto (1.8 gamma) but otherwise settings as above then the colours look OK but the overall contrast is too washed out. I would then have to make an adjustment to increase the contrast which would make the colours too saturated.
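In rough numeric terms, the three moves above amount to something like the following on a linear file. The single-point exponent trick and the 0.18 middle-grey target are illustrative assumptions rather than exact Levels settings:

```python
import numpy as np

def convert_on_grey_patch(rgb, f16_rgb, middle_grey=0.18):
    """Rough equivalent of Levels / Invert / Levels on a linear scan.
    rgb: (..., 3) linear image in 0-1; f16_rgb: measured (R, G, B) of the grey patch."""
    img = np.clip(np.asarray(rgb, dtype=float), 1e-6, 1.0)
    r, g, b = f16_rgb

    # 1. per-channel exponent so red and blue equal green at the patch
    for ch, v in ((0, r), (2, b)):
        img[..., ch] = img[..., ch] ** (np.log(g) / np.log(v))

    # 2. invert (the clip keeps the next power step well defined)
    img = np.clip(1.0 - img, 1e-6, 1.0)

    # 3. exponent that drops the (now inverted) patch onto middle grey
    return img ** (np.log(middle_grey) / np.log(1.0 - g))
```

Where the residual cast ends up then depends on what the RAW converter did before this point, which matches the observations above about white balancing (or not) in Darktable.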

So this looks like a workflow that delivers very good results. I guess it's just time consuming and if I were doing hundreds of frames like you plan to do I would totally use a faster solution.
 
Last edited:

runswithsizzers

Subscriber
Joined
Jan 19, 2019
Messages
1,674
Location
SW Missouri, USA
Format
35mm
I am about to embark on digitizing a thousand-plus old color negatives. I guess what has motivated this thread is exploring what might be offered as best practice. I would hate to get to over 1000 and discover there is a much better way to do it! I want to avoid unwanted distortions. I concede that much of this will end up being 'academic' in nature, but it is still fun to learn.

It is a good thing that you think it is "fun to learn" - because, in the process of scanning 1000 negatives, it is unavoidable that your concept of what makes an excellent scan will evolve. Expect hardware and software to evolve as well. By the time you get to 1000, there WILL be a better way to do it! I have rescanned some of my slides a third or fourth time to take advantage of evolving technology and my improved ability to use it.

I would recommend that you not scan all 1000 negatives in a short period of time. If you can "test" your scanning method by using your first 100 scans for end-use projects, then you will get feedback that will help improve your scanning process going forward. By 'end-use projects' I mean, do what you would normally do with digital images - make some prints, make a photo book, upload online galleries, make digital slideshows - whatever interests you. Only by working with your files will you discover what needs to be improved.
 

markjwyatt

Subscriber
Joined
Apr 26, 2018
Messages
2,414
Location
Southern California
Format
Multi Format
If you are using a phone, and your phone is an older model without RAW support, then you are limited to 8-bit JPEGs (256 levels per channel). That alone makes this a non-starter for me, but I'm sure they sell a lot of copies.

I have it on a PC, and it supports 16-bit TIFF. It is by no means a sophisticated color tool. I would likely finish the images in ON1 or GIMP.
 
OP

Lewipix

Member
Joined
May 9, 2022
Messages
30
Location
Australia
Format
35mm
Okay thanks for that!
For each reproduction method my aim is to make patch F16 on the chart middle grey.
So, that would mean both right exposure and white balance, correct?

View attachment 305614
conversions were done in PS on images exported from Darktable.

I guess the variable here is Darktable. Why is it used? I presume the method would work if done fully manually?

I turned off white balance correction in Darktable........then exported as linear ProPhoto files and were gamma adjusted in each channel (Levels) in PhotoShop to balance F16 for colour and brightness.

In Darktable I basically turned everything off; the most important thing to turn off was White Balance (actually deactivating the white balance module). The remaining settings looked like this:

View attachment 305615

In Photoshop my workflow was this:

Levels adjustment layer to balance the red and blue channels to the value of the green channel for patch F16

So, you alter the gamma for the RGB channels in the Levels panel to adjust the brightness and color using the F16 patch? Then another Levels layer to adjust the red and blue to match green, using the F16 patch? If possible, could you explain the steps here and how?

Invert adjustment layer (everything will seem too bright due to the linear gamma)

I understand this at least :D
Levels adjustment layer to bring down the brightness of F16 to middle grey

Is this in addition to the above?

Sorry for not understanding this better :cry:
 
OP

Lewipix

Member
Joined
May 9, 2022
Messages
30
Location
Australia
Format
35mm
It is a good thing that you think it is "fun to learn" - because, in the process of scanning 1000 negatives, it is unavoidable that your concept of what makes an excellent scan will evolve. Expect hardware and software to evolve as well. By the time you get to 1000, there WILL be a better way to do it! I have rescanned some of my slides a third or fourth time to take advantage of evolving technology and my improved ability to use it.

I would recommend that you not scan all 1000 negatives in a short period of time. If you can "test" your scanning method by using your first 100 scans for end-use projects, then you will get feedback that will help improve your scanning process going forward. By 'end-use projects' I mean, do what you would normally do with digital images - make some prints, make a photo book, upload online galleries, make digital slideshows - whatever interests you. Only by working with your files will you discover what needs to be improved.

good advice! thanks
 