Per Bjesse
Hi all,
I recently had an interesting discussion about film development for scanning that provoked a lot of thoughts for me, and I would love to hear more input on the matter. Here goes: Assume you are developing B&W film for optimal scanning. Standard development settings aim to get a normal SBR (subject brightness range) scene to around 2.0 maximum density. That is a 100:1 transmission range, slightly less than 7 stops, so not even 8 bits of data. Now, modern scanners can easily deal with a DMAX of 3.0 or more, which is at least 10 stops. So: what is a good target density for highlights for optimum 16 bit grayscale scanning?
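For concreteness, here is the arithmetic behind those stop counts (a quick sketch: optical density is log10 of the transmission ratio, so a density range converts to stops by multiplying by log2(10)):

```python
import math

def density_to_stops(density_range):
    # A density range D spans a transmission ratio of 10**D : 1,
    # which is log2(10**D) = D * log2(10) stops.
    return density_range * math.log2(10)

print(density_to_stops(2.0))  # ~6.64 stops for a 2.0 density range
print(density_to_stops(3.0))  # ~9.97 stops for a 3.0 density range
```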
One could argue that the best approach is to use the full density range the scanner can read. However, this will mean increased grain, and there is also the possibility of more noise from the scanner in the densest regions. So where should one aim?
Second thought: If you develop for scanning, what is the value of compensating developers like extreme minimal agitation pyro (like Steve Sherman) or two bath developers? All they do is distort the response curve to compress the highlights. Given that the scanner has plenty of density headroom, it seems like this distortion is not needed, and if it is desired it can still be imposed in post (see the sketch below).
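To illustrate the "impose it in post" point, here is a minimal sketch of digitally compressing highlights in a 16 bit grayscale scan. The knee and strength parameters are arbitrary illustrations of a shoulder curve, not a model of any particular developer:

```python
import numpy as np

def compress_highlights(img16, knee=0.7, strength=0.5):
    # img16: 16-bit grayscale scan as a numpy array (values 0..65535).
    # Below the knee the tone curve is identity; above it, the slope
    # eases off smoothly, emulating the shoulder a compensating
    # developer would otherwise bake into the negative.
    x = img16.astype(np.float64) / 65535.0
    above = np.clip(x - knee, 0.0, None)
    y = x - strength * above ** 2 / (2.0 * (1.0 - knee))
    return np.clip(y * 65535.0, 0.0, 65535.0).astype(np.uint16)

# Example: compressed = compress_highlights(scan, knee=0.7, strength=0.5)
```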
Third thought: If you accept that compensating effects from developers become relatively unimportant when developing for scanning, it seems like the only developer effects you really care about are acutance and fine grain vs coarse grain. And if you scan large format film for reasonable size viewing, then really all that matters in terms of developer effects is acutance. Is there something I am missing in terms of developer effects?
I would love to get some input on this.
Regards,
-Per
