Alan, it would be helpful if you could list which films require 16 bits. For example, I use Tmax 100, Tmax 400, Velvia 50 and Provia 100. These are very fine-grained, relatively grain-free films. I normally scan at 16 bits and use Lightroom in post. What are your recommendations?
I can't give a specific recommendation. I did make some theoretical calculations, based on the reported granularity of TMAX-100, and within the reasonable density range it looks like 8 bit scanning will probably work, but it might be a bit "iffy". The best thing to do would be actual testing, which I have not done.
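For anyone curious what such a calculation looks like, here is a rough sketch of the arithmetic. The granularity figure, the assumed mid-tone density, and the linear-in-transmission scanner model are all illustrative assumptions of mine, not the actual numbers behind the calculation above:

```python
import math

# Back-of-the-envelope: is fine-grain film noise comparable to one
# 8-bit ADC step? All numbers below are illustrative assumptions.

rms_granularity = 0.008   # assumed RMS density fluctuation of the film
density = 1.0             # assumed mid-tone density being scanned

transmission = 10 ** (-density)          # T = 10^-D = 0.1
# Propagate density noise into transmission units:
# dT/dD = -ln(10) * T, so sigma_T ~= ln(10) * T * sigma_D
sigma_T = math.log(10) * transmission * rms_granularity

lsb = 1.0 / 255.0         # one 8-bit step, full scale = clear film

print(f"grain noise (transmission units): {sigma_T:.5f}")
print(f"one 8-bit ADC step:               {lsb:.5f}")
print(f"noise / step ratio:               {sigma_T / lsb:.2f}")
# A ratio near or above 1 means the grain itself dithers the ADC;
# well below 1 and 8-bit scanning starts to get "iffy".
```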
There are several approaches one could use for testing. One would be to create a target with a smooth gradient. Scan it in both 8 bit and 16 bit mode, then see whether banding is visible in the results. You could then apply some extreme Photoshop manipulations to those images to see whether banding or other artifacts show up in the 8 bit scan that are not present in the 16 bit scan. Be sure to convert the 8 bit image to 16 bit before doing any manipulations.
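A purely synthetic version of that gradient test can be sketched in a few lines of Python. This is my own illustration, not a substitute for scanning real film; the gradient range and the curve push are arbitrary choices:

```python
import numpy as np

# Synthetic stand-in for the gradient test: quantize a shallow smooth
# ramp at 8 and at 16 bits, push both hard with a strong gamma curve
# (done at 16-bit precision, per the advice above), and count the
# distinct levels that survive. Few levels = visible banding.

ramp = np.linspace(0.2, 0.3, 2000)           # a shallow, smooth gradient

scan8  = np.round(ramp * 255) / 255          # 8-bit "scan"
scan16 = np.round(ramp * 65535) / 65535      # 16-bit "scan"

pushed8  = np.round((scan8  ** 0.3) * 65535) # aggressive curve push
pushed16 = np.round((scan16 ** 0.3) * 65535)

print("levels after push, 8-bit scan: ", np.unique(pushed8).size)
print("levels after push, 16-bit scan:", np.unique(pushed16).size)
```

The 8 bit scan comes through the push with only a couple of dozen distinct levels in that region, which is exactly what banding looks like in a histogram.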
Another would be to shoot a very smooth target that has no gradient. Shoot it at several exposure levels, and be sure to defocus the lens so that no detail from the target shows up in the image. Scan the image in 8 bit mode and look at the histogram of intensity levels in very small segments near the center of the image. If there is more than one level in the histograms of those small segments, it means that there is some noise in the image (from grain and from other sources) and that the noise is comparable to or greater than the ADC step size. In that case 8 bit scanning is probably sufficient. It is important to look at small segments, not large segments or the whole image. I can explain why if you want, but I won't do the explanation in this post.
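Here is one hedged sketch of what that per-patch check could look like in Python, assuming numpy and Pillow; the file name is just a placeholder for your own scan:

```python
import numpy as np
from PIL import Image

# Per-patch histogram check on an 8-bit scan of a smooth, defocused
# target. "smooth_target_8bit.tif" is a placeholder file name.

img = np.asarray(Image.open("smooth_target_8bit.tif").convert("L"))

h, w = img.shape
patch = 16                                  # small patches, as suggested
cy, cx = h // 2, w // 2

for dy in range(-2, 3):
    for dx in range(-2, 3):
        y, x = cy + dy * patch, cx + dx * patch
        p = img[y:y + patch, x:x + patch]
        # More than one level inside a flat patch means the noise is
        # comparable to (or larger than) the ADC step size.
        print(f"patch ({y:4d},{x:4d}): {np.unique(p).size} levels, "
              f"std = {p.std():.2f}")
```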
Generally there will be one channel that gives the sharpest image. The preferred channel might depend on the brand of scanner. As I recall, for my Canon FS4000US scanner it is the green channel.
Let me refine this concept just a bit more. If you scan in 8 bit, then whenever you start doing digital image processing you should first convert the image to 16 bit and keep it at 16 bit all the way through the process, including any intermediate saves and the final save. This is to eliminate issues due to roundoff error.
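As a minimal sketch of that conversion step (numpy and Pillow assumed; file names are placeholders):

```python
import numpy as np
from PIL import Image

# Promote an 8-bit scan to 16 bits before any editing, so that all
# later rounding happens on a much finer grid.

img8 = np.asarray(Image.open("scan_8bit.tif").convert("L"))
img16 = img8.astype(np.uint16) * 257        # maps 0..255 onto 0..65535 exactly

Image.fromarray(img16).save("scan_16bit.tif")
```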
Lightroom doesn't scan. You can only get more detail during scanning.

Sorry, I got confused with statements others have made about scanning in color and then using the green channel for better sharpness. I was trying to understand how to do that with Lightroom.
No, that's OK. I'll just keep scanning at 16 bit and saving as a TIFF, since who knows what film and when banding might show up. Wouldn't the size of the print also have an effect? Another issue is the type of post-processing program: while Photoshop keeps 16 bit, Elements does not. Also, once you store as JPEG, you reduce to 8 bit (I think). I suppose, as long as memory or processing speed isn't an issue, it might be better to scan at 16 bit and save as TIFF.

Yes, there is nothing wrong with your strategy.
More elementary questions. First, I want to make sure I understand this suggestion: would this mean scanning in 24-bit RGB, saving from the scanning software as a TIFF, then converting to 48 bit in LR or PS or what have you, and then removing the two unwanted color channels? I'm pretty certain I'm missing something basic here, so apologies.

Yes, that is one way to do it.
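If you prefer to do the channel extraction outside Lightroom or Photoshop, here is a minimal Python sketch of the same idea (numpy and Pillow assumed; file names are placeholders):

```python
import numpy as np
from PIL import Image

# Keep only the green channel of an RGB scan and save it as a 16-bit
# grayscale TIFF for editing.

rgb = np.asarray(Image.open("color_scan.tif").convert("RGB"))

green = rgb[:, :, 1]                        # channel order is R, G, B
green16 = green.astype(np.uint16) * 257     # promote 8 -> 16 bit

Image.fromarray(green16).save("green_16bit.tif")
```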
> Alan, it would be helpful if you could list which films require 16 bits?
There is nothing wrong with scanning in 16 bits, but if the grain or other sources of noise in the system are greater than or roughly equal to the step size of an 8 bit ADC, then you won't get banding or loss of shadow detail if you use an 8 bit digitizer. It is basically equivalent to "dithering", a technique that is well known in the digital signal processing world as a method to eliminate artifacts due to ADC step size. Banding in an image is an example of an artifact arising from ADC step size.

Alan, we have to scan and edit at 16 bits/channel always, saving in a file format that preserves the 16 bits, which is TIFF. If not, you may suffer banding during editing, and you may lose shadow detail.
The 16 bits/channel matter more when the film records an ample dynamic range, no matter what kind of film it is.
> There is nothing wrong with scanning in 16 bits, but if the grain or other sources of noise in the system are greater than or roughly equal to the step size of an 8 bit ADC, then you won't get banding or loss of shadow detail if you use an 8 bit digitizer. It is basically equivalent to "dithering", a technique that is well known in the digital signal processing world as a method to eliminate artifacts due to ADC step size. Banding in an image is an example of an artifact arising from ADC step size.
Dithering often involves artificially adding noise to the signal. In those applications you make a trade-off. You accept a little bit of extra noise in exchange for eliminating ADC artifacts. However, if the signal is already noisy (e.g. grain in the negative, or electronic noise in the photodiodes in the scanner's sensors) then you don't even have to make the trade-off because you don't have to artificially add any noise.
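A tiny simulation makes that trade-off concrete. This is my own sketch, not anything from the thread alanrockwood mentions: quantize a shallow ramp to 8 bits with and without noise of about one ADC step added before the "ADC", then average small neighborhoods, as the eye or a print effectively does with fine grain:

```python
import numpy as np

# Quantize a shallow ramp (spanning about one ADC step) to 8 bits,
# with and without "grain" added before quantization. Then compare
# locally averaged results to the true ramp.

rng = np.random.default_rng(0)
ramp = np.linspace(0.500, 0.504, 4096)

clean = np.round(ramp * 255) / 255                       # no dither
noisy = np.round((ramp + rng.normal(0, 1 / 255, ramp.size)) * 255) / 255

def local_mean(x, k=256):
    # stand-in for the eye / the print blurring fine grain
    return np.convolve(x, np.ones(k) / k, mode="valid")

print("staircase error, no dither:  ",
      np.abs(local_mean(clean) - local_mean(ramp)).max())
print("staircase error, with grain: ",
      np.abs(local_mean(noisy) - local_mean(ramp)).max())
```

With the noise in place, the local averages land noticeably closer to the true ramp; without it, the staircase survives any amount of averaging.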
I had a whole thread on this where I explained it in detail. I also posted some images obtained from another source that demonstrated the principle.
Let me emphasize that there is nothing mysterious or magic about this. It is entirely based on well established and well-known principles in the digital signal processing world.
Please also note that in my earlier post I also said that if you are doing image manipulation then you should always convert an 8 bit image to a 16 bit image before doing the manipulations.
I said nothing about an 8 bit workflow. I only dealt with 8 bit scanning. To meet the protocol I specified, the image needs to be converted to 16 bit mode before anything else is done, and it must stay in 16 bit mode for all subsequent operations.

In general, 8 bits is unacceptable for professional editing.
Anyway, it depends on the dynamic range in the film, the curve edits you require, and how careful you are in the editing. A moderate curve adjustment in 8 bits may have no visible harmful effects, but with 8 bits problems come easily.
> If you have to pull deep shadows you may have problems. Those deep shadows can have good detail, but at 8 bits you only have 256 levels, and those shadows may be encoded in just 8 gray levels (especially in a high dynamic range negative); as you pull the shadows you lose the intermediate levels. Say you locally pull some shadows from level 8 to level 64: after the expansion you will have only the levels 0, 8, 16, 24... with nothing in between (see the sketch after this list).
> In Lightroom, with 8 bits, if you cascade several curve edits with expansions and compressions you easily get degradation. If you use non-destructive adjustment layers in Photoshop you don't have that problem, but you have to work with layers.
> You may have to cascade several sharpening operations of different radii. Working at 16 bits you get an accurate result; at 8 bits those operations can be very destructive, especially in the deep shadows.
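To make the shadow-pull arithmetic in the first point concrete, here is a two-line check (my own sketch):

```python
import numpy as np

# Pull 8-bit shadow values 0..8 up to the 0..64 range (an 8x
# expansion). Only multiples of 8 survive; the tones in between
# are simply empty.

shadows = np.arange(9, dtype=np.uint8)       # levels 0..8
pulled = shadows.astype(np.int32) * 8        # level 8 -> level 64
print("levels after the pull:", pulled.tolist())
# -> [0, 8, 16, 24, 32, 40, 48, 56, 64]: posterized shadows.
#    Done in 16 bits from the start, the same pull keeps the
#    intermediate tonal values and the gaps never appear.
```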
We may use an 8 bit workflow for a moderate correction on an unimportant image, but for proficient editing we have to operate on an oversampled image at 16 bits. Today's computers allow that: we have many GB of RAM, M.2 disks are 30x faster than magnetic disks, and a 30,000-PassMark CPU costs only $200. In the past it was very slow to work with big images, so many sacrifices had to be made and wisdom was required to find the optimal point; today this has changed.
> I only dealt with 8 bit scanning.
One reason is trivial: it takes half the storage space to store an 8 bit scan compared to a 16 bit scan. Another is that certain hardware/software scanner systems only allow 8 bit scans. Either way, the image needs to be converted to 16 bit mode before anything else is done.
> Now for a related thought. Have any of you ever scanned a negative (any negative) using a high dpi scanner in 8 bit mode, and then converted it to 16 bit mode, keeping it in 16 bit mode from then on, and then been able to produce banding by manipulating the image?
When I convert this photo to 16 bits/channel first, I still see the banding. I can only suppress the banding...
Just prevent banding by scanning and editing at 16 bits, with levels placed properly.
Perhaps there is also a factor which is editor dependent.
> So what are your conclusions?

@alanrockwood, after a night of sleep I'm starting to see what you explained about banding in your previous posts. I did some supplementary simulations with 8 and 16 bit images, conversion from 8 to 16 bit, and different levels of noise, and now I get logical results that are reproducible. So my initial posts #41 and #43 were a bit naive, and I appreciate your patience in explaining it all to us noobs.
> This is very interesting. But this represents quite a bit of work. I don't understand why this is even an issue. Why not always scan 16 bit? Storage nowadays is cheap, so just scan at 16 and save the files as uncompressed TIFF. Am I missing something?

I know that some won't believe what I am saying. If anyone would like to prove me wrong and would like to do a little experiment, here's what you can do.
Find or create a negative of an object with a smooth gradient. Scan the negative twice with a high resolution film scanner, such as a 4000 dpi Nikon or Canon film scanner. Do one scan in 8 bit mode, saving as a TIFF, and the other in 16 bit mode, also saved as a TIFF.
Read the two scans into Photoshop, or whatever your favorite image processing application is. Before you do anything else, convert the 8 bit image to 16 bits. From then on, keep that image in 16 bit format, including any saves, and treat the two images in exactly the same way.
Now do some image manipulation on the two images, the one that was converted to 16 bits and the one that was scanned in 16 bits. Do the manipulation using the histogram correction method, so that you can apply the exact same manipulations to both images.
Are you able to produce banding in one but not the other image? Can you even see a difference between the two manipulated images by visual inspection?
Try it with Tri-X film, then FP-4 plus, then Tmax 100.
My prediction is that you won't be able to produce banding with Tri-X or FP-4 plus, and probably not with Tmax 100.
Something like Velvia might give a different result, but Velvia isn't a negative film.
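Short of scanning real film, one can at least simulate the spirit of the experiment. In this sketch the grain sigmas are illustrative guesses of mine, not measured granularities for these films; with the guessed numbers only the very fine-grained case leaves a residual staircase, which is consistent with the prediction above but is no substitute for a real test:

```python
import numpy as np

# Simulated version of the experiment: "film" is a smooth gradient
# plus grain. Grain sigmas are illustrative guesses, in units of one
# 8-bit ADC step, ordered coarse to fine.

rng = np.random.default_rng(1)
gradient = np.linspace(0.30, 0.40, 8000)

def local_mean(x, k=100):
    # stand-in for the eye / the print averaging out fine grain
    return np.convolve(x, np.ones(k) / k, mode="valid")

for name, grain_in_steps in [("Tri-X", 2.0), ("FP4 Plus", 1.0),
                             ("very fine grain", 0.15)]:
    film = gradient + rng.normal(0, grain_in_steps / 255, gradient.size)
    scan8 = np.round(np.clip(film, 0, 1) * 255) / 255
    # How much the 8-bit ADC distorts the locally averaged image,
    # over and above the grain itself:
    err = np.abs(local_mean(scan8) - local_mean(film)).max() * 255
    print(f"{name:16s}: worst residual staircase = {err:.2f} ADC steps")
```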
Now for a related thought. Have any of you ever scanned a negative (any negative) using a high dpi scanner in 8 bit mode, and then converted it to 16 bit mode, keeping it in 16 bit mode from then on, and then been able to produce banding by manipulating the image?
So what are your conclusions?
This is very interesting. But this represents quite a bit of work. I don't understand why this is even an issue? Why not always scan 16 bit? Storage nowadays is cheap, so just scan at 16 and save the files as uncompressed TIFF. Am I missing something?
Thanks. I guess that means no conclusion.