jlbruyelle
When considering these things it is important to distinguish large-area vs. pixel-scale signal variations. It is the pixel-by-pixel noise that is important for this discussion. For example, it is entirely possible to cover a high dynamic range (as measured over a large image area) with a low-bit ADC, provided that the variation in density over a small region is large. If you scan a featureless area of a negative, such as a featureless sky, you will find that there is quite a bit of pixel-to-pixel variation in the signal. This can be caused by what is in the film itself (such as grain) or by the sensor itself (for example, thermal noise in the electronics), or at extremely low light levels one can even be subjected to shot noise. If the standard deviation of the pixel-to-pixel noise is comparable to the ADC step size then the benefit of going to a smaller ADC step size is negligible.
All this reasoning is pointless unless you quantify the "quite a bit", which can be large or negligible. Please don't make blanket statements. Your last statement, in particular, does not show that the useful quantization limit stops at 8 bits rather than higher.
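To put numbers on that "quantify", here is a minimal Python sketch (my own illustration, with made-up grain levels): it measures how much extra error the 8-bit step adds on top of the pixel-to-pixel noise, for several noise levels, compared with a 16-bit step.

```python
# Flat patch + Gaussian "grain" noise, then 8-bit and 16-bit quantization.
# When the noise sigma is well below the 8-bit step (1/255 of full scale),
# the 8-bit step destroys detail; when the noise is comparable to or larger
# than the step, the penalty of staying at 8 bits is negligible.
import numpy as np

rng = np.random.default_rng(0)
signal = 0.37                      # flat, featureless patch level (full scale = 1.0)
n = 100_000                        # pixels in the patch

def quantize(x, bits):
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def rms(x):
    return float(np.sqrt(np.mean(x ** 2)))

for sigma in (1/2000, 1/1000, 1/256, 1/100):   # noise std as a fraction of full scale
    noisy = signal + rng.normal(0.0, sigma, n)
    extra8 = rms(quantize(noisy, 8) - noisy)    # detail lost to the 8-bit step
    extra16 = rms(quantize(noisy, 16) - noisy)
    print(f"noise = 1/{round(1/sigma):4d} of full scale: "
          f"8-bit error / noise = {extra8 / sigma:4.2f}, "
          f"16-bit error / noise = {extra16 / sigma:6.4f}")
```

With quiet film (noise around 1/1000 of full scale or less) the 8-bit error is larger than the noise itself, so information is lost; with grainy film (around 1/100) it is a small fraction of the noise and 8 bits lose essentially nothing.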
In fact, if the pixels are small enough then, under certain conditions, a one-bit ADC is sufficient to recover all of the information in the image.
This is a very hypothetical case that never happens in real life, unless you are scanning with a microscope (and even then...). What we call "grain" is not composed of individual silver particles but of agglomerates of particles, so it is not black/white but grey-level, and the noise level of this grain can be very low.
All this is pretty well known among the signal processing community. I am not a member of that community.
I am.
A bit of a wild card may occur if the negative is so dense that almost no light comes through. In that case the pixel-to-pixel variation due to grain in the negative itself could be less than the ADC step size, and you could then lose information.
Sorry, but there is no reason whatsoever why the film noise would fall below the quantization step only in the densest areas. You are making assumptions about the granularity that are not necessarily true.
However, consumer-grade scanners use photodiode detectors, and I am pretty sure that there is enough electronic noise in the detectors that the noise level is greater than the ADC step size. (I am prepared to be proven wrong on this by actual experimental results.) If the noise in the detector and associated electronics is comparable to the ADC step size then there is no advantage in going to an ADC with a finer step size.
That last sentence is correct. It is the reason why you only need to know the maximum OD (= Optical Density) of your scanner, since that figure already accounts for the total noise generated by the sensor plus the electronics, whatever the technology.
A bit of theory may help clarify this point. Let's say that the maximum OD of your scanner is 3.0. This means that it is able to reproduce a contrast range of 1000:1, which we can translate as the noise floor being 1/1000 of the maximum value. It is thus easy to understand that if two adjacent quantization levels were 1/500 apart, the step would be larger than the noise, hence a loss of useful information. A step of 1/1000 is what you want, and 10 bits will give you that. This is why the maximum OD and the number of quantization bits are closely related. Note that 16 bits is more than needed, since the finer steps are lost in the noise, but there is no such thing as a 10-bit file, and there is no real drawback to recording more bits than necessary - in fact, the least significant bits are usually padded with zeros since the ADC doesn't produce them.
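To put the arithmetic in one place, here is a short Python sketch of the OD-to-bits relation just described (my own illustration, not taken from any scanner documentation):

```python
# A maximum OD of D means a contrast of 10**D : 1, so the quantization step
# should be no larger than 1/10**D of full scale, i.e. you need
# ceil(log2(10**D)) bits.
import math

def bits_for_max_od(od: float) -> int:
    """Smallest bit depth whose step is no larger than 1/(10**od) of full scale."""
    return math.ceil(od * math.log2(10))

for od in (2.0, 2.4, 3.0, 3.4):
    print(f"max OD {od:.1f} -> contrast {10 ** od:6.0f}:1 -> {bits_for_max_od(od)} bits")
```

For OD 3.0 this gives 10 bits, which in practice means saving 16-bit files since 10-bit files don't exist.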
Please remember that the question here is whether or not 8-bit quantization is sufficient to retain all the information from the scanned film. No assumption has been made about the film's granularity - and we all know that it can be very low, depending on the film. So reasoning about the effects of film granularity in this conversation is questionable. At most, we can say that the overall noise can be higher than the scanner's own noise because of film grain, in which case 8 bits can be enough on very grainy films.
As an interesting point, drum scanners generally use photomultiplier tubes (PMTs) for detection. The noise level in most PMTs is EXTREMELY low. For example, a Hamamatsu H11870-01 has a typical dark count rate of 15 counts/second. This is probably a lot lower than the dark count rate of the PMTs used in most drum scanners, but the point is that the dark count rate is extremely low in those detectors, and in such cases one needs to take the analysis to another level.
Same thing as above: all these considerations boil down to the highest OD that your scanner (or whatever sensor you use) is able to handle, which you can read in the scanner's spec sheet or measure yourself with a grey scale. No need to count photons at the sensor; it doesn't tell the whole story anyway. Incidentally, I very much doubt that an H11870-01 was ever used in a drum scanner; it is intended for different applications - unless you have a reference to provide?
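For the "measure it yourself with a grey scale" route, one possible recipe is sketched below; all the wedge densities and readings are invented for illustration, not measurements of any particular scanner.

```python
# Scan a calibrated step wedge, take the mean linear signal of each patch, and
# call the usable maximum OD the densest patch that still sits clearly above
# the scanner's dark noise.
import numpy as np

patch_od   = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5])
patch_mean = np.array([0.98, 0.31, 0.099, 0.031, 0.0098, 0.0032, 0.0011, 0.0005])  # linear, full scale = 1
dark_rms   = 0.0004     # RMS reading with the light path blocked (scanner noise floor)

usable = patch_od[patch_mean > 3 * dark_rms]   # demand a 3-sigma margin over the noise
print(f"estimated usable max OD of this scanner: about {usable.max():.1f}")
```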
Now please let's come back to the topic at hand, which is whether you get a better image with 16 bits instead of 8. The answer to this question is yes, as long as the maximum density range of both your scanner and your document corresponds to a contrast higher than 256:1, i.e. OD ≈ 2.4.
For example, the Epson V600, an entry-level flatbed scanner, claims 3.4 (2500:1), so it would call for 16 bits. OTOH, Ilford Multigrade RC paper barely exceeds 2.0 (100:1), so 8 bits are probably enough even when scanned on a V600, since these documents will never reach a density that would require more than 8 bits.
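In code form, the rule of thumb from the last two paragraphs looks like this; the 2.4 threshold and the OD figures are the ones quoted above, while the OD 3.0 negative is just an assumed example.

```python
# 16-bit files pay off only if BOTH the scanner and the document exceed the
# density range that an 8-bit step can resolve.
import math

OD_8BIT_LIMIT = math.log10(2 ** 8)      # ~2.41

def worth_16_bits(scanner_max_od: float, document_max_od: float) -> bool:
    return min(scanner_max_od, document_max_od) > OD_8BIT_LIMIT

print(worth_16_bits(3.4, 3.0))   # Epson V600 + dense negative (OD ~3.0): True, use 16 bits
print(worth_16_bits(3.4, 2.0))   # Epson V600 + Multigrade RC print (OD ~2.0): False, 8 bits suffice
```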
As a point of reference, the dynamic range of the human eye is about 1000:1, so the difference between 16 bits and 8 bits is noticeable, at least to a viewer who pays attention. Of course there are always exceptions: if the original image has a max OD lower than 2.4, is composed only of uniform areas, or is so noisy that the RMS noise exceeds the 8-bit quantization step (possible with pushed Tri-X, but not with Kodachrome, for instance). However, this is the general rule, and unless you are tweaking your workflow for a specific case you are always better off following it.
Incidentally, at work we have a 300 k€ machine based on a PMT which turns out to be less sensitive than its more recent counterparts with photodiode detectors. I can assure you that not all photodiodes are noisy, far from it: it's all a matter of grade, not of technology!
ADDENDUM: I forgot to mention it, but what I wrote about film grain is valid only if you consider grain as noise. It is also common practice to consider it as part of the content, i.e. useful signal, in which case you will want to keep it as intact as possible. If so, you will only consider the scanner noise to determine the number of bits that you need - which amounts to always using 16 bits, unless your scanner is really lousy.