I am following up on the issue of scanning in 8 bits vs. 16 bits. As I mentioned earlier, there is nothing wrong with scanning in 16 bits except for two points. 1) It uses more hard disk space, which isn't very important these days with huge hard disks available. 2) Some scanners won't scan in 16 bits when driven by certain software packages. For example, one of the best desktop scanners ever made in terms of scan quality is the Leaf 45, but if you run it from Silverfast software on a PC it will only scan in 8 bit mode. (Correct me if I am wrong; I am doing the Leaf/Silverfast info from memory.) It is therefore a relevant question to ask whether more bits would give a better scan of a black and white negative. The answer would be "yes" if there were such a thing as a perfectly grainless negative, but the answer is "no" if the grain in the negative has a density variation that is comparable to the ADC step size.
I found some images that illustrate what I was talking about; they are in the thesis I referenced in an earlier thread.
https://uwspace.uwaterloo.ca/bitstr...d=1830F1F5528E43CDB940023C1BD664E2?sequence=1
The first image is the original image. The second image is the first image digitized with a simulated analog to digital converter. Only three bits are used in order to emphasize the effect. The third image is also digitized with a simulated three bit analog to digital converter, but first some noise is added.
As you can see, there is a terrible banding problem in the second image. The banding problem is gone in the third image. The cost is that there is noise. (Of course, if the noise came from grain in the original image then most of the noise would be there anyway.)
As I mentioned, this was done with a simulated three bit A/D converter, which exaggerates the effects. An 8 bit digitization would show more subtle effects, both with respect to banding and with respect to the amount of noise required to defeat banding.
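For anyone who wants to play with this, here is a quick sketch in Python/NumPy of the same idea: a smooth gradient quantized by a simulated three bit converter, with and without noise added first. The image size and noise level are my own choices for illustration; they are not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# A smooth horizontal gradient, values in [0, 1].
gradient = np.tile(np.linspace(0.0, 1.0, 512), (512, 1))

levels = 2**3 - 1  # 3-bit converter: 8 output codes, step size = 1/7 of full scale

# Straight quantization: the gradient collapses onto 8 flat bands.
banded = np.round(gradient * levels) / levels

# Add noise (dither) before quantizing; here sigma = half the step size.
noise = rng.normal(0.0, 0.5 / levels, size=gradient.shape)
dithered = np.round(np.clip(gradient + noise, 0.0, 1.0) * levels) / levels

# The banded version contains only 8 distinct values; the dithered version is
# noisy pixel to pixel, but its column averages track the original gradient.
print("distinct values without dither:", len(np.unique(banded)))
print("max column-mean error without dither:",
      np.abs(banded.mean(axis=0) - gradient[0]).max())
print("max column-mean error with dither:",
      np.abs(dithered.mean(axis=0) - gradient[0]).max())
```

Without the dither the output collapses onto 8 flat bands; with the dither the individual pixels are noisy, but the column averages follow the original gradient closely, which is why the banding disappears.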
I should note one other thing. I mentioned that the amount of noise needs to be comparable to the ADC step size. Actually, if the standard deviation of the noise is about half of the ADC step size it pretty well takes care of banding, and even at about a third of the step size you aren't likely to notice banding.
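Here is a small numerical check of that rule of thumb, again just my own sketch using the same simulated three bit quantizer as above: sweep a range of signal levels, dither with different amounts of noise, and see how far the average quantized output lands from the true value.

```python
import numpy as np

rng = np.random.default_rng(1)
levels = 2**3 - 1                          # same simulated 3-bit quantizer
true_values = np.linspace(0.1, 0.9, 200)   # a sweep of constant signal levels
samples = 20000                            # dithered samples averaged per level

for sigma_lsb in (0.0, 1.0 / 3.0, 0.5):
    # Gaussian dither with the stated standard deviation, in units of one step.
    noise = rng.normal(0.0, sigma_lsb / levels,
                       size=(samples, true_values.size))
    quantized = np.round(np.clip(true_values + noise, 0.0, 1.0) * levels) / levels
    worst = np.abs(quantized.mean(axis=0) - true_values).max()
    print(f"dither sigma = {sigma_lsb:.2f} step -> "
          f"worst average error = {worst * levels:.3f} step")
```

With no dither the worst average error is about half a step (that's the banding); at a third of a step it drops to a few hundredths of a step, and at half a step it is essentially gone.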
Dynamic range has been mentioned. This is not a simple cut-and-dried issue. In fact, an 8 bit ADC can represent a dynamic range far greater than 8 bits would suggest, provided there is some noise added to the signal before digitization and provided that you are looking at large swaths of the image rather than at single pixels. A key word is "dither". Oversampling also helps. This same principle is applied in audio recording. I am pretty sure that some recording systems use a 1 bit ADC, with some noise added before the ADC and with the ADC running at an ultra-high sampling rate. When that data stream is converted to a lower effective sampling rate you get much more than 1 bit of dynamic range in the digital audio signal, and the noise that was added ends up being too low to hear. Here's a link to a paper that is closely related to this concept.
https://www.sonicstudio.com/pdf/papers/1bitOverview.pdf
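To make the 1 bit idea concrete, here is a toy sketch of oversampling plus dither. It leaves out the noise shaping and feedback that a real delta-sigma converter uses, so treat it only as an illustration that sub-bit information can survive a 1 bit quantizer when you average a lot of fast samples; the signal, dither, and oversampling factor are my own choices, not from the linked paper.

```python
import numpy as np

rng = np.random.default_rng(2)

oversample = 4096                          # fast 1-bit samples per output sample
t = np.linspace(0.0, 1.0, 64)              # "slow" output time axis
signal = 0.3 * np.sin(2 * np.pi * 2 * t)   # amplitude well below full scale (+/-1)

# Hold each slow sample for `oversample` fast samples, add dither, quantize to +/-1.
fast = np.repeat(signal, oversample)
dither = rng.uniform(-1.0, 1.0, size=fast.size)
one_bit = np.where(fast + dither > 0.0, 1.0, -1.0)

# Convert back to the lower rate by averaging each block of 1-bit samples.
recovered = one_bit.reshape(-1, oversample).mean(axis=1)

# Compare with quantizing to 1 bit directly, with no dither or oversampling.
plain = np.where(signal > 0.0, 1.0, -1.0)

print("worst-case error, plain 1-bit:        ", np.abs(plain - signal).max())
print("worst-case error, dithered + averaged:", np.abs(recovered - signal).max())
```

The plain 1 bit version can be off by the full signal range, while the dithered and averaged version tracks the low-amplitude sine wave to within a few percent of full scale, i.e. much more than 1 bit of effective resolution.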
And here is what it says at Wikipedia in the article about audio bit depth: "With the proper application of dither, digital systems can reproduce signals with levels lower than their resolution would normally allow, extending the effective dynamic range beyond the limit imposed by the resolution. The use of techniques such as oversampling and noise shaping can further extend the dynamic range of sampled audio by moving quantization error out of the frequency band of interest." The same principles apply to digital processing of images.
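As an example of that last point, here is a minimal sketch of noise shaping: a first-order error-feedback quantizer, which is the one-dimensional cousin of the error-diffusion dithering used on images. Each sample's quantization error is carried forward and subtracted from the next sample, which pushes the error toward high frequencies where it is less audible (in audio) or less visible (in images). The details here are my own illustration, not from the Wikipedia article.

```python
import numpy as np

def noise_shaped_quantize(x, levels):
    """Quantize x to `levels` steps, feeding each sample's error into the next."""
    out = np.empty_like(x)
    carried_error = 0.0
    for i, value in enumerate(x):
        adjusted = value + carried_error
        q = np.clip(np.round(adjusted * levels) / levels, 0.0, 1.0)
        carried_error = adjusted - q      # error pushed into the next sample
        out[i] = q
    return out

window = 32

def local_average(a):
    """Moving average, a stand-in for how the eye (or ear) smooths fine detail."""
    return np.convolve(a, np.ones(window) / window, mode="valid")

ramp = np.linspace(0.0, 1.0, 1024)        # smooth signal in [0, 1]
plain = np.round(ramp * 7) / 7            # plain 3-bit quantization: 8 flat bands
shaped = noise_shaped_quantize(ramp, 7)   # 3-bit with first-order error feedback

print("max local-average error, plain :",
      np.abs(local_average(plain) - local_average(ramp)).max())
print("max local-average error, shaped:",
      np.abs(local_average(shaped) - local_average(ramp)).max())
```

Locally averaged, the noise-shaped output should follow the ramp much more closely than the plain three bit version, which stays stuck on its 8 flat steps.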