I found a website that demonstrates what I have been talking about in a slightly different way. It relates to how scanning with a low bit depth (e.g. 8 bit) can still be just fine and won't result in banding if there is some noise (including grain) in the image.
Here's the link.
https://homes.psd.uchicago.edu/~ejmartin/pix/20d/tests/noise/noise-p3.html
First, a little bit of explanation or background info. In my most recent post I presented the concept in the form of a graph. (For presentation purposes the noise was removed from the graphs so that the underlying gradient or stair-step structure would show clearly, without the noise cluttering the figure.) The link above presents the same concept in a more visual form.
The context of the link is slightly different. The author of that web page presents it in the context of how noise, such as sensor noise, affects an image captured by a digital camera, whereas my discussion is presented in terms of how grain affects a scanned image. However, the underlying principles are the same.
Here's a copy of the first image from the link. It is a smooth gradient artificially generated in photoshop in 8 bit mode. Note that the gradient appears smooth and noiseless.
The histogram of the image is shown in the inset. Note that the histogram is not perfectly smooth. This is probably because photoshop put in a tiny bit of noise when it generated the gradient so that banding won't show up if extreme image manipulation is performed later. For our purposes we can ignore that small detail and assume that the histogram is perfectly smooth.
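For anyone who wants to play along at home, here is a minimal Python sketch (my own illustration, not the procedure used on the linked page) that builds a smooth 8 bit gradient and plots it together with its histogram. The image size and the use of numpy/matplotlib are assumptions made just for the example.

```python
import numpy as np
import matplotlib.pyplot as plt

# Build a smooth horizontal gradient, 0..255, as an 8 bit image.
# The 600x200 size is arbitrary (chosen only for illustration).
width, height = 600, 200
gradient = np.tile(np.linspace(0, 255, width), (height, 1)).astype(np.uint8)

# Show the image and its histogram (analogous to the inset on the linked page).
fig, (ax_img, ax_hist) = plt.subplots(2, 1, figsize=(6, 4))
ax_img.imshow(gradient, cmap="gray", vmin=0, vmax=255)
ax_img.set_axis_off()
ax_hist.hist(gradient.ravel(), bins=256, range=(0, 255))
ax_hist.set_xlabel("pixel value")
plt.show()
```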
Here's the same image with some noise added. For comparison with my graph, the amount of noise is equivalent to sigma=1.46. I didn't include a trace in my graph for a sigma that large (I only went up to sigma=0.5), so this amount of noise is more than I simulated for that graph. However, it's comparable to the sigma I extracted from a tmax scan discussed in an earlier post.
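The noisy version is easy to mimic in code by adding Gaussian noise before any quantization. In the sketch below the sigma is given in 8 bit counts; on my reading of the convention, a sigma quoted relative to a quantization step converts to counts by multiplying by the step size, but that conversion is my assumption, not something taken from the linked page.

```python
import numpy as np

rng = np.random.default_rng(0)

# Smooth 8 bit gradient (same idea as the previous sketch).
gradient = np.tile(np.linspace(0, 255, 600), (200, 1)).astype(np.uint8)

def add_noise(img_u8, sigma_counts):
    """Add zero-mean Gaussian noise (sigma in 8 bit counts) and clip back to 0..255."""
    noisy = img_u8.astype(float) + rng.normal(0.0, sigma_counts, img_u8.shape)
    return np.clip(np.round(noisy), 0, 255).astype(np.uint8)

# Example: if sigma=1.46 is taken to be relative to a 5 bit step (8 counts),
# that would be roughly 1.46 * 8 = 11.7 counts on the 8 bit scale (my reading).
noisy_gradient = add_noise(gradient, sigma_counts=1.46 * 8)
```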
The next image is the same as the last one except that the bit depth has been decreased from 8 bits to 5 bits. This was done by truncating the three lower-order bits. This is not quite the same as rounding to the nearest step, but for our purposes we can ignore that subtle issue.
The two images are, for all practical purposes, indistinguishable, in accord with the point I have been making that if grain is visible in an 8 bit scan then there is virtually nothing to be gained by going to a 16 bit scan.
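For what it's worth, the truncation step just described is trivial to reproduce in code. This sketch zeroes the three lower-order bits, and also includes the rounding variant mentioned above for comparison; it assumes an 8 bit numpy array as input.

```python
import numpy as np

def truncate_to_bits(img_u8, bits):
    """Reduce bit depth by zeroing the lower-order bits (truncation, not rounding)."""
    drop = 8 - bits
    return (img_u8 >> drop) << drop

def round_to_bits(img_u8, bits):
    """The rounding variant mentioned above, for comparison."""
    step = 1 << (8 - bits)
    return np.clip(np.round(img_u8.astype(float) / step) * step, 0, 255).astype(np.uint8)

# Example with a fresh gradient; in practice this would be applied to the noisy image.
gradient = np.tile(np.linspace(0, 255, 600), (200, 1)).astype(np.uint8)
gradient_5bit = truncate_to_bits(gradient, 5)
```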
The next image is a composite one showing what happens when the step size is increased, i.e. the digitization bit depth is decreased.
The top sub-image shows what happens when the smooth noiseless 8 bit gradient is converted to a 5 bit gradient by truncation of the lower three bits. This is similar to what would happen if you digitized a perfect gradient using a 5 bit word. Banding is strong. This is basically a repeat of information shown earlier in this post.
The next sub-image down shows what happens when noise is added before the conversion to a 5 bit word length. This is similar to what would happen if the noisy gradient were digitized with a 5 bit word, and it is essentially a repeat of what was shown earlier in the post.
The next sub-image shows what happens if the perfect gradient is digitized with a 3 bit word. There are fewer bands, but they are more distinct than the bands produced by the 5 bit digitization.
The next sub-image shows what happens to the noisy version of the gradient after it is digitized with a 3 bit word. In this case the standard deviation of the noise is equivalent to 0.32 relative to the step size of the 3 bit word, which is pretty close to one of the traces in the figure I posted in my previous post (sigma=0.3). Banding is still pretty much negligible, meaning that it is not evident to a casual observer. It's unlikely that it would even be noticed by a careful observer unless it were compared to a better result in a direct A/B comparison, or unless one already knew what to look for. A careful comparison reveals that the noise is a little higher in the parts of the gradient where step transitions would occur when digitizing the perfect gradient, and a bit lower in between those regions. My impression is that the noise might be a tiny bit higher overall, but I couldn't say for sure.
Finally, we see the results of digitization with a 2 bit word. Banding is very strong and broad in the second-to-last sub-image, which was derived from the noiseless gradient. Banding is also strong in the last sub-image, which was digitized from the noisy gradient. However, the transitions between the bands are softened into fairly steep, noisy, and fairly narrow gradient regions. This one is comparable to sigma=0.14, which is intermediate between two of the traces in the figure in my last post.
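The whole composite is easy to reproduce from the same ingredients. The sketch below is self-contained (it regenerates the gradient and the noise rather than reusing the earlier snippets), and the particular sigma it uses is only a stand-in, not the value used on the linked page.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
width, height = 600, 80

# Smooth 8 bit gradient and a noisy copy (sigma in 8 bit counts; value is illustrative).
clean = np.tile(np.linspace(0, 255, width), (height, 1))
noisy = clean + rng.normal(0.0, 10.0, clean.shape)
clean_u8 = clean.astype(np.uint8)
noisy_u8 = np.clip(np.round(noisy), 0, 255).astype(np.uint8)

def truncate_to_bits(img_u8, bits):
    drop = 8 - bits
    return (img_u8 >> drop) << drop

# Build the composite: for each bit depth, the quantized clean gradient followed
# by the quantized noisy gradient, mirroring the sub-images described above.
panels, labels = [], []
for bits in (5, 3, 2):
    panels += [truncate_to_bits(clean_u8, bits), truncate_to_bits(noisy_u8, bits)]
    labels += [f"{bits} bit, no noise", f"{bits} bit, noisy"]

fig, axes = plt.subplots(len(panels), 1, figsize=(6, 6))
for ax, panel, label in zip(axes, panels, labels):
    ax.imshow(panel, cmap="gray", vmin=0, vmax=255)
    ax.set_ylabel(label, rotation=0, ha="right", va="center", fontsize=8)
    ax.set_xticks([]); ax.set_yticks([])
plt.show()
```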
Anyway, hopefully this gives everyone a better intuitive feel for how noise (or grain) can provide a dithering function that suppresses banding. Please note that when scanning film it is not necessary to add noise to accomplish dithering. In most cases the noise will already be there through the combination of film grain, sensor noise, and electronic noise.
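One way to see the dithering effect numerically (my own check, not something from the linked page) is to average the quantized gradients down their columns: with noise present before quantization, the column means track the original smooth ramp to a fraction of a step, whereas the quantized noiseless gradient can only ever produce the stair-step values.

```python
import numpy as np

rng = np.random.default_rng(0)
width, height = 600, 2000

# A smooth ramp and a noisy copy (kept as floats; the sigma is illustrative).
ramp = np.tile(np.linspace(20, 235, width), (height, 1))
noisy = ramp + rng.normal(0.0, 10.0, ramp.shape)

step = 8  # 5 bit quantization step expressed in 8 bit counts
quant_clean = np.round(ramp / step) * step    # stair-step, no dithering
quant_noisy = np.round(noisy / step) * step   # noise dithers the quantizer

# Column means: with dithering noise the average stays much closer to the original
# ramp; without noise the error reaches half a step midway between levels.
err_noisy = np.abs(quant_noisy.mean(axis=0) - ramp[0]).max()
err_clean = np.abs(quant_clean.mean(axis=0) - ramp[0]).max()
print(f"max column-mean error with noise:    {err_noisy:.2f} counts")
print(f"max column-mean error without noise: {err_clean:.2f} counts")
```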
The bottom line is that in most cases (practically all cases, except perhaps for rare instances that are not typical, such as the scanning of images derived from microfilm processed to produce a pictorial-quality result) 8 bit scanning will be sufficient to capture all of the significant information present in the image without the risk of banding. There is nothing wrong with scanning in 16 bits, except for doubling the file size, but for practical purposes there is nothing to be gained by it either.
By the way, earlier in this thread someone mentioned that a 16 bit scan might not be twice as big as an 8 bit scan if the TIFF files are compressed using lossless compression. I tested this idea by scanning the same image in 8 bit and 16 bit mode, with and without compression (zip format for the compressed versions). The 8 bit compressed TIFF was, at 12.0 megabytes, a little smaller than the uncompressed TIFF at 18.7 megabytes. However, the 16 bit scan actually expanded from 37.4 megabytes to 43.2 megabytes upon "compression".
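If anyone wants to repeat the file-size comparison without a scanner, something like the following sketch works on any synthetic 8 bit / 16 bit grayscale array. It uses the tifffile package and its 'zlib' (deflate/zip) compression option, which is my choice and not necessarily what the scanner software uses, and the resulting sizes will of course depend on how noisy the image content is.

```python
import os
import numpy as np
import tifffile  # assumes a reasonably recent tifffile with 'zlib' compression support

rng = np.random.default_rng(0)

# A noisy gradient as a stand-in for a scan; real scans will compress differently.
base = np.tile(np.linspace(0, 1, 3000), (2000, 1)) + rng.normal(0, 0.02, (2000, 3000))
img8 = np.clip(base * 255, 0, 255).astype(np.uint8)
img16 = np.clip(base * 65535, 0, 65535).astype(np.uint16)

for name, arr in [("img8", img8), ("img16", img16)]:
    for comp, kwargs in [("raw", {}), ("zip", {"compression": "zlib"})]:
        path = f"{name}_{comp}.tif"
        tifffile.imwrite(path, arr, **kwargs)
        print(f"{path}: {os.path.getsize(path) / 1e6:.1f} MB")
```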
A while back I issued a kind of challenge. I don't mean "challenge" in a confrontational sense, but in the spirit of exploration. The challenge was to follow a certain workflow that starts with an 8 bit scan and see if banding can be produced. The workflow is this, followed in order: scan in 8 bit mode; read the file into an image processing program, such as photoshop; convert it to a 16 bit image; perform extreme image manipulation; look for banding. I am still hoping someone will take up the challenge. The results could be very interesting.
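For anyone who would rather try the challenge programmatically instead of in photoshop, here is a hedged sketch of the same workflow: read an 8 bit scan, promote it to 16 bits, apply an extreme tone adjustment, and inspect the result and its histogram for the comb-like gaps that signal banding. The file name and the particular stretch are placeholders, not part of the original challenge.

```python
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

# 1. "Scan in 8 bit mode" -- here we simply load an existing 8 bit grayscale scan.
#    "scan_8bit.tif" is a placeholder file name.
img8 = np.array(Image.open("scan_8bit.tif").convert("L"))

# 2. Convert to a 16 bit image (scale so 255 maps to 65535).
img16 = img8.astype(np.uint16) * 257

# 3. Perform an extreme manipulation: stretch a narrow range of tones across the
#    full scale (an arbitrary stand-in for "extreme image manipulation").
lo, hi = np.percentile(img16, [45, 55])
stretched = np.clip((img16.astype(float) - lo) / (hi - lo), 0, 1) * 65535

# 4. Look for banding: visually, and via gaps/spikes in the histogram.
fig, (ax_img, ax_hist) = plt.subplots(1, 2, figsize=(10, 4))
ax_img.imshow(stretched, cmap="gray")
ax_img.set_axis_off()
ax_hist.hist(stretched.ravel(), bins=512)
ax_hist.set_title("histogram after extreme stretch")
plt.show()
```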