When scanning a low-contrast (high dynamic range) film like HP5+, especially one developed with reduced agitation and shot in flat light, which is about the most extreme practical example I can think of, the histogram might span only 25% of the available brightness range. Assuming the user wants the final picture to cover 100% of the brightness range, simply setting black and white points linearly already discards 75% of the available tonal levels, i.e. two bits of depth. That's the equivalent of 8 bits going to 6 bits, 12 bits going to 10 bits, or 16 bits going to 14 bits.
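To put rough numbers on that, here is a quick numpy sketch. The 96-159 input range is just an assumed example of a histogram that spans a quarter of the 8-bit scale:

```python
import numpy as np

# A hypothetical flat 8-bit scan whose histogram spans only 25% of the
# range: pixel values fall between 96 and 159 (64 of 256 possible levels).
rng = np.random.default_rng(0)
scan = rng.integers(96, 160, size=(1000, 1000), dtype=np.uint8)

# Linear black/white point stretch to cover the full 0-255 range.
stretched = ((scan.astype(np.float64) - 96) / 63 * 255).round().astype(np.uint8)

# Only 64 distinct input levels exist, so the stretched image still has
# at most 64 distinct values, the equivalent of a 6-bit image.
print(len(np.unique(scan)), "levels in the scan")            # 64
print(len(np.unique(stretched)), "levels after the stretch") # 64
print(np.log2(len(np.unique(stretched))), "effective bits")  # 6.0
```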
And that's only the linear shift in the brightness range. Many photographers, after this first linear adjustment, add an S-shaped contrast curve, and depending on how mild or extreme that curve is, you can easily lose the equivalent of another bit or two of depth.
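Continuing the same sketch, here is what an illustrative S-curve does when applied in 8-bit precision. The logistic shape and the steepness value k are purely assumptions for the example; the point is that the curve does not create new levels, it pushes the remaining midtone levels further apart:

```python
import numpy as np

# Same hypothetical flat scan as above, after the linear stretch.
rng = np.random.default_rng(0)
scan = rng.integers(96, 160, size=(1000, 1000), dtype=np.uint8)
stretched = ((scan.astype(np.float64) - 96) / 63 * 255).round().astype(np.uint8)

# Illustrative logistic S-curve, applied in 8-bit; k is an assumed steepness.
k = 8.0
x = stretched / 255.0
lo, hi = 1 / (1 + np.exp(k * 0.5)), 1 / (1 + np.exp(-k * 0.5))
curved = (255 * (1 / (1 + np.exp(-k * (x - 0.5))) - lo) / (hi - lo)).round().astype(np.uint8)

# The midtone spacing roughly doubles, so locally the file behaves as if
# it had about one bit less depth than after the stretch alone.
print(np.diff(np.unique(stretched)).max(), "codes between adjacent levels after the stretch")
print(np.diff(np.unique(curved)).max(), "codes between adjacent levels after the S-curve")
```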
If you start out with an 8-bit image and end up with what is effectively a 5-bit one, you have 32 shades of gray, few enough that the eye can readily distinguish the steps between them. If you're arguing that dithering can mitigate that, sure, but dithering also brings a reduction in detail. Resolution matters as well, since the detail penalty of dithering is much more noticeable at lower resolutions.
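As a rough illustration of how coarse 32 shades is, here is a smooth ramp quantized to 5 bits (the ramp and the step arithmetic are only for illustration):

```python
import numpy as np

# A smooth gradient quantized to 5 bits (32 levels), as if the editing
# chain had thrown away 3 of the original 8 bits.
ramp = np.linspace(0, 255, 2048)
five_bit = (np.round(ramp / 255 * 31) * (255 / 31)).round().astype(np.uint8)

steps = np.unique(five_bit)
print(len(steps), "distinct grays")            # 32
print(np.diff(steps).max(), "codes per step")  # 8-9 codes out of 256, roughly a 3% jump
# A roughly 3% brightness jump across a smooth area is an easily visible band.
```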
8 bits of gray is great for viewing images, but there is a reason people add an extra byte per channel when editing, particularly when the brightness range is going to be expanded.
You raise some good points about what can happen when different transformations are applied during image processing. However, consider also the following points.
To be clear, whenever I refer to noise in this discussion I mean grain plus other sources of noise, such as electronic noise.
The first is whether noise is still present to a significant degree in the transformed image. If it is, the dithering remains effective and will prevent banding. If it is not, the dithering has lost its effectiveness and banding can occur. Are there examples in pictorial photography with scanned images where this is actually observed in practice?
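As a rough sketch of what I mean, using synthetic data and an assumed grain level of about 2 codes standard deviation: when grain is present at capture it acts as dither, so after the contrast stretch the local averages (which is roughly what the eye does over an area) track the original gradient, while the noiseless version keeps its hard band edges.

```python
import numpy as np

rng = np.random.default_rng(1)

# A flat-lit gradient occupying only 96-159 on the 8-bit scale, "scanned"
# once without noise and once with grain-like noise (sigma = 2 codes,
# an assumed but plausible grain level).
ramp = np.tile(np.linspace(96.0, 159.0, 2048), (2048, 1))
clean_scan = ramp.round().astype(np.uint8)
grainy_scan = np.clip(ramp + rng.normal(0, 2.0, ramp.shape), 0, 255).round().astype(np.uint8)

def stretch(img8):
    # Linear black/white point move to fill the whole 0-255 range.
    return np.clip((img8.astype(np.float64) - 96) / 63 * 255, 0, 255).round().astype(np.uint8)

ideal = (ramp[0] - 96) / 63 * 255          # what a perfect result would look like
for name, scan in (("no noise  ", clean_scan), ("with grain", grainy_scan)):
    col_means = stretch(scan).mean(axis=0)  # local average down each column
    print(name,
          "mean error %.2f codes," % np.abs(col_means - ideal).mean(),
          "largest jump between neighbouring columns %.2f codes" % np.abs(np.diff(col_means)).max())
# The grainy version tracks the ideal gradient far more closely and has no
# hard band edges; the noiseless version shows the 4-5 code staircase.
```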
The second is that one can mitigate the effects you describe by converting the 8-bit scanned image to a 16-bit image before any image processing is performed. This preserves the dithering effect.
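Here is a minimal illustration of why the 16-bit working copy matters. The synthetic image and the particular edit (a two-stop pull followed by a push back up) are only assumptions; the point is that 8-bit intermediate rounding alters most pixel values and coarsens the fine pixel-to-pixel variation that does the dithering, while the 16-bit route leaves it essentially untouched.

```python
import numpy as np

rng = np.random.default_rng(2)
# Stand-in 8-bit scan: midtones plus grain-like noise (values assumed).
scan8 = np.clip(rng.normal(128, 12, (1000, 1000)).round(), 0, 255).astype(np.uint8)

# Round-trip edit done entirely in 8 bits: the intermediate rounding
# collapses fine tonal differences and most pixels end up changed.
down8 = np.clip(scan8.astype(np.float64) * 0.25, 0, 255).round()
up8 = np.clip(down8 * 4, 0, 255).round().astype(np.uint8)

# Promote to 16 bits first and the same edit is essentially lossless.
scan16 = scan8.astype(np.uint16) * 257                       # 0-255 -> 0-65535
down16 = np.clip(scan16.astype(np.float64) * 0.25, 0, 65535).round()
up16 = np.clip(down16 * 4, 0, 65535).round()
back8 = (up16 / 257.0).round().astype(np.uint8)

print("pixels altered, 8-bit working copy:  %.0f%%" % (100 * np.mean(up8 != scan8)))
print("pixels altered, 16-bit working copy: %.0f%%" % (100 * np.mean(back8 != scan8)))
```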
One can, of course, question whether it makes sense to scan in 8 bits if one is going to convert to 16 bits before doing image processing. There are several related answers to this question. One is that if there is noise in the scanned image (including grain), then it is a waste of storage space to scan in 16 bits, because there is nothing to be gained by doing so.
The second is that in some cases the scanner may be limited to 8 bits. I gave an example in an earlier post.
The third is that a more complete description of a workflow could look something like this: scan in 8-bit, convert to 16-bit for image processing, and finally (optionally) convert back to 8-bit for the final image to be printed. Only the initial 8-bit scan and the 8-bit final image would be stored permanently. Because the original 8-bit scan is kept, one has the option of recreating the image processing chain later on, or creating a new one, without loss of quality or wasted storage space. And since the human eye can't discern intensity variations finer than 8 bits, 8-bit format is all that is needed for storing the final image.
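A sketch of that workflow, using Pillow and numpy. The filenames are placeholders and the two adjustments stand in for whatever edits one actually does; only the 8-bit files touch the disk.

```python
import numpy as np
from PIL import Image  # Pillow

# Hypothetical 8-bit grayscale scan, stored permanently.
scan8 = np.asarray(Image.open("scan_8bit.tif").convert("L"))

# Work in 16-bit (uint16 here; float would do as well) so intermediate
# rounding does not eat into the tonal information of the 8-bit original.
work = scan8.astype(np.uint16) * 257                 # 0-255 -> 0-65535

lo, hi = np.percentile(work, 0.1), np.percentile(work, 99.9)
work = np.clip((work - lo) / (hi - lo), 0, 1)        # black/white points
work = 3 * work**2 - 2 * work**3                     # mild S-curve (smoothstep)

# Only the final 8-bit image goes back to disk alongside the original scan;
# the 16-bit working copy is disposable and can be recreated at any time.
final8 = (work * 255).round().astype(np.uint8)
Image.fromarray(final8).save("final_8bit.tif")
```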
In terms of dithering, I am not necessarily saying that dithering should be added. I am saying that dithering is usually, or even always, automatically present in a scanned image because of film grain and other forms of noise, so additional dithering is not necessary. There might be exceptions to this. For example, if one is using a super-duper-ultra-fine-grained film there might not be enough film grain in the scan to provide a built-in dithering function. Are there any such films in common use? I don't think even T-Max 100, Delta 100, or Acros 100 would qualify, because although they are fine-grained films, there is still some grain evident in the scans.
You also bring up the resolution of the scan, and that is a valid point, one which I did not raise in this thread, though I believe I have discussed it in past posts in other threads. To preserve grain one should always scan at a resolution sufficient to resolve it. (I think it's always a good idea to scan at the highest resolution the scanner allows anyway.) Also, there are two issues related to scan resolution: one is the dpi of the scanner and the other is its optical resolution. I won't go deeper into the interaction between those two concepts in this post, except to say that in principle either one could be limiting, and whichever is the limiting factor will determine how much grain is suppressed by low resolution.
It's also worth keeping in mind that the image in a film is essentially a one-bit image. In other words, if you could scan the film at a resolution of a bazillion, the scan could be stored in 1-bit words, equivalent to "silver present" and "silver absent". You began discussing this in another post, so I know you understand the concept, but I am emphasizing it in case some people have not thought the issue through. When scanning at lower resolution one is essentially lumping what would be a 1-bit, super-duper-ultra-high-resolution image into a lower-resolution image that takes more bits per pixel to record. (For the super-scientific types who might read this, I am ignoring the wave nature of light in this discussion.)
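Here is a small sketch of that lumping, using a synthetic 1-bit "grain map" whose local density follows a gradient (all the particular numbers are assumptions): averaging blocks of 1-bit grain positions into pixels produces values that need more and more bits per pixel as the blocks get bigger.

```python
import numpy as np

rng = np.random.default_rng(3)

# Pretend film: a 1-bit "silver present / silver absent" map whose local
# density follows a smooth gradient (a stand-in for image tone).
density = np.tile(np.linspace(0.1, 0.9, 2048), (2048, 1))
grains = rng.random(density.shape) < density         # boolean, i.e. 1-bit

def downscan(bits, block):
    # "Scanning" at lower resolution = averaging blocks of grains into pixels.
    h, w = bits.shape
    return bits.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

for block in (4, 16, 64):
    pixel = downscan(grains, block)
    possible = block * block + 1                      # 0..block^2 grains per pixel
    print("block %2dx%-2d: %4d distinct pixel values seen, up to %4d possible (~%.1f bits)"
          % (block, block, len(np.unique(pixel)), possible, np.log2(possible)))
```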
Anyway, storage space aside, I am not saying there is something wrong with scanning at 16 bits, but I am saying there is nothing to be gained by it if 8 bits is sufficient to capture the noise level (including film grain) of the image.