This has nothing to do with grain, really. The answer to this argument is simple and only requires reading the datasheet of the B&W film stock you are scanning. For example, T-Max can achieve a density in excess of 3.0 regardless of grain (see Kodak's F-4016 document). A density of 3.0 corresponds to a dynamic range of 1000:1. A perfect (i.e. noise-free, which never happens) 8-bit converter can only achieve 256:1, so it cannot recover the full range from your negatives; the problem is worst in the high values (the "almost white" parts) of the positive image, which easily saturate. OTOH, 16 bits can encode a 65536:1 dynamic range, i.e. a density up to about 4.8. Note that actual converters do not usually deliver 16 real bits: an ordinary flatbed scanner might provide, say, 10 "real" bits or a little more. That corresponds to a density of 3.0, barely sufficient to digitize the full information in your negative. Hence the importance of checking that the scanner's maximum density exceeds that of your films, and of saving to 16-bit files.
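If you want to check that arithmetic yourself, here is a quick sketch. The only facts used are that an ideal n-bit converter resolves 2^n levels and that optical density is the base-10 logarithm of the transmission ratio; nothing here is specific to any particular scanner.

```python
import math

# Maximum density an ideal (noise-free) n-bit ADC can distinguish:
# dynamic range is 2**n : 1, and density is log10 of that ratio.
for bits in (8, 10, 12, 14, 16):
    ratio = 2 ** bits
    dmax = math.log10(ratio)
    print(f"{bits:2d} bits -> {ratio:6d}:1 dynamic range -> Dmax ~ {dmax:.1f}")
```

This reproduces the numbers above: 8 bits tops out around density 2.4, 10 bits around 3.0, and 16 bits around 4.8.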
When considering these things it is important to distinguish between large-area and pixel-scale signal variations. It is the pixel-by-pixel noise that matters for this discussion. For example, it is entirely possible to cover a high dynamic range (as measured over a large image area) with a low-bit ADC, provided the density variation over a small region is large.
If you scan a featureless area of a negative, such as a blank sky, you will find quite a bit of pixel-to-pixel variation in the signal. This can come from the film itself (such as grain), from the sensor (for example, thermal noise in the electronics), or, at extremely low light levels, from shot noise. If the standard deviation of the pixel-to-pixel noise is comparable to the ADC step size, then the benefit of going to a smaller ADC step size is negligible. In fact, if the pixels are small enough then, under certain conditions, a one-bit ADC is sufficient to recover all of the information in the image; this is essentially the principle behind dithering.
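Here is a minimal simulation of that one-bit claim. It is a sketch with made-up parameters (noise level, block size), not a model of any real scanner: the noise dithers a smooth ramp back and forth across the single threshold, so the fraction of 1s in each small block of pixels encodes the underlying level, which can then be recovered by undoing the Gaussian threshold statistics.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)
sigma = 0.15                                         # pixel-to-pixel noise (assumed)

true_signal = np.linspace(0.2, 0.8, 100_000)         # smooth ramp, well inside 0..1
noisy = true_signal + rng.normal(0.0, sigma, true_signal.size)

one_bit = (noisy > 0.5).astype(float)                # one-bit ADC: a single threshold

# Fraction of 1s in each 1000-pixel block approximates Phi((s - 0.5) / sigma),
# so inverting the normal CDF recovers the local signal level s.
frac = one_bit.reshape(-1, 1000).mean(axis=1).clip(1e-4, 1 - 1e-4)
nd = NormalDist()
recovered = 0.5 + sigma * np.array([nd.inv_cdf(f) for f in frac])

expected = true_signal.reshape(-1, 1000).mean(axis=1)
print("RMS error:", np.sqrt(np.mean((recovered - expected) ** 2)))
```

The recovered ramp matches the original to within a percent or two, even though each individual pixel carried only one bit. The trade, of course, is spatial resolution for bit depth.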
All this is pretty well known among the signal processing community. I am not a member of that community, but I have been close enough to it in my professional capacity to have learned many of the core concepts.
A bit of a wild card occurs if the negative is so dense that almost no light comes through. In that case the pixel-to-pixel variation from grain in the negative itself could be smaller than the ADC step size, and you could lose information. However, consumer-grade scanners use photodiode detectors, and I am pretty sure there is enough electronic noise in those detectors that the noise level still exceeds the ADC step size. (I am prepared to be proven wrong on this by actual experimental results.) If the noise in the detector and associated electronics is comparable to the ADC step size, then there is no advantage in going to an ADC with a finer step size.
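To put rough numbers on that, here is a back-of-envelope sketch. All of the parameters (full-well capacity, read noise, density) are illustrative assumptions, not measurements of any actual scanner; the point is just the comparison between the combined noise and one ADC step.

```python
import math

full_well_e  = 40_000      # electrons at ADC full scale (assumed)
adc_bits     = 16
read_noise_e = 10          # electronic read noise, electrons RMS (assumed)
density      = 3.0         # dense negative: transmits 10**-3 of the light

step_e   = full_well_e / 2 ** adc_bits        # one LSB, in electrons
signal_e = full_well_e * 10 ** -density       # light reaching the sensor
shot_e   = math.sqrt(signal_e)                # Poisson (shot) noise
total_e  = math.hypot(shot_e, read_noise_e)   # shot + read noise in quadrature

print(f"ADC step:    {step_e:6.2f} e-")
print(f"Signal:      {signal_e:6.1f} e-")
print(f"Total noise: {total_e:6.2f} e-  ({total_e / step_e:.0f} ADC steps)")
```

With these assumed numbers the noise comes out to roughly 19 ADC steps, i.e. the quantization is nowhere near the limiting factor, which is exactly the situation described above.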
As an interesting point, drum scanners generally use photomultiplier tubes (PMTs) for detection, and the noise level in most PMTs is EXTREMELY low. For example, a Hamamatsu H11870-01 has a typical dark count rate of 15 counts/second. That is probably a lot lower than the dark count rate of the PMTs used in most drum scanners, but the point is that the dark count rate in such detectors is extremely low, and in those cases one needs to take the analysis to another level.
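Just to give a feel for the scale: using the 15 counts/second figure quoted above, and assuming (these are illustrative numbers, not drum-scanner specs) a 100-microsecond dwell time per pixel and a photon rate of about a million counts/second through a dense pixel, the dark contribution is negligible.

```python
dark_rate_cps   = 15        # dark counts/second (datasheet figure above)
dwell_s         = 100e-6    # time spent on one pixel (assumed)
signal_rate_cps = 1e6       # photon rate through a dense pixel (assumed)

print(f"dark counts/pixel:   {dark_rate_cps * dwell_s:.4f}")    # ~0.0015
print(f"signal counts/pixel: {signal_rate_cps * dwell_s:.0f}")  # ~100
```

At that point the limiting noise is the shot noise of the signal itself, not the detector, which is why the analysis has to change for PMT-based machines.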
I intend to follow this post up with a post containing some sample calculations, but those could get into math that might be distracting here, so I won't include them in this post.