I'm a drum scanner operator. I scan my own 5x4 Tri-X to higher resolutions than Clyde - I work mostly with files in the 500+MB range, 16 bit grayscale, or 1.5GB 16 bit RGB if I'm using, say, 5x4 160PortraVC.
There are really two questions here. One is how much data is there, and one is how much information is there. Data and information are not the same thing, and are not interchangeable.
What makes this hard to understand is the stochastic (think random) distribution of film grains of stochastic size in the film (this is the data), vs. the deterministic, fixed grid and fixed grid size of the scanner.
Basically, you can't image the film grain. It's just not physically possible with today's scanners. What you get instead is what I think of as digital grain. The scanner looks through the hole in its grid and takes a sample - it measures the average value it finds there and assigns that value to this new pixel. This pixel doesn't look like film grain in any way - it's perfectly uniform and perfectly square.
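To make the averaging concrete, here's a toy sketch (my own illustration, not anyone's actual scanner firmware): random grain clumps of random size stand in for the film, and a fixed grid of square apertures stands in for the scanner. Each output pixel is just the mean of whatever fell under one aperture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "film": grain clumps at stochastic positions, of stochastic size.
film = np.zeros((512, 512))
for _ in range(4000):
    y, x = rng.integers(0, 512, size=2)
    r = int(rng.integers(1, 4))  # random grain radius
    film[max(0, y - r):y + r, max(0, x - r):x + r] = 1.0

# Toy "scanner": a fixed grid of square apertures. Each output pixel is
# the average of everything under one aperture - perfectly uniform,
# perfectly square, nothing like the grain it sampled.
cell = 8  # aperture size in film-sample units
scan = film.reshape(512 // cell, cell, 512 // cell, cell).mean(axis=(1, 3))

print(scan.shape)  # (64, 64)
```

The output values are smooth averages between 0 and 1 - the "digital grain" - even though the input was hard-edged random clumps.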
Since you can't image the film grain, what are you really doing anyway? You are trying to get as much image information from the film as you can. Basically, the fine detail formed by tonal transitions. Film records this information using lots of film grain (the data), which is why scanning works at all. In other words, there's lots of data underlying the information - there's not a one-to-one correlation.
Given that background (simplified such as it is) there are three main schools of thought on scanning. First is to scan at about half the average grain size (say, 2000 ppi), which allows you to get the image information below the grain size. Second is to scan at about the average grain size (say, 4000 ppi) to make sure you capture substantially all of the image detail. Third is to scan at approximately twice the average grain size (say, 8000 ppi) and downsample so that you absolutely do get all the image detail you can.
For large format scanning, much of this discussion becomes a moot point as computers, OSes, and image editor programs are still largely 32 bit systems, and poorly implemented at that (remember when Bill Gates said that we'd never see a program as big as 640K? Well, he didn't get any better at predicting ;-). The practical limit for a program like Photoshop today is about 1.5GB, and every operation on a file this size is terribly slow even on the fastest hardware available. I'm talking go-walk-the-dog slow.
This lets out option three right off - can't handle the file size. You can scan under option two, as long as you stick with 5x4 film. Anything larger than 5x4 and you scan under option one. Just the way it is - it's not a choice we get to make.
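The arithmetic behind that is simple enough to sketch (assuming uncompressed 16 bit grayscale, i.e. 2 bytes per pixel, and the 2000/4000/8000 ppi figures from above):

```python
# Uncompressed file size for a 16 bit grayscale scan.
def scan_bytes(width_in, height_in, ppi, bytes_per_px=2):
    return width_in * ppi * height_in * ppi * bytes_per_px

GB = 1024 ** 3
for ppi in (2000, 4000, 8000):
    size = scan_bytes(4, 5, ppi)
    print(f"5x4 @ {ppi} ppi: {size / GB:.2f} GB")
```

5x4 at 4000 ppi lands around 0.6 GB - workable, and consistent with the 500+MB files mentioned above - while 8000 ppi comes out near 2.4 GB, well past the ~1.5GB practical ceiling. And 10x8 at 4000 ppi is already about 2.4 GB itself, which is why anything larger than 5x4 pushes you down to option one.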
My testing has convinced me that you get just about everything there is to get by scanning near average film grain size. So I'm thinking Clyde is being mighty conservative in his statements. We are a very long way from being able to replicate with digital capture what he can do with 10x8. And just forget about what he can do with a ULF camera. My prediction is that no one will even attempt that with digital capture. Not enough market size to warrant it.