For color or black and white?
EDIT: For color, the debayering process smears quite a bit of fine detail, so there really isn't such a thing as grain aliasing with a CFA sensor. It's essentially the same as taking the original picture with a DSLR: you have a CFA image that needs some post processing to get an RGB image. The difference is you effectively have the world's most awesome film simulation, because you're actually shooting it on film. In that case, more resolution absolutely is better, though I'd rather have a higher resolution CFA sensor and get more res in a single capture than go the stitch route.

For my scan process, all my code does the inversion, linearization, and color space conform on the raw Bayer samples, and what gets fed into LR is a linear floating point Bayer array. All of the spatial detail that made it onto the sensor from the film is still there and intact. You'd be amazed at how much spatial detail gets eaten up if you debayer a scan first and then do all your post processing to get a color positive, versus doing all your post on the raw Bayer array and then feeding the color positive Bayer array to LR to debayer. You also get a lot less color pollution between the channels, because your negative's red, green, and blue values aren't being interpolated into adjacent pixels to make an RGB image before you try to invert, remove the color mask, and adjust the contrast of each channel. It's way better to do all of that before you debayer.
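To make the order of operations concrete, here's a minimal sketch of the "process first, debayer last" idea. This is not my actual code; the function name, the simple linear inversion, and the 2x2 gain pattern are all illustrative assumptions. The point is just that every step operates on individual sensels, so no spatial information is mixed between neighbors:

```python
import numpy as np

def process_negative_bayer(raw, black_level, white_level, channel_gains):
    """Illustrative sketch: linearize, balance, and invert a color
    negative directly on the raw Bayer mosaic, before any demosaicing.

    raw           -- 2D uint16 Bayer mosaic straight from the camera
    channel_gains -- hypothetical 2x2 per-site gains (e.g. RGGB) used
                     to balance the light source / color mask
    """
    # Linearize camera counts to 0..1 float
    lin = (raw.astype(np.float32) - black_level) / (white_level - black_level)
    lin = np.clip(lin, 0.0, 1.0)

    # Apply per-channel gains by tiling the 2x2 CFA pattern over the frame
    h, w = lin.shape
    lin *= np.tile(channel_gains, (h // 2, w // 2))

    # Crude inversion for illustration: each sensel is inverted
    # independently, so no detail is interpolated across pixels
    positive = 1.0 - np.clip(lin, 0.0, 1.0)
    return positive  # still a Bayer mosaic; hand this off to be debayered
```

The real inversion and color space conform would be more involved, but the structural point survives: the array stays mosaiced until the very last step.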
For black and white, since I'm taking an image of something that has no color information, the only color information hitting the Bayer array is from my light source. I white balance that out on the raw Bayer samples and treat the Bayer array directly as a monochrome image. In that instance, I really am sampling 6000+ samples across the 36mm width and 4000+ samples across the 24mm height of a 135 frame, because there is absolutely no debayering going on. What gets fed into LR is a linear floating point monochrome image with all of the spatial information that hit the Bayer sensor intact.
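That B&W path can be sketched in a few lines. Again, the function and the 2x2 gain layout are my illustrative assumptions, not the author's actual code; the idea is that once the light source is balanced out per-site, every sensel is a valid luminance sample and the mosaic is the final image:

```python
import numpy as np

def bayer_to_mono(raw, wb_gains):
    """Illustrative sketch: white-balance the light source out of a
    Bayer capture of B&W film and treat the mosaic as a
    full-resolution monochrome image.

    wb_gains -- hypothetical 2x2 array of per-site gains (e.g. RGGB)
                derived from a shot of the bare light source
    """
    lin = raw.astype(np.float32)
    h, w = lin.shape
    mono = lin * np.tile(wb_gains, (h // 2, w // 2))
    # No demosaic step: a 6000x4000 sensor yields 6000x4000 real
    # spatial samples of the (colorless) film.
    return mono
```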
When people scan their black and white film with a DSLR and marvel at the huge image quality boost over a flatbed, they're seeing an image that has been white balanced and debayered. 50% of a Bayer array's sensels are dedicated to red and blue, so when you white balance and then debayer, you're polluting 50% of your captured spatial information through interpolation to arrive at RGB values for an image that doesn't actually have RGB information in it. Stop and noodle on that for a minute.... And people are declaring the minimum resolution they're willing to accept based on that, and coming up with ever more complex ways to do the initial physical scan to try to get more resolution?
Umm... OK... if that's really the route you want to go, then by all means, don't let me stop you. How you handle the raw sample data makes a really big difference in the output. I'd rather have a reasonably high resolution source and optimize how I handle that before making any judgment calls about whether to go get more raw sensor resolution, or do other things like scan and stitch.
The same goes for the dynamic range of the film. The film is effectively gamma encoded, and its actual density range fits well inside what a modern DSLR can capture in a single exposure. How you reverse that gamma to get back to linear makes a big difference.
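One way to see why the gamma reversal matters: film density is logarithmic, and the camera records transmitted light linearly. Assuming a toy characteristic curve D = gamma * log10(E) + base (a deliberate simplification; real film curves have a toe and shoulder, and gamma and base here are made-up parameters), transmittance T = 10^-D, so scene exposure recovers as E ∝ (10^-base / T)^(1/gamma):

```python
import numpy as np

def negative_to_linear_positive(transmittance, gamma=0.6, base=0.1):
    """Illustrative sketch: undo a negative's gamma encoding.

    Assumes the simplified curve D = gamma * log10(E) + base and a
    camera that records transmittance T = 10**(-D) linearly.
    gamma and base are hypothetical example values, not measured ones.
    """
    t = np.clip(transmittance, 1e-6, 1.0)
    # Solve T = 10**(-base) * E**(-gamma) for E
    exposure = (10.0 ** (-base) / t) ** (1.0 / gamma)
    return exposure / exposure.max()  # normalize to 0..1
```

Because the exponent 1/gamma is well above 1, small errors in how you estimate the curve get amplified in the linear result, which is why the reversal step matters so much.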