Generally there is one channel that gives the sharpest image. The preferred channel might depend on the brand of scanner. As I recall, for my Canon FS4000US scanner it is the green channel.
On the question of 8 bit vs. 16 bit, there has been a lot of discussion. The best thing to do is to test it on a bunch of your own images and see if you can tell a difference.
The concern about using an 8 bit scanner seems to mostly revolve around the issue of banding in the image, especially if you do some extreme image manipulation after the scan. This is a very legitimate concern for a pure, grainless image. If there is any graininess in the image (or other sources of noise in the system, such as detector noise), then the advantage of scanning in 16 bit is less clear cut. Let me state this a little more strongly: if the noise in the image (i.e. film grain plus any other noise that ends up in the scanned image, such as sensor noise) is about the same as the ADC step size of the digitizer, then 8 bit vs. 16 bit becomes a non-issue. You won't see banding or related artifacts in the final image because they are smoothed over by the noise.

This concept is well understood in the field of digital signal processing, where it goes by the name of dithering. In some signal processing systems noise is deliberately added in order to wash out the effect of the ADC step size. If the signal is an image, then the noise washes out the potential banding. Note that this only works well if the noise is added BEFORE the ADC process takes place. That condition is satisfied in a scanner if the noise comes from film grain, sensor noise, or electronic noise (such as thermal noise) upstream of the ADC.
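To make that concrete, here is a minimal numerical sketch (Python/NumPy, with made-up numbers, not a measurement of any particular scanner): a smooth tonal ramp is digitized by an 8 bit quantizer, once with no noise and once with Gaussian "grain" noise of about one ADC step added first. The clean case collapses into a handful of hard bands; in the noisy case the local averages track the original ramp to well within one step.

```python
import numpy as np

rng = np.random.default_rng(0)

# A smooth, very gradual tonal ramp (think of a clear sky), scaled 0..1.
signal = np.linspace(0.40, 0.44, 100_000)

step = 1.0 / 255.0          # step size of an 8 bit ADC on a 0..1 signal

def quantize(x):
    """Round to the nearest ADC level (uniform quantizer)."""
    return np.clip(np.round(x / step) * step, 0.0, 1.0)

q_clean = quantize(signal)                                         # grainless scan
q_noisy = quantize(signal + rng.normal(0.0, step, signal.shape))   # grainy scan

print("distinct levels, no noise:  ", len(np.unique(q_clean)))     # a few hard bands
print("distinct levels, with noise:", len(np.unique(q_noisy)))

# Average the noisy result over small neighborhoods: the local means follow
# the original ramp smoothly, i.e. the noise has dithered the banding away.
block = 1000
means = q_noisy.reshape(-1, block).mean(axis=1)
truth = signal.reshape(-1, block).mean(axis=1)
print("max error of local means vs. true ramp: %.6f (one ADC step = %.6f)"
      % (np.abs(means - truth).max(), step))
```

The same trick is used deliberately in audio and measurement systems, which is why a slightly noisy 8 bit scan can behave, on average, like a deeper one.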
Let me refine this concept just a bit more. If you scan in 8 bit, then as soon as you start doing digital image processing you should convert the image to 16 bit and keep it at 16 bit all the way through the process, including any intermediate saves and the final save. This is to avoid accumulating roundoff error from the editing steps themselves.
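Here is a rough sketch of why (Python/NumPy again; the particular adjustments are just stand-ins for whatever edits you would actually make): the same 8 bit scan is put through a repeated darken/brighten cycle, once staying in 8 bit with rounding after every step (as an intermediate 8 bit save would force), and once after promoting it to 16 bit first.

```python
import numpy as np

# Simulated 8 bit scan of a smooth tonal ramp.
scan_8bit = np.round(np.linspace(2, 250, 10_000)).astype(np.uint8)

def edit_cycles(img, maxval, cycles=5):
    """Apply a gamma darken/brighten pair repeatedly, rounding to the working
    bit depth after every step (as an intermediate save would)."""
    x = img.astype(np.float64)
    for _ in range(cycles):
        x = np.round(maxval * (x / maxval) ** 1.4)        # darken, "save"
        x = np.round(maxval * (x / maxval) ** (1 / 1.4))  # brighten back, "save"
    return x

# Workflow A: stay in 8 bit the whole time.
out_8 = edit_cycles(scan_8bit, 255)

# Workflow B: promote the same scan to 16 bit first (255 * 257 = 65535),
# do the edits there, then compare on the 8 bit scale at the end.
out_16 = edit_cycles(scan_8bit.astype(np.float64) * 257, 65535) / 257

ref = scan_8bit.astype(np.float64)
print("8 bit workflow:  distinct levels %d -> %d, max drift %.2f counts"
      % (len(np.unique(scan_8bit)), len(np.unique(out_8)),
         np.abs(out_8 - ref).max()))
print("16 bit workflow: distinct levels %d -> %d, max drift %.3f counts"
      % (len(np.unique(scan_8bit)), len(np.unique(np.round(out_16))),
         np.abs(out_16 - ref).max()))
```

The point is not the particular numbers but the pattern: rounding back to 8 bit at every intermediate step eats tonal levels and drifts values, especially in the shadows, while the same edits done at 16 bit come back essentially where they started.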
This analysis of course goes out the window if you have a "grainless" negative, i.e. a negative where the variability in the image density due to grain is less than about one ADC step size. Interestingly, this also depends somewhat on the pixel size of the scanner and on any smoothing that occurs due to resolution limits in the scanner. If the pixel size is very small and there is no blurring of the image due (for example) to optical limitations, then each pixel sees the grain at nearly full strength and you can get away with a smaller word size. If the pixel size of the scanner is large, and/or there are other significant blurring mechanisms in the system, then each pixel averages over many grains, the noise per pixel drops, and you need a larger word size. In the limit of a small enough pixel a one-bit ADC would be good enough, but as a practical matter we would never get to that level.
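The pixel-size dependence can be sketched the same way (again Python/NumPy with made-up numbers): the same grainy ramp is digitized either at a fine pixel pitch, where each sample carries about one ADC step of grain noise, or after the optics and sensor have effectively averaged a block of fine samples into one large pixel before the ADC, which pushes the per-pixel noise well below one step.

```python
import numpy as np

rng = np.random.default_rng(1)

step = 1.0 / 255.0                          # 8 bit ADC step on a 0..1 signal
ramp = np.linspace(0.40, 0.44, 128_000)     # smooth tonal ramp on the negative
grain = rng.normal(0.0, step, ramp.shape)   # grain noise ~ one step at fine pitch
block = 64                                  # fine samples per "large" pixel

def quantize(x):
    return np.round(np.clip(x, 0.0, 1.0) / step) * step

# Small pixels: digitize at fine pitch (noise ~ one step), then average down
# to the final resolution afterwards.
fine_then_avg = quantize(ramp + grain).reshape(-1, block).mean(axis=1)

# Large pixels: the same samples are averaged BEFORE the ADC, so the per-pixel
# noise is roughly step / sqrt(64) = step / 8 when it gets digitized.
avg_then_quant = quantize((ramp + grain).reshape(-1, block).mean(axis=1))

print("distinct output levels, small pixels (digitize, then average): %d"
      % len(np.unique(fine_then_avg)))
print("distinct output levels, large pixels (average, then digitize): %d"
      % len(np.unique(avg_then_quant)))
# The large-pixel path collapses back into a handful of bands, so it is the
# one that benefits from a deeper ADC.
```

In other words, the more the scanner averages before the ADC, the more "grainless" the signal looks to the digitizer and the more the extra bits matter.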
Note: when I say "pixel size" I mean the size of a pixel as projected on the negative, not the actual size of a pixel on the sensor itself, although the two are obviously related through a magnification factor.
I previously dedicated a whole thread to the issue of 8 bit vs. 16 bit scanning, including numerical simulations and some images from another source to demonstrate the concepts. It was, shall I say, not universally accepted here at photrio, but it is nevertheless true.