Thanks, Alan, I'll just continue to scan everything at 16 bits and not worry about the storage or banding. The only issue is for those who own Elements or other similar programs, which for the most part operate only in 8 bits. I use Lightroom for my editing, which handles 16 bit.

The important thing is whether there is noise in the scan. The noise could come from a combination of film grain and other sources. The other sources could include (but are not limited to) sensor noise or even shot noise.
Shot noise comes from the quantized nature of light in combination with the statistics of photon detection, and in theory it can show up at low signal levels. I don't know if shot noise is a factor in scanners, but I would not be surprised if it is. To give you an idea of how the statistics work, the standard deviation is equal to the square root of the number of photons. For example, if 100 photons are detected then the standard deviation is 10 photons. Suppose that the analog to digital step size at low signal levels is equivalent to ten photons. In that case shot noise would be more than enough to produce effective dithering. It would also apply at high signal levels. For example, if a high signal level were equivalent to detecting 10,000 photons the standard deviation would be 100, so if the step size were 10 photons it would be more than enough to produce effective dithering.
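The square-root statistics are easy to check numerically. Here is a minimal Python sketch using NumPy's Poisson generator; the photon counts and the 10-photon ADC step are the illustrative values from the paragraph above, not measurements from any real scanner:

```python
import numpy as np

rng = np.random.default_rng(0)

# Shot noise is Poisson-distributed: std dev = sqrt(mean photon count).
for mean_photons in (100, 10_000):
    counts = rng.poisson(mean_photons, size=200_000)
    print(mean_photons, counts.std())  # std close to sqrt(mean)

# With an ADC step equivalent to 10 photons, the noise at a mean of
# 100 photons spreads samples over several quantization levels,
# which is the "effective dithering" described above.
step = 10
codes = rng.poisson(100, size=200_000) // step
print(np.unique(codes).size, "distinct ADC codes occupied")
```

Because the standard deviation (10 photons at a mean of 100) matches the step size, the samples land on a handful of adjacent codes rather than a single one.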
Sensor noise (as distinct from shot noise) is very likely at low signal levels. This can come from various sources, primarily electronic in nature. For example, there is something called thermal noise, which comes from the random thermal motion of electrons. There is also something called flicker noise. Anyway, a lot of scanner reviews talk about the existence of noise in the shadows. This probably comes mostly from some combination of sensor noise and shot noise.
None of the noise sources mentioned above go away at high signal levels. They become proportionally less important relative to the signal, but for the effective dithering effect the relevant comparison is not to the absolute signal level but to the step size of the analog to digital converter. This means that if effective dithering is taking place at low signal levels it will also be present at high signal levels.
Things get a little more complicated if a non-linear transformation is applied to the signal somewhere in the signal chain. If so, it is possible that the effective dithering might be lost to roundoff error. I am told that when a scanner saves in 8 bit mode there may be a non-linear transformation, and I can't say too much about that possibility. However, I understand that in some signal processing systems, when a high-bit word is converted to fewer bits, the software may add pseudo-random noise in order to make sure that dithering is present. I don't know if this applies to 8 bit scanners.
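That trick of adding pseudo-random noise before reducing bit depth can be sketched in a few lines. This illustrates the general technique only, not what any particular scanner actually does; the 16-bit level 32896 is chosen because it sits exactly halfway between two 8-bit codes:

```python
import numpy as np

rng = np.random.default_rng(1)

# A 16-bit level exactly halfway between 8-bit codes 128 and 129
# (32896 / 256 = 128.5).
x16 = np.full(100_000, 32896.0)

# Plain rounding: every sample collapses to a single 8-bit code,
# and the half-step of information is gone.
plain = np.round(x16 / 256).astype(np.uint8)

# Uniform +/- half-step dither added before rounding: samples split
# between the two neighboring codes, so the in-between level survives
# in the local average.
dither = rng.uniform(-128, 128, x16.size)
dithered = np.round((x16 + dither) / 256).astype(np.uint8)

print(np.unique(plain), np.unique(dithered))
print(dithered.mean())  # close to 128.5, unlike plain.mean() == 128.0
```

The dithered samples average back to the original level, which is exactly what grain noise does for free in a film scan.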
Now to the question of T-Max 100: here is one way you can investigate this experimentally using your own scanner and your own photographs. I'm not talking about your regular photos but rather photos generated specifically for the purpose of testing this. Get some T-Max film. Find a perfectly uniform object to photograph, like a blank wall. It will also help if you defocus the lens to make sure there's no small-scale variability in the image. You might even consider taking the lens off altogether. Take several photos: some at high exposure, some at moderate exposure, and some at low exposure. Develop the film.
Scan the images in 8 bit mode. Open a file. Look at the histogram of a crop of a very small portion near the center of the image. Zoom in on the histogram (along the horizontal axis) and see whether there is just a single spike or several spikes. If there are several spikes then there is significant noise in the image (whether from film grain, sensor noise, or some combination), and that should satisfy the effective dithering requirement. Do this for the blank images that you photographed at the various exposure levels.
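For anyone who wants to see what the histogram check looks like in numbers, here is a simulated version. The mean level of 120 and the 2-count noise figure are made-up illustrative values; a crop from a real scan would replace the synthetic data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated 8-bit scan of a blank frame: mean level 120 with grain
# noise of roughly 2 counts standard deviation.
crop = np.clip(np.round(rng.normal(120, 2, size=(100, 100))),
               0, 255).astype(np.uint8)

# Histogram of the crop: several adjacent occupied bins (spikes) means
# there is enough noise to act as dither; one bin means there is not.
hist = np.bincount(crop.ravel(), minlength=256)
occupied = np.flatnonzero(hist)
print(occupied)
```

With noise like this the histogram occupies a cluster of adjacent values rather than a single spike, which is the "pass" result for the test described above.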
You could also take a photo of an image having a smooth brightness gradient. Scan in 8 bit mode. Copy the image to form a second identical file. Open one of the copies and do some extreme image manipulation until you can see banding. Next open the second copy and convert it to 16 bits. (That conversion is very important. Leaving it in 8 bit mode will nullify the test.) Then do exactly the same extreme image manipulations that you did on the first file. Do you see banding? If not, then the 8 bit scan hasn't hurt anything and you are good to go.
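A quick numeric sketch of why grain noise hides banding in a gradient test like this: without noise, quantizing a shallow gradient to 8 bits produces long flat bands, while about one count of noise breaks them up. The gradient range and noise level here are arbitrary illustrative values:

```python
import numpy as np

rng = np.random.default_rng(3)

# A smooth 4-count brightness gradient, 4000 samples long.
true = np.linspace(100.0, 104.0, 4000)

# Quantize to 8 bits without noise (a staircase) and with ~1 count
# of simulated grain noise.
clean = np.round(true).astype(np.uint8)
noisy = np.round(true + rng.normal(0, 1, true.size)).astype(np.uint8)

def longest_run(a):
    """Length of the longest run of identical adjacent values."""
    edges = np.flatnonzero(np.diff(a.astype(np.int64)))
    bounds = np.concatenate(([-1], edges, [a.size - 1]))
    return int(np.diff(bounds).max())

# The noiseless quantization has bands hundreds of samples wide;
# the noise breaks them into short runs, so a later contrast stretch
# amplifies dithered noise instead of revealing hard band edges.
print(longest_run(clean), longest_run(noisy))
```

The wide flat runs in the noiseless case are exactly the bands that become visible after an extreme contrast manipulation.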
Always remember: if you scan your regular photos in 8 bit mode, be sure to convert them to 16 bit before you do any image manipulation. Keep it in 16 bit mode, preferably forever, but if not forever then at least as long as you are doing any image manipulations on the image.
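The cost of staying in 8 bit mode during editing is easy to demonstrate with a simple two-step edit chain (darken, then brighten back). The 30 % factor is arbitrary; the point is the rounding to 8 bits at each intermediate step:

```python
import numpy as np

# Full 8-bit tonal ramp.
x8 = np.arange(256, dtype=np.uint8)

# Edit chain done entirely in 8 bits: each step rounds back to 8 bits,
# permanently merging tonal levels.
down8 = (x8 * 0.3).astype(np.uint8)
up8 = np.clip(down8 / 0.3, 0, 255).astype(np.uint8)

# Same chain with a 16-bit working copy (x * 257 maps 255 -> 65535):
# the intermediate result has enough headroom to keep every level.
x16 = x8.astype(np.uint16) * 257
down16 = (x16 * 0.3).astype(np.uint16)
up16 = np.clip(down16 / 0.3, 0, 65535).astype(np.uint16)

# The 8-bit chain collapses the 256 input levels to far fewer distinct
# values; the 16-bit chain preserves all of them.
print(np.unique(up8).size, np.unique(up16).size)
```

This is why the conversion to 16 bit should happen before any manipulation: the 8-bit scan itself may be fine, but 8-bit intermediate rounding during editing is not.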
Now for a question: why would anyone even want to scan in 8 bit mode? The first answer is that some systems may only allow 8 bit scanning; Leaf brand scanners used in certain configurations fall into that category. The other reason (not a very strong one these days) is that 8 bit scans save storage space.
Many years ago I read a comment that the dual scan kind of cancels out the grainy look as the two images are combined. I suppose you could do the same thing afterward with "blur" or sharpening edits.

Alan, this is a very interesting response. When I scan black and white negatives, I always use 16 bit, and sometimes do see what looks like noise in the shadow areas. This is worst with thin negatives. My Plustek 7600i scanner allows for multi scan, which is supposed to reduce noise. I am not sure how effective it is. My Minolta Scan Multi scanner lets you choose one, two, four, eight, and 16 multi-passes. I usually use four, but even here I am not sure how effective it is in reducing noise, or if I am really seeing anything different. I use Silverfast 8 to operate both units.
The two films have different spectral sensitivities and somewhat differently shaped characteristic curves. T-Max 100 also has a UV blocking layer, while T-Max 400 does not.

Regarding T-Max 100 vs T-Max 400, I have to assume it really is more grain in the 400 that you can see in the sky. I assume that other than that the film stocks are the same. Actually, I kind of like the grain look in the T-Max 400 scans. I assume it's also more pronounced in smaller formats than the 4x5 sample I posted.
Alanrockwood, you have done a lot of work to scan at 8 bit and then convert to 16 bit before you do image adjustments. I assume you mean contrast adjustments and similar. Why not just scan at 16 bit and maintain that for the entire process? Also, you simplify bookkeeping and file management.

For an ultrafine-grained image using, for example, microfilm stock developed in a pictorial mode, it might be possible for the grain structure to be so fine that banding could occur in an 8 bit scan. Slide film might also be a different story in some cases, though I couldn't say one way or the other. However, for normal black and white negative film stocks I'm pretty sure that 8 bit scanning is sufficient. However, be sure to convert to 16 bit before doing any image manipulation.
I do scan at 16 bit on 35mm, medium format, and 4x5. My color scan files for 4x5 are around 600 MB. But I was wondering: if someone did scan at 8 bits, could they get banding if they cropped the image? Is it more of an issue with medium format? 35mm?

Alan, you have done a lot of work to scan at 8 bit and then convert to 16 bit before you do image adjustments. I assume you mean contrast adjustments and similar. Why not just scan at 16 bit and maintain that for the entire process? Also, you simplify bookkeeping and file management.
As far as the possibility of banding is concerned, it doesn't really matter if you crop, as long as there is enough grain (or other noise) in the scanned image. The grain (or other noise) supplies a dithering effect that eliminates the possibility of banding.

What if you crop?
Yes, I mean contrast adjustment, or whatever manipulation might be applied to the image that would result in banding, though I think it all basically boils down to contrast manipulation.

Alanrockwood, you have done a lot of work to scan at 8 bit and then convert to 16 bit before you do image adjustments. I assume you mean contrast adjustments and similar. Why not just scan at 16 bit and maintain that for the entire process? Also, you simplify bookkeeping and file management.
Hi:
I've had my Bronica GS-1 for just shy of a year and am finally getting to work on scanning the 12 or so B&W rolls I've developed. My computer was purchased as a Win10 machine but I've added Linux to a separate partition so I'm running both (at different times of course). My father, before he passed away, gifted me his Epson V600 that I'm running on Win10 (because Linux film scanning isn't that great at the moment). I tried Epson Scan software and it worked but something made me want to try something different. VueScan was better and then I tried the free Silverfast 8 and it is great, I'll probably buy version 9 SE or SE Plus.
However, in version 8, the grey scale option is shown as 16->8bit. I'm quite happy with the scans but I'd really like to scan to 16bit GS, not 8bit.
Does anyone know if version 9 scans to 16bit?
Thanks
8 bit scans will require only half of the storage space. That's the main advantage I am aware of. Actually, some scanner models only work in 8 bit mode with certain acquisition systems. I think this is the case for Leaf scanners when using Silverfast software, but I'm not sure. In that case there would be an advantage to 8 bit mode because 16 bit mode isn't even available.

Besides banding, are there other advantages or disadvantages with 8 vs 16 bit?
8 bit scans will require only half of the storage space.
Very interesting.

That's not strictly true. Eight bit images will contain less data, true, but TIFF supports compression. You would think 8 bits would compress much better, as there are only 256 values being repeated, but if you split each 16-bit value in half, you've still only got a total of 256 byte values, just twice as many of them -- and the more repetitions you have, the better the compression is.
In short, there is a benefit to having only 8 bits per pixel, but not as much as you might expect.
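The repetition argument can be illustrated with zlib (the compressor behind the Deflate option in many TIFF writers). Note the caveat: this promotes synthetic 8-bit noise to 16 bits by multiplying by 257, as one would after an 8-bit scan; a genuine 16-bit scan would carry extra noise bits and compress less well, so treat this only as a sketch of the counting argument:

```python
import zlib

import numpy as np

rng = np.random.default_rng(5)

# A noisy 8-bit "scan" (values clustered over ~10 levels) and the same
# data promoted to 16 bits (x * 257 maps 255 -> 65535).
data8 = rng.integers(100, 110, size=500_000, dtype=np.uint8)
data16 = data8.astype(np.uint16) * 257

c8 = len(zlib.compress(data8.tobytes(), 6))
c16 = len(zlib.compress(data16.tobytes(), 6))

# The 16-bit file is twice the raw size but carries the same
# information, so it compresses to less than twice the 8-bit size.
print(c8, c16, round(c16 / c8, 2))
```

So compressed 16-bit files cost less than the raw 2x factor suggests, which is the point being made above.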
Don't 16 bits also give better color information?
If you use multi scans it is possible, under certain conditions, to negate the ability of noise to suppress banding.

Alan, this is a very interesting response. When I scan black and white negatives, I always use 16 bit, and sometimes do see what looks like noise in the shadow areas. This is worst with thin negatives. My Plustek 7600i scanner allows for multi scan, which is supposed to reduce noise. I am not sure how effective it is. My Minolta Scan Multi scanner lets you choose one, two, four, eight, and 16 multi-passes. I usually use four, but even here I am not sure how effective it is in reducing noise, or if I am really seeing anything different. I use Silverfast 8 to operate both units.
Nothing wrong with doing that, except for the extra storage space it requires.

There seem to be so many "except fors" that it would seem to be better to just scan 16 bits and be done with it.
Can you explain what you mean, preferably with links to examples to look at?

65,536 discrete values vs 256 per pixel. alanrockwood's analysis isn't wrong, but resolution also plays a factor. 60 DPI vs 300 DPI will produce very different appearing images.