alanrockwood
Can we discuss bit depth for scanning?
Here is my understanding. For noiseless continuous-tone images, more bits are better, up to a point. As a rule of thumb, 8 bits usually works OK if there is no image manipulation, but if there is much image manipulation then greater bit depth is worthwhile because it avoids posterization.
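To make that concrete, here is a quick numerical sketch (the ramp, the contrast curve, and the bit depths are arbitrary choices of mine, not measurements): quantize a smooth gradient at 8 and 12 bits, push both through an aggressive curve, and count how many distinct mid-tones survive.

```python
import numpy as np

# Illustrative only: a smooth ramp quantized to 8 vs 12 bits, then pushed
# through a steep contrast curve as a stand-in for heavy editing.
ramp = np.linspace(0.0, 1.0, 100_000)          # ideal continuous-tone gradient

def quantize(signal, bits):
    levels = 2 ** bits - 1
    return np.round(signal * levels) / levels   # uniform quantization to 'bits'

def strong_curve(x):
    return np.clip((x - 0.4) * 4.0, 0.0, 1.0)   # stretches a narrow tonal range

for bits in (8, 12):
    edited = strong_curve(quantize(ramp, bits))
    # Fewer distinct output levels in the stretched region means visible banding.
    distinct = len(np.unique(edited[(edited > 0) & (edited < 1)]))
    print(bits, "bits ->", distinct, "distinct mid-tones after the curve")
```

The 8-bit version ends up with only a few dozen distinct mid-tones after the stretch, while the 12-bit version keeps roughly a thousand, which is the posterization argument in miniature.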
However, film does not approximate a noiseless image, because the image has graininess. Here is my hypothesis: if the digitization step size is much smaller than the noise (as measured by the standard deviation of the grain fluctuations), then having a smaller step size (i.e. more bits crammed into the same difference between maximum and minimum signal) doesn't add any significant information. In this case, a larger bit depth (e.g. 12 bits) isn't really any better than a smaller bit depth (e.g. 8 bits).
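Here is a sketch of that hypothesis (the noise level and the "true" density are made-up numbers): digitize a noisy uniform patch at 8 and 12 bits and compare the total RMS error against the noiseless value. When the noise is a few times larger than the 8-bit step, the two bit depths come out essentially the same.

```python
import numpy as np

# Illustrative only: grain noise a few times larger than the 8-bit step (1/255).
rng = np.random.default_rng(0)
true_level = 0.3712                      # "true" local density of a uniform patch
sigma = 0.01                             # grain noise, ~2.5x the 8-bit step

def quantize(signal, bits):
    levels = 2 ** bits - 1
    return np.round(np.clip(signal, 0, 1) * levels) / levels

samples = true_level + rng.normal(0.0, sigma, size=1_000_000)
for bits in (8, 12):
    q = quantize(samples, bits)
    rms_error = np.sqrt(np.mean((q - true_level) ** 2))
    print(bits, "bits: RMS error vs true level =", round(rms_error, 5))

# Both RMS errors come out very close to sigma itself: the grain, not the
# quantization step, dominates the total error once the step is well below the noise.
```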
The pixel size makes a difference. If the pixel size is big, then you need more bits to avoid artifacts such as posterization. This is because a large pixel gives you some signal-averaging effect, which decreases the noise and therefore requires finer gradations to keep the step size under the standard deviation. If the pixel size is small, then you can get away with fewer bits, because the standard deviation within the pixel is greater. (When I say "standard deviation" I mean the randomness in a small pixel sampled from a larger image region of nominally constant image intensity.) If the pixel size were small enough you might get away with one or two bits, especially for black and white film, though such a small pixel size is not practical.
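And a sketch of the pixel-size effect (again with arbitrary numbers): average blocks of noisy micro-pixels into big pixels and compare the resulting standard deviation to the 8-bit step size.

```python
import numpy as np

# Illustrative only: averaging 64 noisy micro-pixels into one big pixel shrinks
# the standard deviation by sqrt(64) = 8, so the big pixel's noise can fall
# below the 8-bit step even though the micro-pixel noise was well above it.
rng = np.random.default_rng(1)
sigma_micro = 0.02                       # grain randomness of a tiny pixel
patch = 0.5 + rng.normal(0.0, sigma_micro, size=(1_000, 64))  # 1000 big pixels x 64 micro-pixels

big_pixels = patch.mean(axis=1)          # each big pixel averages 64 micro-pixels
print("micro-pixel std :", round(float(patch.std()), 5))       # ~0.02
print("big-pixel std   :", round(float(big_pixels.std()), 5))  # ~0.02 / 8 = 0.0025
print("8-bit step size :", round(1 / 255, 5))                  # ~0.0039
```

In this toy case the big-pixel noise drops below the 8-bit step, which is exactly the regime where more bits start to matter; the micro-pixel noise sits well above it, which is the regime where fewer bits would do.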
What do you think?