Scanner bit depth


alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,184
Format
Multi Format
Can we discuss bit depth for scanning?

Here is my understanding. For noiseless continuous tone images the more bits the better, up to a point. As a rule of thumb, 8 bits usually works OK if there is no image manipulation, but if there is much image manipulation then greater bit depth is worthwhile because it can avoid posterization.

However, film does not approximate a noiseless image, because the image has graininess. Here is my hypothesis: if the digitization step size is already much smaller than the noise (as measured by the standard deviation), then making the step size smaller still (i.e. cramming more bits into the same difference between maximum and minimum signal) doesn't add any significant information. In this case a larger bit depth (e.g. 12 bits) isn't really any better than a lesser bit depth (e.g. 8 bits).

The pixel size makes a difference. If the pixel size is big, then you need more bits to avoid artifacts such as posterization. This is because a large pixel gives some signal-averaging effect, which decreases the noise and requires finer bit gradation to get the step size under the standard deviation. If the pixel size is small, then you can get away with fewer bits, because the standard deviation of the pixel is greater. (When I say "standard deviation" I mean the randomness in a small pixel sampled from a larger image region of nominally constant image intensity.) If the pixel size were small enough you might get away with one or two bits, especially for black and white film, though such a small pixel size is not practical.
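To put rough numbers on the averaging effect, here is a minimal numpy sketch (the noise level and pixel counts are made up purely for illustration) showing how averaging n pixels shrinks the standard deviation by about the square root of n, eventually pushing it below a one-level digitizer step:

```python
import numpy as np

# Hypothetical numbers for illustration: an evenly exposed area whose
# per-pixel grain noise has a standard deviation of 2 digitizer steps.
rng = np.random.default_rng(0)
sigma = 2.0                        # per-pixel noise, in 8-bit levels
signal = 100.0 + rng.normal(0.0, sigma, size=1_000_000)

for n in (1, 4, 16, 64):           # number of pixels averaged into one
    avg = signal[: signal.size // n * n].reshape(-1, n).mean(axis=1)
    flag = ">" if avg.std() > 1.0 else "<"
    print(f"averaging {n:3d} px: sigma = {avg.std():.2f} levels ({flag} 1-step)")
```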

What do you think?
 

Doyle Thomas

Member
Joined
Oct 28, 2006
Messages
276
Location
VANCOUVER, W
Format
8x10 Format
I think you are overthinking. Bit depth represents the color of a single pixel; it is the difference in pixel color that contains the image, including the noise.
 
OP

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,184
Format
Multi Format
If you make the pixels small enough they could be represented by a black or white dot - it's called dithering. See https://en.wikipedia.org/wiki/Dither

I believe that the graininess of film has an effect very much like dithering, and therefore, high bit depth when scanning is probably not necessary. However, I am prepared to be proven wrong.

Elaborating a bit (bad play on words): if an original scene is perfectly smooth, like a cloudless blue sky, then when it gets recorded on film the effect is like dithering, because of the graininess of the film. At that point there is little point in scanning with a high bit depth. However, I am not sure how this would apply to the extremes of the image (deep shadows or extreme highlights). If an extreme highlight or shadow is recorded as an essentially grainless image on the film, then the argument might not apply.
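For anyone who wants to see the dithering effect numerically, here is a small sketch (a made-up smooth gradient, with grain modeled as uniform noise of one quantization step) showing that even a 1-bit quantizer recovers the gradient after local averaging, but only when the "grain" is present:

```python
import numpy as np

rng = np.random.default_rng(1)
ramp = np.linspace(0.3, 0.7, 10_000)   # smooth "blue sky" gradient, 0..1

def quantize(x, bits):
    levels = 2 ** bits - 1
    return np.round(np.clip(x, 0.0, 1.0) * levels) / levels

hard = quantize(ramp, 1)                                        # posterizes to an edge
dither = quantize(ramp + rng.uniform(-0.5, 0.5, ramp.size), 1)  # grain-like dither

box = np.ones(500) / 500               # local averaging ~ viewing from a distance
for name, img in (("hard", hard), ("dithered", dither)):
    smooth = np.convolve(img, box, mode="valid")
    err = np.abs(smooth - ramp[249:-250]).mean()
    print(f"{name:8s} 1-bit, smoothed: mean error = {err:.3f}")
```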
 

Prof_Pixel

Member
Joined
Feb 17, 2012
Messages
1,917
Location
Penfield, NY
Format
35mm
8 bits per pixel is generally enough in the final image. Where you need to scan at a higher bit depth is when you need to do image manipulations (like curve adjustment).
 

Doyle Thomas

Member
Joined
Oct 28, 2006
Messages
276
Location
VANCOUVER, W
Format
8x10 Format
I have yet to scan an image that did not need a lot of editing to bring back the DR using black and white points, color, contrast etc.

I don't know what scanner you're using, but most offer some form of black and white points, color, contrast etc. edits to get you in the ballpark or better. It is possible that your scanner's DR is not wide enough; you could make a scan for the highlights, another for the shadows, and blend.
 
OP
OP

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,184
Format
Multi Format
Maybe this will help explain what I mean. Assume that 100% transmission through the film gives a detector signal of 255 mV.

Now assume a camera took a photo of a perfectly featureless object, such as a small patch of blue sky on a cloudless day.

Now assume that the average transmission on the developed film gave an average detector signal of 16 mV, which corresponds to a transmission of approximately 6.25%.

Now assume that because of the graininess of the film, the pixel-to-pixel standard deviation at the detector of the scanner is 2 mV, which is 12.5% of the average signal; in other words, the graininess is relatively mild.

Now assume that the sensor of the scanner is connected to two digitizers. One is an 8-bit digitizer, so a 1-bit step on the digitizer corresponds to 1 mV; at full scale this would be 255 mV. The other is a virtually infinite-resolution digitizer, but it is scaled so that at full scale it also reports a value of 255, i.e. 100% transmission generates an ADC value of 255. (This implies that the ADC is capable of reporting sub-mV gradations.)

Here is a plot of the output of the two digitizers for 128 consecutive pixels.
[Plot Doc1_01.jpg: 8-bit vs. virtually infinite-resolution digitizer output for 128 consecutive pixels]
As you can see, there is no noticeable difference between the two cases. In other words, for all practical purposes, the 8-bit image is just as good as the one from the infinite-resolution digitizer, where "resolution" means amplitude resolution, not spatial resolution.

If we smooth the results a little, to simulate the fact that at normal viewing distance one would not normally see individual pixels, the curves look like this.
[Plot Doc2_01.jpg: the same two digitizer outputs after smoothing]
As you can see, there is even less difference between the two cases. This assumes that after digitization we have converted the files to a higher bit format (say 16 bits) before doing the smoothing, and we have preserved the relationship between relative signal and mV at the detector.

This convinces me that as long as the step size of the digitizer is somewhat less than the noise of the image, going to a smaller step size (i.e. a higher-bit digitizer) does not gain you anything significant. This assumes that you do a conversion to higher-bit images before doing any post-acquisition image manipulation.

The noise can come from graininess of the film, electronic noise, or any other combination of noise sources.
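For those who want to reproduce this, the following sketch uses the same numbers as above (255 mV full scale, 16 mV mean signal, 2 mV grain noise, 1 mV per 8-bit step); the 5-pixel box filter standing in for viewing distance is my own arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(42)

# Numbers from the post: 255 mV full scale, 16 mV mean signal,
# 2 mV pixel-to-pixel grain noise, 8-bit step = 1 mV.
analog = 16.0 + rng.normal(0.0, 2.0, 128)          # "infinite resolution" signal, mV
eight_bit = np.round(np.clip(analog, 0.0, 255.0))  # 8-bit digitizer output, mV

rms = lambda d: float(np.sqrt(np.mean(d ** 2)))
print("RMS quantization difference:", round(rms(eight_bit - analog), 3), "mV")

# Mild smoothing (5-pixel box) to mimic normal viewing distance, applied
# after moving to a higher-precision representation (float stands in for 16-bit):
box = np.ones(5) / 5
diff = np.convolve(eight_bit, box, "valid") - np.convolve(analog, box, "valid")
print("RMS difference after smoothing:", round(rms(diff), 3), "mV")
print("grain noise for comparison:", round(float(analog.std()), 3), "mV")
```

The quantization error (about 0.3 mV RMS) is far below the 2 mV grain noise, which is the whole point.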
 

Alan Klein

Member
Joined
Dec 12, 2010
Messages
1,067
Location
New Jersey
Format
Multi Format
I don't know what scanner you're using, but most offer some form of black and white points, color, contrast etc. edits to get you in the ballpark or better. It is possible that your scanner's DR is not wide enough; you could make a scan for the highlights, another for the shadows, and blend.


Doyle, I scan flat (no auto or manual settings) and do all editing in post, to avoid having to rescan if the edit results are not acceptable. By scanning flat, I do it once. One scan captures the complete DR, but the scan compresses the range. For example, I might see a range of 10-150 on the histogram of the scan results. Levels (black and white point adjustments) restores the scan to its proper range. Sometimes I will close in the black and white points before the scan to a position that won't clip at either end, then make final adjustments in post. People have said I'll get more data that way, but I'm not convinced it makes a difference compared with scanning flat. I'm using an Epson V600 flatbed.
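As a side note on the 10-150 example: a quick sketch (assuming a pure levels stretch with black point 10 and white point 150) shows why the stretched 8-bit histogram ends up with gaps:

```python
import numpy as np

# A flat scan whose histogram spans only 10..150: 141 distinct 8-bit levels.
flat = np.arange(10, 151, dtype=float)

# Levels adjustment (black point 10, white point 150) performed in 8-bit:
stretched = np.round((flat - 10.0) / 140.0 * 255.0).astype(int)
print("distinct levels after stretch:", np.unique(stretched).size, "of 256")
# The 141 input levels can only land on 141 output values; the other
# 115 slots stay empty, which is the comb/gap pattern in the histogram.
```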
 

hsandler

Subscriber
Joined
Oct 2, 2010
Messages
471
Location
Ottawa, Canada
Format
Multi Format
About the grain being like noise, I think you are correct. Once the bit increment becomes less than the variation in a constant tone due to the grain, it does not add any information to have smaller step size.

On the hypothesis that larger pixels require more bit depth because there is less noise, I think there are two comparisons: two scanner technologies, or one scanner at different resolutions. For the former comparison, take, say, a flatbed scanner vs. a higher-resolution drum scanner. The pixels on the flatbed really are averaging the tones of a larger sample of the film, but there is also less precision or repeatability in the sensor from sample to sample of an even-toned area, i.e. more sensor noise than with the photomultiplier-tube sensor of a drum scanner. So you may not require high bit depth on the flatbed, because the electronic noise may dwarf the grain noise once the grain noise gets averaged out.

On the second type of comparison, I think when you set a flatbed scanner to a lower resolution (fewer pixels per inch), it is not averaging over bigger sample areas of the film; it is just taking a sample of as small an area of film as it can resolve optically, and then stepping over a larger distance to take the next sample. In this case the noise is not being averaged compared to a higher-res scan from the same scanner, so the bit depth requirement does not change.
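A back-of-the-envelope sketch of the first point (the grain and sensor noise figures are invented, in units of one 8-bit step): independent noise sources add in quadrature, so the sensor noise sets a floor that averaging cannot remove:

```python
import math

# Hypothetical numbers, in units of one 8-bit step, purely for illustration:
sigma_grain = 2.0    # grain noise seen by a single fine sample of the film
sigma_sensor = 1.2   # electronic noise of the (hypothetical) flatbed sensor

for n in (1, 16, 64):   # how many fine grain samples one big pixel averages
    total = math.sqrt(sigma_grain ** 2 / n + sigma_sensor ** 2)
    print(f"n = {n:2d}: grain term {sigma_grain / math.sqrt(n):.2f}, "
          f"total noise {total:.2f} steps")
# The grain term shrinks with averaging, but the sensor noise sets a floor,
# so the total can stay above one digitizer step anyway.
```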
 
Joined
Jul 13, 2006
Messages
266
Location
Europe
Format
Multi Format
@alanrockwood:

Too much theoretical stuff. Just scan a negative @16/48 bit, and then @8/24 bit. Open the 8-bit image in an editor. Perform any modification, e.g. curves, and watch the histogram: it looks like a comb, with missing color information. Now open the 16-bit image and perform the same adjustment: the histogram remains smooth.

Then print both variants at high resolution. The 8-bit image will show banding, while the 16-bit image will not.
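A quick way to simulate this test (using a synthetic noiseless gradient, with a gamma-0.8 curve standing in for an arbitrary curves adjustment) is:

```python
import numpy as np

grad = np.linspace(0.0, 1.0, 1_000_000)   # smooth, noiseless gradient

def edit(x):                               # a mild curves adjustment (gamma 0.8)
    return x ** 0.8

for bits in (8, 16):
    levels = 2 ** bits - 1
    scanned = np.round(grad * levels) / levels          # scan at this bit depth
    out = np.round(edit(scanned) * 255).astype(int)     # edit, display as 8-bit
    empty = int(np.sum(np.bincount(out, minlength=256) == 0))
    print(f"{bits:2d}-bit scan: {empty} empty histogram bins out of 256")
```

The 8-bit scan leaves empty bins (the comb) wherever the curve stretches the tones; the 16-bit scan fills all 256 bins.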

BTW, I digitize my LF prints @16/48 bit and 4,000 ppi and have them printed in large formats ranging from 120x80 cm to 3x2 m (about 4 x 2.6 feet to 10 x 6.5 feet). There is hardly any visible grain.

This is practical, real-world stuff, not pixel-peeping on a display at whatever zoom percentage.
 
OP

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,184
Format
Multi Format
@alanrockwood:

Too much theoretical stuff. Just scan a negative @16/48 bit, and then @8/24 bit. Open the 8-bit image in an editor. Perform any modification, e.g. curves, and watch the histogram: it looks like a comb, with missing color information. Now open the 16-bit image and perform the same adjustment: the histogram remains smooth.

Then print both variants at high resolution. The 8-bit image will show banding, while the 16-bit image will not.

Given the workflow you described, the result you describe is reasonable. However, if you do as I said in an earlier post:

"...This assumes that you do a conversion to higher bit images before doing any post-acquisition image manipulation..."

you will not see banding, provided the noise in the acquired image (grain, sensor noise, etc.) is greater than the digitization step size.
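Here is a rough simulation of that claim (a 5x levels stretch stands in for the image manipulation, and a box average stands in for viewing distance; all numbers are illustrative). With noiseless input the 8-bit scan leaves a systematic staircase, i.e. banding; with grain noise of two steps the residual error after local averaging is much smaller, and random rather than periodic:

```python
import numpy as np

rng = np.random.default_rng(7)
grad = np.linspace(0.2, 0.4, 200_000)        # smooth gradient in a narrow range

def stretch(x):                               # levels-style 5x contrast stretch
    return np.clip((x - 0.2) / 0.2, 0.0, 1.0)

ideal = stretch(grad)                         # edit applied without quantization

def edited_scan(noise_sigma):
    noisy = np.clip(grad + rng.normal(0.0, noise_sigma, grad.size), 0.0, 1.0)
    scan8 = np.round(noisy * 255) / 255       # 8-bit scan
    return stretch(scan8)                     # edit in float (stands in for 16-bit)

box = np.ones(1001) / 1001                    # local average ~ what the eye blends
for name, sigma in (("noiseless", 0.0), ("grainy", 2.0 / 255)):
    err = np.convolve(edited_scan(sigma) - ideal, box, mode="valid")
    print(f"{name:9s} 8-bit scan: mean |error| = "
          f"{np.abs(err).mean() * 255:.2f} output levels")
```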
 
Joined
Jul 13, 2006
Messages
266
Location
Europe
Format
Multi Format
You mean you scan @ 8bit, then convert to 16bit?

Where does the extra information come from? The universe, the air, your coffee machine?

Look, if you scan an ebony-and-ivory colored checkerboard @1 bit (pure b&w) and convert it to 8 bit, what do you get? Correct: only black and white, nothing in between. Apply the same logic, and the same result will happen with an 8-bit file converted to 16-bit.

I've written more about this topic here (scroll down, the English part is at the bottom): https://toyotadesigner.wordpress.com/category/photography/color-farbe/
 

hsandler

Subscriber
Joined
Oct 2, 2010
Messages
471
Location
Ottawa, Canada
Format
Multi Format
Jens, there may be value in converting from 8 to 16 bits after scanning. The extra bits will initially be all zeros (e.g. a level of 128 becomes 128.000), but as soon as you start doing a lot of image manipulation, the extra bits reduce the accumulation of successive rounding errors, which would otherwise add up to more than one level. For example, if you apply a curve to the image, some shadows/highlights adjustments, and then maybe downsample to a lower resolution, the comb-like histogram may tend to become smooth and filled in when doing all this in a 16-bit space, compared to doing the same thing in an 8-bit space.

The test of this would be to take an image scanned at 8 bits per colour which displays no banding when printed as-is. Then, keeping Photoshop in 8-bit mode, do a lot of manipulation steps. Do the same manipulations to another copy, but with Photoshop in 16-bit mode. Print the two versions and see if the 16-bit version looks different. A further experiment would be to re-scan at 16 bits, do the manipulations at 16 bits, and compare to the version scanned at 8 bits but manipulated at 16 bits. There may or may not be a difference; it depends on whether the initial 8-bit scan was a source of artifacts or not.
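A miniature version of this test (three arbitrary, made-up adjustments chained together) shows the drift that per-step 8-bit rounding causes relative to keeping full precision until the end:

```python
import numpy as np

x = np.arange(256, dtype=np.float64) / 255    # every 8-bit level once

ops = [lambda v: v ** 0.9,                    # a few successive edits
       lambda v: np.clip(v * 1.1 - 0.02, 0.0, 1.0),
       lambda v: v ** 1.15]

low = x.copy()
for op in ops:                                # 8-bit mode: round after every step
    low = np.round(op(low) * 255) / 255

high = x.copy()
for op in ops:                                # 16-bit-ish mode: keep precision
    high = op(high)
high = np.round(high * 255) / 255             # single rounding at the end

drift = np.abs(low - high) * 255
print(f"max drift {drift.max():.1f} levels, mean {drift.mean():.2f} levels")
```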

For an image that already looks better when scanned at 16 bits than at 8 bits, i.e. you can see banding even before doing any image manipulation, it's clear there is value in scanning at the higher bit depth. No one could argue with that.
 