Silverfast 8: 16bit grey scale?


alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,185
Format
Multi Format
It's generally not a great idea to do dynamic range or noise analysis on image data that has been conformed to a color space, as you have to contend with the gamma encoding and the raw-to-colorspace transform and the effect they have on the resulting image samples.

So...

http://m.avcdn.com/sfl/epson_v850_raw_black_frames.zip

This contains two files: raw0001.tif and raw0002.tif. Both are a ~36x24mm black frame at 6400 dpi from the center of a completely blocked-off capture area (except the scanner calibration area). Vuescan was used to capture and output the raw data with every control it exposes (analog gain, etc.) zeroed out and locked so they are exactly the same between scans; the only difference is that raw0001.tif was captured in 16 bit mode and raw0002.tif was captured in 8 bit mode immediately afterward.

Anybody can download these raw files and do whatever analysis they want on them.

I've done a very basic and preliminary look using some tooling that I use for my own internal software. The 8 bit samples were scaled up to 16 bit samples to make the comparison a little easier (using sample/255, then result * 65535, all in float to preserve precision).
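
In case it helps, that scaling step is equivalent to something like this (a minimal sketch using numpy; the function name is just for illustration, not my actual tooling):

import numpy as np

def scale_8_to_16(samples_8bit: np.ndarray) -> np.ndarray:
    # Normalize to [0, 1] in float to preserve precision, then rescale
    # to the 16 bit range and round only once at the end.
    normalized = samples_8bit.astype(np.float64) / 255.0
    return np.round(normalized * 65535.0).astype(np.uint16)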

The per color channel numbers:

raw0001.tif (captured in 16 bits, saved in 16 bits)
resolution: 9064x6046 px
bits per sample: 16 bits
analyze window_size: +-1024, x1: 3508, x2: 5556, y1: 1999, y2: 4047
max_r:1817, max_g:1911, max_b:1714
min_r:0, min_g:0, min_b:0
avg_r:334, avg_g:350, avg_b:313

raw0002.tif (captured in 8 bits, saved to 8 bits, scaled to 16 bits for analysis)
resolution: 9064x6046 px
bits per sample: 16 bits
analyze window_size: +-1024, x1: 3508, x2: 5556, y1: 1999, y2: 4047
max_r:59904, max_g:53248, max_b:32639
min_r:0, min_g:0, min_b:0
avg_r:1699, avg_g:1666, avg_b:1644

Just for edification, the same values, but scaled to 8 bits

raw0001.tif
resolution: 9064x6046 px
bits per sample: 16 bits (scaled to 8 bits)
analyze window_size: +-1024, x1: 3508, x2: 5556, y1: 1999, y2: 4047
max_r:7, max_g:7, max_b:6
min_r:0, min_g:0, min_b:0
avg_r:1, avg_g:1, avg_b:1

raw0002.tif
resolution: 9064x6046 px
bits per sample: 16 bits (scaled to 8 bits)
analyze window_size: +-1024, x1: 3508, x2: 5556, y1: 1999, y2: 4047
max_r:233, max_g:207, max_b:127
min_r:0, min_g:0, min_b:0
avg_r:6, avg_g:6, avg_b:6
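
For anyone who wants to run the same numbers themselves, the analysis is just per-channel min/max/mean over a centered window. A rough sketch, assuming numpy and tifffile and that the raw files load as plain RGB arrays (my internal tooling is different):

import numpy as np
import tifffile

img = tifffile.imread('raw0001.tif')        # expected shape: (6046, 9064, 3)
x1, x2, y1, y2 = 3508, 5556, 1999, 4047     # +-1024 px around the center
window = img[y1:y2, x1:x2, :].astype(np.float64)
for i, name in enumerate('rgb'):
    ch = window[..., i]
    print(f'max_{name}:{ch.max():.0f}, min_{name}:{ch.min():.0f}, avg_{name}:{ch.mean():.0f}')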

Make of it what you will, though, again, I think it's pretty obvious that 16 bit capture has a significantly lower noise floor and therefore more dynamic range than 8 bit capture. Whether that makes it through the raw-to-colorspace operations and gamma encoding is harder to tell, as the analysis @alanrockwood has done so far, though sound from an analysis perspective, is being done on data that has already been conformed to a color space. And whether that lower noise floor matters depends on what you're capturing, so 8 bit capture may well be adequate for lower-contrast materials.

Either way, the raw files are available for anybody to do their own analysis and draw their own conclusions.
I will look your results over carefully. (By the way, one difference between what you have here and what I did is that all of the scans I did were in gray scale, not color.)

One thing to consider is that according to the documentation I have read, vuescan has the option to save in raw mode, which is linear, but for 8 bit files it always saves in gamma encoded mode. (reference: https://www.hamrick.com/vuescan/html/vuesc24.htm) Why, I don't know, because the gamma transformation process always damages the quality of the data from a data integrity point of view.

Also, I contacted Hamrick a week or two ago and asked him if vuescan dithers before saving in 8 bit mode, and the answer was "no", just in case anyone is wondering about that.

Also, it's good that you did intermediate calculations in floating point mode because doing them in fixed point arithmetic can damage the quality of the data when transformations are done on the data, as you noted.
 

Adrian Bacon

Subscriber
Joined
Oct 18, 2016
Messages
2,086
Location
Petaluma, CA.
Format
Multi Format
I will look your results over carefully. (By the way, one difference between what you have here and what I did is that all of the scans I did were in gray scale, not color.)

One thing to consider is that according to the documentation I have read, vuescan has the option to save in raw mode, which is linear, but for 8 bit files it always saves in gamma encoded mode. (reference: https://www.hamrick.com/vuescan/html/vuesc24.htm) Why, I don't know, because the gamma transformation process always damages the quality of the data from a data integrity point of view.

Also, I contacted Hamrick a week or two ago and asked him if vuescan dithers before saving in 8 bit mode, and the answer was "no", just in case anyone is wondering about that.

Also, it's good that you did intermediate calculations in floating point mode because doing them in fixed point arithmetic can damage the quality of the data when transformations are done on the data, as you noted.

Clipped from the above linked URL:

"Note that the raw scan files are stored in linear format when using more than 8 bits per sample, and stored in gamma 2.2 format when using only 8 bits per sample. The saved TIFF files are always gamma corrected according to the Color | Output color space used (1.8 for Apple RGB, ColorMatch RGB, ProPhoto RGB and ECI RGB and 2.2 for all other color spaces). Note that the raw scan files stored in linear format will look dark when viewed - this is normal."

It's not clear what it does if acquiring and saving at raw 8 bits. Totally understandable if acquiring at 16 bits and saving at 8 bits, but it might be worth it to double check with Hamrick. I had my output colorspace set to ProPhoto, so it'd be gamma 1.8, which means I'd expect the minimum value to be:

1/255 = 0.003921568627451, POWER(0.003921568627451, 1/1.8) = 0.046029179469295, 0.046029179469295 * 255 = 11 or 12 depending on rounding, which is higher than even the average, so I'm not sure what it's doing without a direct confirmation from Hamrick himself for acquiring 8 bit raw and saving 8 bit raw.
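
The same arithmetic as a quick Python check, for anyone who wants to verify it:

min_linear = 1 / 255                  # 0.003921568627451
encoded = min_linear ** (1 / 1.8)     # 0.046029179469295
print(round(encoded * 255))           # 12 (11 if truncated instead of rounded)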

EDIT: Yes, all my tooling internally is 80 bit floating point. Probably overkill, but it's available, so maximum precision it is.
 
Last edited:

Adrian Bacon

Subscriber
Joined
Oct 18, 2016
Messages
2,086
Location
Petaluma, CA.
Format
Multi Format
A parting question: Why are you even scanning your film to begin with?

Because we can. And because we can, we do.

All snark aside, darkrooms are few and far between, so the only way the vast majority of people can actually see their film is by scanning it in.
 

PhilBurton

Subscriber
Joined
Oct 20, 2018
Messages
467
Location
Western USA
Format
35mm
Because we can. And because we can, we do.

All snark aside, darkrooms are few and far between, so the only way the vast majority of people can actually see their film is by scanning it in.

And further to Adrian's point, even if I wanted to set up a chemical darkroom again, She Who Must Be Obeyed would first need a lot of convincing. Much easier to just scan (at 16 bits of course!) and then to make prints on a really nice photo inkjet printer. The same one I will need for digital native prints.

That all said, I can't wait until I'm at the point where I can start shooting B&W again. I still have my Bulk Loader and Nikon F2 bodies.
 
Joined
Aug 29, 2017
Messages
9,444
Location
New Jersey formerly NYC
Format
Multi Format
And further to Adrian's point, even if I wanted to set up a chemical darkroom again, She Who Must Be Obeyed would first need a lot of convincing. Much easier to just scan (at 16 bits of course!) and then to make prints on a really nice photo inkjet printer. The same one I will need for digital native prints.

That all said, I can't wait until I'm at the point where I can start shooting B&W again. I still have my Bulk Loader and Nikon F2 bodies.
Never mind the darkroom. Since we moved, the boss doesn't allow me to mount pictures on walls all over the house the way I used to. So I hardly print anymore. I either post to Flickr or to other photo sites. Also, I make video slide shows that I show on my 75" 4K TV, which actually is pretty nice. I can add background music, titles, credits, annotations, etc. and short video clips as well as stills, which makes shows, especially of trips and vacations, more interesting. I was thinking of getting one of those TVs that looks like a frame and sequences through pictures when you're not using it as a TV.
 

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,185
Format
Multi Format
This is just a quick note to let folks know that I have not lost interest in this topic. In fact, I have been hard at work doing calculations and measurements, though with some interruptions. You know how life gets in the way sometimes.
 

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,185
Format
Multi Format
I am going to break my response into several separate responses, partly because discussing everything in a single post would make the post too long and unfocused. Already this post, which is on a single topic, is rather long.

Where to start? Maybe I will start with gamma, not because it is the most important topic but because it is the most distracting one.

First a comment about gamma's purpose: it is designed to compensate for the fact that the response of a CRT display (old technology, though I am actually using a CRT monitor as I write this) is a nonlinear function of the input. Most detectors have a linear response, which implies that you can't take a signal directly from a detector and apply it to a CRT and have it look right. Therefore, the signal is intentionally distorted just enough to undo the distortion the CRT applies to the signal.

The use of gamma has been carried over from the old analog days to the digital world. In other words, if we were to take a signal from a device with a linear response, digitize it, store the result in a computer as an integer number, then retrieve it, convert it back to an analog signal, and then apply it to a CRT monitor, the image wouldn't look right. Therefore, after being digitized a signal often has a gamma encoding operation applied to it, so that when it is converted back to an analog signal it can be applied directly to the input of the CRT without further processing and the image will look right. In principle the gamma encoding could happen somewhere else in the signal processing chain, such as after retrieval from memory but before conversion to an analog signal, or even after conversion to an analog signal but before being applied to the CRT, but I believe the current convention is to apply the operation before the signal is stored in memory.

Modern displays have a linear response, so any gamma encoded data must be converted back to linear form before being applied to the display device. However, we are stuck with gamma for legacy reasons.

Next, gamma encoding (in a digital world) is a lossy process, meaning that after a signal is gamma encoded the process can't be inverted to restore the original signal without error. This is the case because a gamma encoded signal has to be rounded to the nearest integer in order to be stored in a digital word of finite length. This wouldn't be so bad except for the fact that gamma encoding produces an ambiguous output. For example, if one starts with a 16 bit word, and if the gamma encoded result is also to be stored in a 16 bit word, then at least two different inputs will produce the same output. A simple gamma encoding scheme resulting in a 16 bit word might look like this:

X_gamma_float = 65535*(X^(1/gamma))/(65535^(1/gamma))

followed by

X_gamma_integer=round(X_gamma_float).

The first result (X_gamma_float) is an intermediate and temporary result. It is the last result (X_gamma_integer) that would be saved as a 16 bit result. Two different inputs can sometimes produce the same output. For example, 31596 and 31597 produce the same gamma encoded output when stored as a 16 bit integer, 47039 in this case. There is no way to untangle the ambiguity. It actually turns out that it is seldom if ever important that gamma encoding is a lossy process, but people should at least be aware of the fact that it is.
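
Anyone who wants to see the collisions for themselves can brute-force the whole 16 bit range. A small sketch in Python (assuming gamma = 2.2 and numpy; exactly which pairs collide depends on the precise formula and rounding used):

import numpy as np

gamma = 2.2
x = np.arange(65536, dtype=np.float64)
encoded = np.round(65535.0 * x ** (1 / gamma) / 65535.0 ** (1 / gamma))
values, counts = np.unique(encoded, return_counts=True)
print('distinct outputs:', len(values))                 # fewer than the 65536 inputs
print('inputs lost to collisions:', int((counts - 1).sum()))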

Next, the process of gamma encoding is not, in and of itself, a process that increases the dynamic range. For example, if a 16 bit signal is gamma encoded and stored in a 16 bit word then there is no increase in the maximum value (as referred back to the input values) in the 16 bit word, nor is there a decrease in the minimum signal that can be stored. Therefore, there is no increase in dynamic range.

Next, the gamma encoding process is not, in and of itself, a compression scheme. For example, if a 16 bit word is gamma encoded and stored as a 16 bit word then an image takes the same number of bytes to store compared to just storing the original signal. I will come back to this issue of compression shortly.

Next, because two inputs can sometimes produce the same output upon gamma encoding it actually might provide the theoretical possibility of using gamma encoding as part of a compression scheme. To use the example used previously, if every pair of different input values would produce only one output value for the pair, then in principle the outputs could be stored in a word with one fewer bit. I am ignoring at least one subtle effect that might make this not quite true in real life, but let's just ignore that and assume that a 16 bit input could be stored as a 15 bit output.

Compression is also possible if one were to gamma encode a 16 bit word and store the result in a smaller word length, such as 8 bits. This would involve additional rounding errors, making the process even more lossy. Whether this actually matters in real life is not something I will discuss in this post, but I do want to point out that it does provide at least one mechanism for compression, assuming the damage to the data is not too severe. However, one question about this scheme is whether or not it would be better just to round the original 16 bit value to 8 bits rather than gamma encoding it and rounding the result to 8 bits. I will not discuss that in this post either.

Next, gamma encoding is not directly related to how the eye perceives a signal, which is light coming from a scene. Our eyes live in a world of linear signal space, and whether or not the eye/brain system responds to that linear signal in a nonlinear way, if we were to gamma encode the scene in linear signal space and then present it to the eye the scene would look funny, because our eye expects the scene to be presented to it in linear signal space, not in gamma encoded signal space. Gamma encoding is actually only relevant to this if it provides a scheme for data compression, and then there are several issues to contend with, which I won't go into in this post.

Next there is the question of when is gamma really gamma? As typically implemented gamma encoding is only approximately related to a simple power law transformation. I won't get into that issue in this post, and maybe never, except to note that the issue exists. It's probably not actually very important anyway.

Next, there is the question of how noise plays into a system where gamma encoding is taking place. In particular, does gamma encoding greatly affect the signal to noise ratio? I won't get into that question in this post.

Issues related to gamma will sneak back into the discussion through the back door in later posts. In the long run gamma encoding will turn out to be not very important. However, for now the core message is that gamma encoding is a legacy scheme that comes from the fact that CRT monitors are non-linear devices and is not directly related to a number of other issues to which it is sometimes somewhat mistakenly applied.
 
Last edited:

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,185
Format
Multi Format
Now I will discuss how the digitization of a noisy signal is affected by the presence of noise and the bit depth of the word used to store the results. To make the discussion more concrete I will assume that the input signal has a maximum value of 255 mV and that the storage word is matched to the input signal. For a 16 bit word stored in linear form this would mean that a value of 65535 on the ADC would correspond to 255 mV at the input, an input of 1 mV would give an ADC output of 257, and the ADC step size corresponds to ~3.891 microvolts referred to the input. For an 8 bit word stored in linear form an ADC value of 255 would correspond to 255 mV at the input and the ADC step size is 1 mV. I will also assume that the digitizer is single-ended, meaning that if the input is less than zero volts then the ADC output is zero.

In the simulation, Gaussian noise is added to nominal signal levels. I sampled 99,999 unique points from a Gaussian distribution. The mean of the sampled distribution is zero so that when noise is added to the simulated signal it does not alter the mean of the simulated signal.
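
For anyone who wants to reproduce this kind of simulation, the core of it is only a few lines. Here is a sketch of the procedure as described (my own code differs in its details; this assumes numpy):

import numpy as np

rng = np.random.default_rng(0)
noise_mv = rng.normal(0.0, 1.0, 99_999)          # zero-mean Gaussian, sigma = 1 mV

def adc_output_mv(signal_mv, bits):
    step = 255.0 / (2**bits - 1)                 # mV per ADC step
    counts = np.round(np.maximum(signal_mv, 0.0) / step)   # single-ended: clip at 0
    return np.clip(counts, 0, 2**bits - 1) * step          # back to mV equivalents

for level in (0.0, 1.0, 2.0, 5.0):               # noise-free input levels in mV
    for bits in (8, 16):
        out = adc_output_mv(level + noise_mv, bits)
        print(f'{level} mV in, {bits} bit: mean = {out.mean():.3f} mV, sd = {out.std():.3f} mV')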

Let us start with a simulated noise signal with a standard deviation of 1 mV. This graph shows the mean ADC output averaged over all 99,999 noisy input values as a function of the noise-free input. For the y-axis the ADC outputs are converted to mV equivalents. Both 8 bit and 16 bit simulations are included on the graph. The calculated points are shown as closed circles, and they are connected in the figure by smooth lines.

response of ADC to signal.jpg


The first thing to note is that the 8 bit and 16 bit results overlay each other almost perfectly, with the difference being so slight that it cannot be distinguished on the graph. The second thing to note is that at low signal levels the graph is non-linear, so that at an input of 0 volts the average ADC output is non-zero, averaging ~0.382 for the 8 bit result and 0.399 for the 16 bit result. The reason for the non-linearity and the non-zero intercept is that the average ADC output is biased positive because the ADC is single ended, so negative inputs get reflected as zero outputs rather than negative numbers. The non-linearity only occurs at low input values. It is only slightly non-linear for inputs near 1 mV, and for all intents and purposes the non-linearity is gone at an input of 2 mV. For comparison, a dotted line on the graph indicates perfect linearity.

Also note the complete absence of any indication of a stair step pattern, which would show up, for example, at input values of 2.25 and 2.75 mV if stair stepping were happening. This implies that there is no possibility of banding in an image under extreme image manipulation, provided that a proper workflow is followed. Because of the virtual equivalence in the outputs for the 8 bit and 16 bit digitizers, if a difference in signal to noise ratio is to be observed one will have to look elsewhere for it, such as in the standard deviation at the output.

Next we plot the output noise as a function of input voltage.

noise ouput as function of signal.jpg


At high values of input voltage the noise at the output is constant, with the 8 bit results being ~4% noisier than the 16 bit results. The noise begins to drop at low input signals, beginning at ~2 mV input. The reason for the drop can again be traced to the single-ended nature of the digitizers.

From the results reflected in the two previous graphs one can calculate signal-to-noise ratio. Here's a graph of the signal to noise ratios.

sgnal to noise ratio.jpg


Performance is slightly better for the 16 bit ADC than for the 8 bit ADC, but not much better. We can take the point where the curves cross a signal-to-noise ratio of 1 as the basis for a dynamic range calculation. For the 16 bit results the crossing point occurs at an input voltage of ~0.6043 mV, and for the 8 bit results the crossing point occurs at an input voltage of ~0.7409 mV. From these one can calculate a dynamic range of 344 for the 8 bit ADC and 422 for the 16 bit ADC, an improvement of about 23% at the cost of doubling the storage requirements. Whether a difference in dynamic range of 23% is enough to show up as being visible in a pictorial image is not known to me, but I strongly suspect that such a small difference is unlikely to be noticed, except perhaps in a very carefully scrutinized side by side comparison. This difference is roughly comparable to the difference in grain between Acros 100 and Tmax 100.

Now, another lesson from this is that if the input noise that is inherent in the instrument (e.g. sensor noise, thermal noise in the amplifier circuitry, etc.) is large enough to be very noticeable in 8 bit mode then going to 16 bit mode will result in, at most, only a relatively minor decrease in noise. For the most part all the extra ADC bits are doing is to provide a more accurate characterization of the noise and a rather slight reduction in the output noise level, a 4% difference in the example calculated here. I am pretty sure that, aside from signal levels near zero, this small difference would not be noticeable.

Related to this is that the improvement in dynamic range (23% for the parameters used in this simulation) in going from 8 bits to 16 bits is very modest, certainly not commensurate with the naive expectation one might assume in going from an 8 bit to a 16 bit digitizer. In other words, a factor of 1.23 greater dynamic range is a lot less than the naive expectation of a factor of 257 expansion of dynamic range.

The next thing to note is that the inherent noise provides an absolute limit on the noise performance of the system, independent of any additional noise that might show up, such as film grain. In other words, it can only get worse for real scans as opposed to scans that are blanked off to result in zero signal input. I am pretty sure that in real scans using good scanners it will be film grain that sets the limit, and to my knowledge it has yet to be demonstrated that the noise level for scans of real films is different for 8 bit vs. 16 bit scans. As I mentioned in other posts, I am prepared to be proven wrong by real experimental results with real film scans. (The scans of step tablets by Adrian may require special consideration on this point.)

Something not shown in this post but that I plan to show in later posts showing simulations with higher noise levels is that if the noise level is much more than 1 step size of an 8 bit ADC then the difference between an 8 bit ADC and a 16 bit ADC is completely obscured by the noise, and the dynamic range will be identical for all intents and purposes. Another consequence of more noise is that the curves become more non-linear at low signal levels. In that case dynamic range is completely determined by noise, not by ADC bit depth.

So, if (as in this example using some specific parameters) the dynamic range at the single pixel level is only a few hundred, how is it that we can see a far higher dynamic range on an instrument such as a densitometer? The secret to this is signal averaging. The effective sampling footprint for a single pixel on a 4000 dpi scanner has a linear dimension of about 6 microns. The sampling size of the light spot of a densitometer would be something of the order of 3 mm. Due to signal averaging (i.e. averaging over a much larger spot of film in a densitometer reading compared to a single pixel in a scanned image), the noise level scales with 1/square_root(area), or equivalently the noise varies inversely with the change in linear dimension. Therefore, the noise that shows up in a single pixel of a scan with a good scanner will be about 500 times more than the noise in a densitometer reading. (Here I am now assuming that the noise comes from film grain.) Thus, a dynamic range of ~400 when referred to a single pixel from a scan becomes ~200,000 when read by a densitometer. That's a dynamic range of about 5.3 density units, which I think is actually more than the practical density range that most densitometers are capable of detecting.
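
The arithmetic behind that, spelled out (the 3 mm spot is a round-number assumption):

import math

pixel_um = 25.4e3 / 4000              # ~6.35 micron footprint at 4000 dpi
densitometer_um = 3000.0              # ~3 mm densitometer spot
ratio = densitometer_um / pixel_um    # ~472, i.e. roughly 500x less noise
dr_densitometer = 400 * ratio         # ~189,000, i.e. roughly 200,000
print(round(ratio), round(dr_densitometer), round(math.log10(dr_densitometer), 2))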

I'm sure that everyone has noticed that I haven't yet commented on how gamma encoding affects these results, nor have I commented on Adrian's scans of the dark frames taken with the Epson V850. If all goes according to plan I will discuss those in later posts. I also hope to expand the discussion presented in this post to address different noise levels.
 
Last edited:

Adrian Bacon

Subscriber
Joined
Oct 18, 2016
Messages
2,086
Location
Petaluma, CA.
Format
Multi Format
This is all great information. Maybe it would be better if posted in the resources section so it's easier to refer to and find, but otherwise, a great explanation for the uninitiated.
 

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,185
Format
Multi Format
This is all great information. Maybe it would be better if posted in the resources section so it's easier to refer to and find, but otherwise, a great explanation for the uninitiated.
Not a bad idea, although I will continue to post here for the time being in order to try to keep the information together in one place.
 

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,185
Format
Multi Format
The next series of three curves are similar to those in my previous post that contained graphs, except that for these calculations the noise at the input was twice as high: 2 mV rather than 1 mV.

response of ADC to input.jpg



noise output as a function of signal.jpg


signal to noise ratio.jpg


The curve shapes are similar for this set, but the numbers are different. For the ADC output as a function of input the deviation from linearity starts at ~4 mV rather than 2 mV, and the y intercept is twice as high, at ~0.79 mV.

For noise as a function of input, for the 16 bit ADC the standard deviation of the ADC output approached 2 mV at high input voltages, and the output noise of the 8 bit ADC is ~1% higher. (Recall that with 1 mV input noise the output of the 8 bit ADC was ~4% noisier than the output of the 16 bit ADC.) The deviation from the constant noise started at ~4 mV, a factor of 2 higher than was the case for 1 mV noise input.

The curves for signal-to-noise ratio are also closer together than they were for 1mV noise input. The crossing points for the curves for a signal-to-noise ratio of 1 occurred at higher input voltages than they did for the 1 mV noise calculations.

The calculated dynamic ranges were 198.2 for the 8 bit ADC and 209.8 for the 16 bit ADC, a difference of only 5.5%. This difference was about 4 times less than was the case for 1 mV noise input, and it is highly unlikely that a scene captured with an 8 bit digitizer would be discernible from one captured with a 16 bit digitizer, at least not by visual inspection.

The salient point of this calculation is that as the input noise increases the differences between the 8 bit and 16 bit ADCs, including the differences in dynamic range, decrease rapidly toward zero, although the overall noise performance gets worse, as expected.

At this level of noise input (~0.8% of full scale ADC input) there is no practical difference between an 8 bit digitizer and a 16 bit digitizer.
 

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,185
Format
Multi Format
In previous posts I showed graphs summarizing the theoretical performance of 8 bit digitizers vs. 16 bit digitizers in the presence of ~0.8% noise and ~0.4% noise relative to full scale of a digitizer. The quantities calculated were average ADC output, noise at ADC output (as measured by standard deviation), signal-to-noise ratio, and dynamic range.

In simple terms, at ~0.8% noise there was no practical difference between an 8 bit ADC and a 16 bit ADC, and at ~0.4% noise there was a relatively small difference, possibly even an unnoticeable difference in pictorial imaging applications.

Now I will give results for lower input noise levels of ~0.2% and ~0.1%. Strictly speaking, the simulations were for 0.5 mV and 0.25 mV of noise if full scale for the ADC was 255 mV. Therefore 0.2% and 0.1% are just close approximations of the parameters I used. The actual values of noise and full scale input are not actually relevant, provided that they scale accordingly. It is the relative values that matter, not the absolute values.

First, here are the graphs for ~0.2% input noise.

response of ADC to sgnal.jpg


noise performance.jpg


signal over noise.jpg


A few comments: for the results at ~0.2% input noise the average output values of the 8 bit and 16 bit results overlay very closely, only becoming just noticeably separated near zero input. As expected, the curves become non-linear at very low signal levels, with only small differences between the curves. Both curves are very smooth and conform closely to a linear result except at very low signal levels. (There is actually a very slight amount of waviness in the 8 bit curves, but it is hard to see in the graphs and would not be at all noticeable in a pictorial image.)

The noise levels on the 0.2% graphs are noticeably higher for the 8 bit results. Whether this noise difference would be noticeable in actual photographs is open to question, and I don't know the answer. The signal/noise ratio is correspondingly lower for the 8 bit results, and the dynamic range is 77% higher for the 16 bit results (dynamic range = 869) than for the 8 bit results (dynamic range = 492).

One can conclude that if the noise level is 0.2% an 8 bit digitizer is probably adequate for non-critical work, but a 16 bit digitizer would give somewhat better results because the noise level is somewhat lower and the dynamic range is somewhat higher. It might even be adequate for relatively critical work, even if not at the very highest possible quality level. It really comes down to a judgment call. As a side note, banding will not be noticeable with either 8 bit or 16 bit digitizer, provided a proper workflow is followed in any post-scan manipulations.

Next are the graphs for ~0.1% input noise.

response of ADC to signal.jpg


noise performance.jpg



signal over noise.jpg



This is where things start to get really interesting. For the 8 bit digitizer the average ADC output curve is no longer mostly linear. Instead it is a wavy curve that tends to follow the 16 bit line but with significant deviations. This is not a stair step pattern, but it is tending toward a stair step pattern. This might show up in visual examination of scenes with smooth gradients, not as true posterization, but possibly as a disturbing lack of true smoothness in the gradients. I don't know if one would actually notice this, but I suspect it would be noticeable. (This differs somewhat from what I thought before I did calculations at this level.)

Even more noticeable in the graphs are the wild oscillations in the output noise as the input is varied. I think this would likely be noticeable as noise variations in different regions of smooth gradients.

These variations are also very noticeable in the signal/noise graphs. Given these wild oscillations it is unclear just how meaningful a dynamic range calculation would be, but one thing is clear: it would be a lot better for a 16 bit digitizer than for an 8 bit digitizer. Formally, I calculate a dynamic range of 510 for the 8 bit ADC and 1806 for the 16 bit digitizer. Thus, if the input noise is as low as 0.1% then an 8 bit digitizer might be OK for very rough work, such as generating proofs, but not for quality work. It is notable however that the dynamic range differed by only a factor of 3.5, not the factor of 257 which would be calculated just on the basis of the bit depth of a 16 bit digitizer vs. an 8 bit digitizer.

As a reminder, these simulations cover all forms of noise, both inherent noise in the scanner and grain in the film. It can be shown that different noise sources, as long as they are not correlated with each other, will combine as the square root of the sum of the squares. This is a rigorous result, and it doesn't depend on the type of noise, as long as a noise source can (technically speaking) be characterized as having a standard deviation. For example, if inherent scanner noise were 0.5 mV (on a 255 mV scale), and film grain noise were equivalent to 1.0 mV, then in combination the noise would be 1.12 mV. Film grain would make the overwhelming contribution to the combined noise, and the smaller noise source could almost be ignored. The reverse is also true. If the inherent noise were twice that of the grain noise then in combination one could almost ignore the grain noise.

A sample calculation may be useful. The grain for Tmax is specified as 7. That means that if sampled with a 48 micron spot size and a density of 1 the standard deviation in density is 0.007 density units. In terms of how much light is transmitted to the detector (which is what the sensor really responds to) it would average 0.1 of the maximum amount of light, but over a range of 0.0984 to 0.1016, where those two limits are the single sigma limits. If the full range of the ADC at 100% transmission were 255 mV then at a density of 1 the transmission would vary over a minus to plus one sigma range of 0.816 mV, which equates to a standard deviation of 0.408 mV or 0.16% of full scale. However, a scanner uses a much smaller effective sampling size than 48 microns. For example, the effective sampling size for a 4000 dpi scanner is 6.35 microns. Therefore, the standard deviation of the noise seen by a scanner pixel would be 0.16% multiplied by 48 divided by 6.35, or 1.2% of full scale. This is significantly higher noise than the maximum noise in any of the simulations I did (0.78% of full scale), which was already well into the range where it didn't really matter if one used an 8 bit or 16 bit digitizer. I won't try to extend this calculation to denser regions of a Tmax negative, at least not at this time, nor will I try to extend the calculation to reversal film, but I think it at least shows that film grain noise for a very fine-grain film at a certain reasonable film density level is in a range where the digitizer word length doesn't matter. I think that to extend this analysis to other density levels and film types would best be done by direct experiment.
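
Here is the same calculation laid out step by step so anyone can check my numbers:

sigma_D = 0.007                       # Tmax granularity at D = 1, 48 micron aperture
t_low, t_high = 10**-(1 + sigma_D), 10**-(1 - sigma_D)   # 0.0984 to 0.1016
full_scale_mv = 255.0
sigma_mv = (t_high - t_low) / 2 * full_scale_mv          # ~0.41 mV, ~0.16% of full scale
pixel_um = 25.4e3 / 4000                                 # ~6.35 micron at 4000 dpi
sigma_per_pixel = sigma_mv * 48 / pixel_um               # noise scales with 1/linear size
print(f'{sigma_per_pixel / full_scale_mv:.2%} of full scale')   # ~1.2%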

Has anyone produced a truly grain free scan of a film using a high quality scanner with the work saved in 8 bit mode? If so, what was the noise level relative to the full bit depth? Was it greater than 2 ADC steps of an 8 bit digitizer? If so then it doesn't matter if you use an 8 bit or 16 bit digitizer. Even at 1 bit of noise it won't matter much, and you might even get away with a half bit of noise for less critical work.

Lest there be any misunderstanding, I need to repeat something which I have already repeated many times. I am not saying that there is anything wrong with saving scans in 16 bit mode, other than doubling the amount of disk space used for your scans. But what I am saying is that if there is even a relatively small amount of noise in your scans (including but not limited to grain) then there is nothing to be gained by saving in 16 bit mode.

On the topic of grain in scans, has anyone noticed how frequently people cite the ability to see grain in their scans as an indicator that a scanner is pulling out all of the useful information from the scan? I have. This is actually not quite a valid criterion because grain typically starts showing before all of the useful information is extracted from a piece of film, but it is nevertheless a very widely quoted rough criterion for evaluating the quality of a scanner.
 

Alan Johnson

Subscriber
Joined
Nov 16, 2004
Messages
3,270
Has anyone produced a truly grain free scan of a film using a high quality scanner with the work saved in 8 bit mode? If so, what was the noise level relative to the full bit depth? Was it greater than 2 ADC steps of an 8 bit digitizer? If so then it doesn't matter if you use an 8 bit or 16 bit digitizer. Even at 1 bit of noise it won't matter much, and you might even get away with a half bit of noise for less critical work.
Not sure if this meets your requirements; it was scanned on a Plustek 8100 [Reflecta make a better scanner] in 48->24 bit color, then processed in Photoshop Elements, which I believe saves in 8 bit after converting to B/W with Silver Efex. Click twice to get the full size, which is 3.2x the width of a 1920 screen. The sky is near grain free with Adox CMS 20 II.
https://www.flickr.com/photos/98816417@N08/51577870188/
 

Adrian Bacon

Subscriber
Joined
Oct 18, 2016
Messages
2,086
Location
Petaluma, CA.
Format
Multi Format
A sample calculation may be useful. The grain for Tmax is specified as 7. That means that if sampled with a 48 micron spot size and a density of 1 the standard deviation in density is 0.007 density units. In terms of how much light is transmitted to the detector (which is what the sensor really responds to) it would average 0.1 of the maximum amount of light, but over a range of 0.0984 to 0.1016, where those two limits are the single sigma limits. If the full range of the ADC at 100% transmission were 255 mV then at a density of 1 the transmission would vary over a minus to plus one sigma range of 0.816 mV, which equates to a standard deviation of 0.408 mV or 0.16% of full scale. However, a scanner uses a much smaller effective sampling size than 48 microns. For example, the effective sampling size for a 4000 dpi scanner is 6.35 microns. Therefore, the standard deviation of the noise seen by a scanner pixel would be 0.16% multiplied by 48 divided by 6.35, or 1.2% of full scale. This is significantly higher noise than the maximum noise in any of the simulations I did (0.78% of full scale), which was already well into the range where it didn't really matter if one used an 8 bit or 16 bit digitizer. I won't try to extend this calculation to denser regions of a Tmax negative, at least not at this time, nor will I try to extend the calculation to reversal film, but I think it at least shows that film grain noise for a very fine-grain film at a certain reasonable film density level is in a range where the digitizer word length doesn't matter. I think that to extend this analysis to other density levels and film types would best be done by direct experiment.

I think it's important to point out that while this is not invalid, it's also not entirely accurate, as many, if not most, ADC chips I'm aware of operate at more like 1.5 V, 3.3 V, 5 V, or higher full scale for their input, and typically noise levels are down in the mV (or less) range. It's pretty easy to characterize things as "8 bits is all you need" when your full scale is so small that half a mV is enough noise to obviate the need for 16 bits, but if your full scale is 1.5, 3.3 or 5 volts and your noise levels are 1 or 2 mV, it's a different story. I just thought it's important to point that out. It's useful to have a scale that can demonstrate the concept like what you're doing here, but people should also be aware that the scale you're using does not necessarily reflect the voltage levels of actual applications. I don't know what the voltage range of Epson's ADCs is, but I'd have a difficult time believing they were less than a volt, simply because the larger your full scale voltage is, the easier it is to have a clean analog front end relative to the full scale voltage level. Obviously, depending on the application, the analog full scale voltage may have an upper limit due to other factors (or physics), but generally speaking, it behooves the manufacturer to go with as large an input voltage as they can reasonably get, as it makes it a lot easier to get better noise performance relative to the full scale voltage levels.
 

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,185
Format
Multi Format
Not sure if this meets your requirements, it was scanned on a Plustek 8100 [Reflecta make a better scanner] in 48 ->24 bit color then processed in Photoshop Elements which I believe saves in 8 bit after converting with silver efex to B/W. Click twice to get the full size which is 3.2 x the width of a 1920 screen. The sky is near grain free with Adox CMS 20 II.
https://www.flickr.com/photos/98816417@N08/51577870188/
Interesting. When I zoom in I think I can see a little bit of grain. It doesn't actually take very much grain to make 8 bit storage a reasonable option. Can you generate a histogram of a small featureless patch of the sky?
 

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,185
Format
Multi Format
I think it's important to point out that while this is not invalid, it's also not entirely accurate, as many, if not most, ADC chips I'm aware of operate at more like 1.5 V, 3.3 V, 5 V, or higher full scale for their input, and typically noise levels are down in the mV (or less) range. It's pretty easy to characterize things as "8 bits is all you need" when your full scale is so small that half a mV is enough noise to obviate the need for 16 bits, but if your full scale is 1.5, 3.3 or 5 volts and your noise levels are 1 or 2 mV, it's a different story. I just thought it's important to point that out. It's useful to have a scale that can demonstrate the concept like what you're doing here, but people should also be aware that the scale you're using does not necessarily reflect the voltage levels of actual applications. I don't know what the voltage range of Epson's ADCs is, but I'd have a difficult time believing they were less than a volt, simply because the larger your full scale voltage is, the easier it is to have a clean analog front end relative to the full scale voltage level. Obviously, depending on the application, the analog full scale voltage may have an upper limit due to other factors (or physics), but generally speaking, it behooves the manufacturer to go with as large an input voltage as they can reasonably get, as it makes it a lot easier to get better noise performance relative to the full scale voltage levels.

True, but also consider that to get to several volts you would ordinarily need to amplify the signal, and in that case the limiting thing as far as noise is concerned is the small-signal noise level before amplification.

Also, I picked 255 mV full scale as an arbitrary choice because it conveniently gives one mV per step. I could have chosen 2.55 volts, and then the step size would be 10 mV.

In any case, the point of your post has to do with noise levels and it is well taken, but it then leads to the question of what are the electronic noise levels in the instruments.


And then there is the question of film grain.
 
Last edited:

Adrian Bacon

Subscriber
Joined
Oct 18, 2016
Messages
2,086
Location
Petaluma, CA.
Format
Multi Format
True, but also consider that to get to several volts you would ordinarily need to amplify the signal, and in that case the limiting thing as far as noise is concerned is the small-signal noise level before amplification.

Also, I picked 255 mV full scale as an arbitrary choice because it conveniently gives one mV per step. I could have chosen 2.55 volts, and then the step size would be 10 mV.

In any case, the point of your post has to do with noise levels and it is well taken, but it then leads to the question of what are the electronic noise levels in the instruments.


And then there is the question of film grain.

For sure, and I'm not intending to diminish that, as it's very useful information, and yes, the 255 mV range was a good pick for demonstration purposes. But sometimes people who aren't as familiar with this sort of stuff see info like this and assume that that's the voltage level things run at, so it's important to spell out that in the real world the full scale voltage range is often a lot bigger than the voltage levels you're using here.

EDIT: with real world numbers, with a relatively common 5 V full scale input range (5000 mV), an 8 bit ADC would have a step size of ~19.6 mV and a 16 bit ADC would have a step size of ~0.076 mV. Even with signal amplification, it's not that hard to get the noise levels down to a lot less than 19 mV if the original input signal voltage is anything reasonable. Less than a mV of noise would likely take some pretty high performance analog circuitry, but less than basically 20 mV is pretty doable. Heck, in audio land, 24 bit ADCs are pretty common for that exact reason. Even with an amplified input signal it's not uncommon to have an input noise floor significantly lower than what you'd get with a 16 bit ADC, which is why a 24 bit ADC is pretty common in that arena. The point being, if very low noise levels were really much of an issue, there'd be little point in going much past 8 bits. The reality is, more often than not, getting noise levels way down generally isn't a problem, and higher bit depth ADCs generally net more overall dynamic range. How much dynamic range depends on the application, and oftentimes there's a tradeoff between application specific noise levels, cost, and what bit depth ADC the manufacturer goes with.

Again, not to take away from what Alan has been posting as it's very good information in explaining things, but I think it's important to note more "real world" numbers for context. We don't know what Epson's internal specs are so all we can do is make some informed guesses based on poking around and running what tests we can.
 
Last edited:

Alan Johnson

Subscriber
Joined
Nov 16, 2004
Messages
3,270
Interesting. When I zoom in I think I can see a little bit of grain. It doesn't actually take very much grain to make 8 bit storage a reasonable option. Can you generate a histogram of a small featureless patch of the sky?
Here is the histogram from a 1mm square patch of sky from the pic linked in post 138. This is after converting the scan of the 1mm square to B/W using the same processing as with the original linked.
When the scanner is zoomed in on auto setting to select a 1mm square it automatically resets the black and white points, so the apparent contrast of this 1mm square of sky is greatly increased.
It looks like there is some grain detectable when this is done, as shown in the other pic.
I do not know the significance of this, if any.


8 bit test-1.jpg
8 bit test 1-1.jpg
 

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,185
Format
Multi Format
Here is the histogram from a 1mm square patch of sky from the pic linked in post 138. This is after converting the scan of the 1mm square to B/W using the same processing as with the original linked.
When the scanner is zoomed in on auto setting to select a 1mm square it automatically resets the black and white points, so the apparent contrast of this 1mm square of sky is greatly increased.
It looks like there is some grain detectable when this is done, as shown in the other pic.
I do not know the significance of this, if any.


View attachment 301462 View attachment 301463
On the scale of the effects I have been discussing that's a huge amount of noise. Presumably it's coming from film grain. At that level of noise there is no point in saving the scans in 16 bit mode except possibly for one subtle point, and that is what would happen in the darkest part of the negative. That's what would determine dynamic range, but in terms of the part of the scene representing the sky the noise level in this scan is high enough that it would suppress the possibility of banding under extreme image manipulation. To be sure that banding would not happen it would be best to convert an 8 bit scan to 16 bits before doing the image manipulation.

One other comment. It looks like the sky is not uniform, even in the 1mm patch, and that is part of the reason why the histogram is so broad, but there is enough grain that even if the sky was completely uniform it would still be enough grain to suppress banding.

Repeating a point made above, to determine the effect of noise on dynamic range would require sampling in the darkest part of a negative that includes a region of maximum possible density. The chances are that there isn't a patch of negative in that scene that would qualify, but if so then you could also zoom in on that part of the scene and look at the histogram.
 

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,185
Format
Multi Format
I should make a comment about the properties of grain noise vs. inherent noise. Grain noise can never produce a negative signal to the digitizer input. Electronic noise can present a negative signal to the input of the digitizer. This will have some effect on the shape of the curves at very low signal level, but not at higher signal levels. However, the main point is still valid, which are that if the noise (be it grain or electronic noise) is of the order of or greater than the ADC step size of an 8 bit digitizer then there is little to nothing gained by digitizing with a greater bit depth.
 
Last edited:
Joined
Aug 29, 2017
Messages
9,444
Location
New Jersey formerly NYC
Format
Multi Format
On the scale of the effects I have been discussing that's a huge amount of noise. Presumably it's coming from film grain. At that level of noise there is no point in saving the scans in 16 bit mode except possibly for one subtle point, and that is what would happen in the darkest part of the negative. That's what would determine dynamic range, but in terms of the part of the scene representing the sky the noise level in this scan is high enough that it would suppress the possibility of banding under extreme image manipulation. To be sure that banding would not happen it would be best to convert an 8 bit scan to 16 bits before doing the image manipulation.

One other comment. It looks like the sky is not uniform, even in the 1mm patch, and that is part of the reason why the histogram is so broad, but there is enough grain that even if the sky was completely uniform it would still be enough grain to suppress banding.

Repeating a point made above, to determine the effect of noise on dynamic range would require sampling in the darkest part of a negative that includes a region of maximum possible density. The chances are that there isn't a patch of negative in that scene that would qualify, but if so then you could also zoom in on that part of the scene and look at the histogram.

Alan, if we have to convert the image to 16 bits before doing the image manipulation, we won't save anything on memory. I've lost track of whether there are other advantages to scanning at 8 bits.
 

gone

Member
Joined
Jun 14, 2009
Messages
5,504
Location
gone
Format
Medium Format
Wow. I started reading on this page instead of at the start of the thread, and don't even try it. You folks are into something I never ever dreamed of attempting.

In my scan and inkjet print days I went with BO printing, so there were no printing curves, no gamma, and just the one black ink on paper. Test prints were made to determine what worked best, similar to a darkroom. The darkroom is easier in a sense, I spent too much time on a computer making inkjets.

I still proof scan because 35mm negs are little, and my BO prints on art paper look more like "art" prints, w/ deep blacks, and shadows similar to lithographs, etchings or other types of fine art prints. Probably because fine art B&W printing is done only w/ black ink on paper. These BO prints look totally different than my straight, non manipulated darkroom prints.
 

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,185
Format
Multi Format
Alan, if we have to convert the image to 16 bits before doing the image manipulation, we won't save anything on memory. I've lost track of whether there are other advantages to scanning at 8 bits.
Let us suppose you have scanned 10,000 color images and saved them as 16 bit images. Further suppose these were 35mm images scanned at 4000 dpi. That's ~128 MB per image or ~1.28 TB of storage for the set. You are going to keep those images as an archive, unmodified. Scanning in 8 bits would save you ~0.64 TB of storage.
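
Spelling out the arithmetic (rounding slightly differently gives the ~128 MB figure above):

w_px = round(36 / 25.4 * 4000)              # ~5669 px across a 36 mm frame
h_px = round(24 / 25.4 * 4000)              # ~3780 px
mb_per_image = w_px * h_px * 3 * 2 / 1e6    # RGB at 2 bytes per channel
print(f'{mb_per_image:.0f} MB per image, {10_000 * mb_per_image / 1e6:.2f} TB for 10,000')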

Now what happens when you start manipulating the images? You will be doing this one image at a time, so during the manipulation phase you are not really affecting the archival storage requirement, even if you do a temporary save of the image you are manipulating. (I will assume you will be doing image manipulation in 16 bits regardless of the original bit depth of the archival images.)

Once you are completely finished with the manipulation (not saving the temporary intermediate results you made during manipulation) you have the choice of saving the results in either 8 bit or 16 bit format. It is generally accepted that once the image is in final form there is no perceptible difference between an 8 bit and a 16 bit image. Some output devices can't even deal directly with 16 bits.

Anyway, if you start with a 16 bit archival images, process them one at a time, and save them as 16 bit images in their final form (not saving the temporary intermediate results) then your total storage requirement is 2.56 TB.

On the other hand, if you start with 8 bit archival images, and do exactly the same thing (manipulating them in 16 bit form, but not saving the intermediate images permanently), and save them as 8 bit images in their final form you will require 1.28 TB, for a savings of 1.28 TB. The difference in storage may or may not make a difference to you. That's a personal decision.

Even if you start with 8 bit and save as 16 bit in final format to preserve the possibility that you might want to do further manipulations later, even though you thought you had already reached final form, you will still save about 0.64 TB if the original archival images are in 8 bit rather than 16 bit format.

Regardless of all of this, some scanner/software combinations only allow the operator to save 8 bit results, so in that case there is no choice. I believe the Leaf scanner is one of them when using Silverfast. Then the question becomes, does it matter that you can only save in 8 bit format? If the 8 bit images are noisy/grainy enough then it doesn't really matter that 16 bit is not available.

Zeroing in on the Leaf scanner, one of the complaints about that ancient scanner is that it's a little noisy in the dense part of the film. That automatically means that there would be no point in saving the scans in 16 bit form, even if 16 bit saves were available, because the noise level of the scanner alone is sufficient to make 16 bit saves superfluous, regardless of the amount of grain in the film.

This (i.e. the inherent noise level) may or may not apply to other scanner/software combinations.
 
Last edited:

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,185
Format
Multi Format
The next topic I want to touch on is the effect of gamma encoding and how that interacts with bit depth if the input signal is noisy.

I don't want to generate full curves like the ones I already posted because it took A LOT of time to do that, so I will just concentrate on one condition, which is that the noise at the input of the digitizer is one part in 255, and I will be looking at the ADC output level at one part in 255 relative to the maximum of the ADC. To put this in terms of the same concrete example I have been using, that's 1 mV out of a maximum of 255 mV, though as I noted in other posts it's the relative numbers that matter, not the absolute numbers.

First result: Scan performed in 16 bit mode with a single ended ADC, then converted to a gamma encoded result by applying the power law equation using floating point arithmetic, then rounded off to a 16 bit integer for storage as a gamma encoded result, then converted back to linear signal space by inverting the power law equation using floating point arithmetic, then rounded off to a 16 bit integer. The standard deviation of the noise is 222.7, which on an 8 bit scale would be equivalent to 0.867. The standard deviation of the original signal stored in 16 bit linear integer format was 222.7. The two numbers only differed in the third or fourth decimal place (depending on where one chooses the rounding point). In other words, gamma encoding was entirely irrelevant.

Second result: Scan performed in 16 bit mode in a single ended ADC, then converted to a gamma encoded result using floating point arithmetic, then rounded to a 16 bit integer, then converted back to linear signal space using floating point arithmetic, then rounded to 8 bits. The standard deviation was 0.902. The standard deviation of the original signal in 8 bit linear mode was 0.919. The gamma encoding/decoding process made less than 2% difference. That's a small enough difference that we can safely ignore it for practical purposes.

The above results were for the full sequence of gamma encoding and then gamma decoding, with rounding to integer results occurring at appropriate points of the signal processing chain. This is what would be appropriate for display on a linear device, like a modern flat screen monitor. Basically the gamma transformations made at most a negligible difference in the relative noise levels compared to just storing the information in linear format.
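
For anyone who wants to reproduce this kind of round trip, the full encode/store/decode chain looks roughly like this (a sketch assuming gamma = 2.2 and numpy; my actual code differs in its details):

import numpy as np

rng = np.random.default_rng(0)
gamma = 2.2
level = 65535.0 / 255.0                              # 1 part in 255 of full scale
linear = np.round(np.maximum(level + rng.normal(0.0, level, 99_999), 0.0))

encoded = np.round(65535.0 * (linear / 65535.0) ** (1 / gamma))   # 16 bit gamma store
decoded = np.round(65535.0 * (encoded / 65535.0) ** gamma)        # back to linear

print(f'sd in linear 16 bit: {linear.std():.1f}')
print(f'sd after gamma round trip: {decoded.std():.1f}')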

To save space I won't discuss what happens if the data is left in gamma-transformed form. That is only relevant to display on a CRT monitor, which most people don't use these days, though I am a troglodyte who still sometimes uses a CRT monitor.

The bottom line is that gamma transformation isn't going to convert a signal with relatively low but non-zero noise into a signal with a lot of noise, even if performed in 8 bit mode. This was a little bit surprising to me, but just a little.
 

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,185
Format
Multi Format
The next topic deals with scanner design in the signal path. I do not have access to the circuit diagrams of any scanner, or even the block diagrams of the signal path. However, I do have some experience in the realm of instrumentation design that I think is relevant, and although I am not an engineer, I have worked closely with electronic engineers in the design of instruments, including the design of the signal paths.

I am going to make what may seem to be a bold statement, but I am pretty sure it won't be contradicted by real life examples. An engineer designing a scanner will only include one ADC in the instrument. He or she will not waste his or her time and the company's money by putting in two ADCs, one of 8 bits and the other of 16 bits, because nothing would be gained by doing so, and that is because a 16 bit ADC can be used as if it were an 8 bit ADC by simply rounding the results or truncating the low order bits of the 16 bit result to generate an 8 bit output. This asserted fact takes one variable out of the analysis, i.e. whether the 8 bit ADC in a two-ADC design is somehow noisier than the 16 bit ADC in the same scanner.

Furthermore, if the engineer wants to have the option for an 8 bit output then any signal processing done in the scanner will be done in 16 bit mode, preferably using floating point arithmetic (or possibly some integer bit depth greater than 16 bits), and only converted to 8 bit mode for the final output. Otherwise the programmer would have to program two complete and separate signal processing paths when that is neither necessary nor desirable.

Thus, if the output of a scanner in 8 bit mode is noisy and the output in 16 bit mode on the same scanner is a lot less noisy it is not because of the bit depth but rather some other strange and unexplained thing must be taking place within the scanner. There can be a relatively small difference in noise level at the output of the ADC, but not a lot of difference. There are a few subtleties involved here, but the explanation would become long and convoluted and actually not very important when all is said and done.

One exception to the above analysis occurs if the inherent noise in 16 bit mode is very low. In that case the instrument designer may choose to implement a digital dithering scheme before converting to 8 bit mode. This would be done to eliminate the possibility of banding. In this case a good designer will choose about a one half bit amount of dither (one half bit based on standard deviation on an 8 bit scale) because that is enough to eliminate banding while at the same time not adding more noise than necessary. I don't know if any scanner manufacturers do this in their embedded firmware. I do know that Vuescan does not dither when saving in 8 bit mode.
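
To illustrate, a dithering step like that might look something like this (a sketch assuming Gaussian dither at half an 8 bit step; a real implementation might use triangular dither instead):

import numpy as np

rng = np.random.default_rng(0)

def to_8bit_with_dither(samples_16bit):
    # Half a bit on the 8 bit scale is 0.5 * 257 on the 16 bit scale,
    # since one 8 bit step spans 257 sixteen-bit counts.
    dithered = samples_16bit.astype(np.float64) + rng.normal(0.0, 0.5 * 257.0, samples_16bit.shape)
    return np.clip(np.round(dithered / 257.0), 0, 255).astype(np.uint8)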

This issue becomes relevant when one discusses scanner noise in a dark image.
 
Last edited: