SilverFast 8: 16-bit grey scale?


alanrockwood

Also, if the pixel footprint of the scanner is wide (e.g. a 2000 dpi scanner vs. a 4000 dpi scanner) then in principle it is necessary to use more bits per word to capture the noise/grain when using the low-dpi scanner.

This comment assumes that grain is the only source of noise. If other noise sources are present, and if those noise sources are large enough to show up in an 8 bit scan, then the dithering from those sources will mean that 8 bits is enough to avoid banding, and going to more bits (e.g. 16) doesn't really help the image quality.
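
A quick way to see the dithering argument at work is a minimal numpy sketch (not from the thread; the gradient and noise levels are made up for illustration). A faint gradient quantized to 8 bits with no noise collapses into a handful of bands, while the same gradient dithered by grain-like noise wider than one 8-bit step survives an extreme contrast stretch, because local averaging recovers the smooth ramp:

```python
import numpy as np

rng = np.random.default_rng(0)

# A faint horizontal gradient spanning only ~10 of the 256 8-bit steps,
# like a smooth sky tone. All values are made up for illustration.
gradient = np.tile(np.linspace(0.30, 0.34, 1024), (256, 1))

# "Grainless" capture: quantize the clean signal straight to 8 bits.
clean = np.round(gradient * 255).astype(np.uint8)

# "Grainy" capture: add noise wider than one 8-bit step, then quantize.
noisy = gradient + rng.normal(0.0, 2.0 / 255.0, gradient.shape)
grainy = np.round(np.clip(noisy, 0.0, 1.0) * 255).astype(np.uint8)

def stretch(img, lo=0.28, hi=0.36):
    """Extreme contrast stretch, like pulling histogram limits in close."""
    x = img.astype(np.float64) / 255.0
    return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

# Column means stand in for the eye blending neighboring pixels: the
# grainless capture keeps its ~10 hard steps (banding), while the grainy
# capture averages back out to a smooth ramp.
print("grainless, distinct column means:",
      np.unique(stretch(clean).mean(axis=0)).size)   # ~10: visible bands
print("grainy, distinct column means:  ",
      np.unique(stretch(grainy).mean(axis=0)).size)  # ~1024: smooth
```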
 

Adrian Bacon

Implicit in what I wrote is that the number of bits sufficient to adequately capture a signal depends on the footprint of the capture window. The footprint may be measured in time increments or area increments, depending on the application: the sampling footprint is an area increment in film scanning and a time increment in audio recording. There is always an increase in noise as the sampling footprint becomes smaller. Therefore, the smaller the sampling footprint, the fewer bits are needed to capture the signal.

I spent a fair amount of my career dealing with instrumentation that acquires signals through detectors that generate pulses, sometimes electron multipliers (to detect ions in a mass spectrometer) and sometimes photomultipliers (to detect photons in an optical experiment). From the point of view of signal acquisition there's no difference between electron multipliers and photomultipliers, so let's frame most of the discussion in terms of photomultipliers unless otherwise noted.

When a photon hits a photomultiplier it generates a pulse of electrons at the output end of the photomultiplier. The pulse typically lasts a few billionths of a second, or even less than a billionth of a second in some devices, and it typically contains something like a million electrons. (It can be more in some devices or less in others.) Those signal levels are low enough that photons can be counted individually, as long as the light flux hitting the detector is less than, let us say, about a hundred million photons per second. If the acquisition footprint is a few billionths of a second then all that is needed to acquire all of the data in the signal is one bit. If the intent is to aggregate the data into larger time increments, let us say increments of ten billionths of a second, then one can sum the individual counts, and in that case a larger word size is needed to hold the summed result. But the word size can still be pretty small if the aggregated time windows are small. This is not a contrived example: the numbers given are roughly on the scale of what one would see in a time-of-flight mass spectrometer, and some time-of-flight mass spectrometers acquire signal through a counting mode of data acquisition.
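
A small sketch of the counting arithmetic in the paragraph above, with illustrative numbers loosely following the post: individual 1 ns bins need only one bit, and an aggregated 10 ns window needs only enough bits to hold its largest count:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative numbers loosely following the post: ~1e8 photons/s and a
# 1 ns acquisition footprint give ~0.1 expected pulses per bin, so each
# bin effectively holds a single bit (pulse / no pulse).
flux = 1e8                      # photons per second at the detector
bin_width = 1e-9                # 1 ns footprint
p = flux * bin_width            # ~0.1 pulses per bin

bins = rng.random(1_000_000) < p        # one-bit acquisition

# Aggregate ten 1 ns bins into each 10 ns window by summing the counts;
# the summed word only needs ceil(log2(max_count + 1)) bits.
counts = bins.reshape(-1, 10).sum(axis=1)
print("max count in a 10 ns window:", counts.max())
print("bits needed for the window :", int(np.ceil(np.log2(counts.max() + 1))))
```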

In the case of film scanning, there are actually scanners that use photomultipliers to detect light, such as drum scanners. I don't know if any of them use photon counting to acquire the signal. However, if they are doing pulse counting then at the lowest level they are effectively doing a one-bit conversion of the analog signal to digital form, occurring on a nanosecond time scale. They would then aggregate those counts into larger time windows (which map onto larger spatial windows on the film plane), and the bigger the aggregated time windows, the more bits are needed to hold the counts. At the lowest level, however, a single bit is all that is needed; in other words, that's all the dynamic range that is needed.

What about higher bit depths when scanning film? Suppose, for example, that I wanted to build a scanner with a dynamic range of sixteen million on a linear scale, or 7.2 on a log scale. I could do that. It would require a 24 bit A/D converter; those exist. (Well, maybe I couldn't design it, but a good engineer could.) But it would be meaningless, because most of the resolution of the A/D converter would be wasted on highly accurate characterization of the noise. In fact, for film scanning even a 16 bit A/D converter is wasting bits (i.e. wasting dynamic range) by acquiring really accurate numbers for the noise (i.e. grain and other forms of noise), which adds nothing at all to the pictorial information in the film. In most cases (I will say almost all cases, and probably all cases of practical interest) 8 bits is good enough to scan a black and white negative, because you are already well into the noise level (i.e. grain and other forms of noise) at that point.

I demonstrated this principle already using acquired data, at least in a small way, and I explained the theoretical foundation for it. I did not cover all possible cases; I just showed a couple of them. However, I have yet to see anyone demonstrate that 8 bits is insufficient to acquire an image from a conventional black and white negative using the workflow I described. Would someone please do some experiments to try to demonstrate otherwise? Use Tmax 100 or Acros, because those are the films most likely to show that my assertion fails. I am perfectly willing to be proven wrong.

Now, if there is a film that is the next thing to grainless and has a very high density range (Velvia? or maybe microfilm processed in a pictorial mode?) then all bets are off. I'm not saying that 8 bits would definitely be insufficient, only that it might not be sufficient. And even then it's only going to matter if one plans to do extreme image manipulation after the scan is acquired. Otherwise 8 bits is plenty, because that already exceeds the tonal gradation that the eye can detect.

Lest anyone misunderstand my position: I am not saying there is anything technically wrong with 16 bit scans (unless you count wasting hard disk space on ever finer characterization of the noise in your photos), but in almost all (and possibly all) cases it is not necessary to use more than 8 bits for storing the raw scans. In fact, 8 bits is enough to store a final post-manipulation image as well, because that already exceeds the tonal gradation that the eye can detect. It is only in the intermediate stage of image manipulation that more than 8 bits serves a useful purpose.

This is a lot of words to describe the processing of a signal that is done before it is converted to what we know of (and commonly refer to) as the output of an ADC, e.g. Sony's DSD (Direct Stream Digital), which is effectively a 1 bit signal. It is not, however, a true ADC. All Sony did (and all you're really describing) is a fair amount of the stuff that you have to do anyway to get an ADC output, while skipping the final part that produces an ADC output. Sony used it as a marketing gimmick to try to counter everybody just going to 24/96 (or 24/192) audio via traditional ADC technology, but the results aren't really any better (and depending on the application are worse) than just doing the traditional high bit depth ADC route. Even drum scanners have ADCs that the PMTs feed into. The PMT just gives you a significantly lower noise signal.
 

Adrian Bacon

I don't know if this actually happens in practice, given the noise profiles.

Unless you're saving the raw sensor samples, yes, it happens. All colorspaces are gamma encoded: sRGB is ~2.4, AdobeRGB is ~2.2, ProPhoto is 1.8, etc. Gamma encoding takes discrete tone values from the highlights and gives them to the lower (darker) values so that you can encode 12-13 stops of DR into an 8 bit file.
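
For the curious, here is a minimal numpy sketch of the redistribution Adrian describes, assuming a plain power-law gamma of 2.2 (real sRGB adds a linear toe, ignored here). It counts how many 8 bit codes land in the darkest stop versus the brightest stop, compared against a linear encode:

```python
import numpy as np

# Plain power-law gamma (assumed 2.2); real sRGB adds a linear toe.
def encode8(linear, gamma=2.2):
    return np.round(255 * linear ** (1 / gamma)).astype(np.uint8)

linear = np.linspace(0.0, 1.0, 1_000_001)   # dense sweep of scene values
gamma_codes = encode8(linear)
linear_codes = np.round(255 * linear).astype(np.uint8)

# Compare codes spent on the darkest stop (~12 stops below full scale)
# with the brightest stop (the top half of the linear range).
dark = linear <= 1 / 4096
bright = linear >= 0.5
print("gamma encode:  bottom stop", np.unique(gamma_codes[dark]).size,
      "codes, top stop", np.unique(gamma_codes[bright]).size, "codes")
print("linear encode: bottom stop", np.unique(linear_codes[dark]).size,
      "codes, top stop", np.unique(linear_codes[bright]).size, "codes")
# The linear encode collapses the entire bottom stop into a single code;
# gamma encoding hands several of the highlight codes to the shadows.
```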
 

alanrockwood

This is a lot of words to describe the processing of a signal that is done before it is converted to what we know of (and commonly refer to) as the output of an ADC, e.g. Sony's DSD (Direct Stream Digital), which is effectively a 1 bit signal. It is not, however, a true ADC. All Sony did (and all you're really describing) is a fair amount of the stuff that you have to do anyway to get an ADC output, while skipping the final part that produces an ADC output. Sony used it as a marketing gimmick to try to counter everybody just going to 24/96 (or 24/192) audio via traditional ADC technology, but the results aren't really any better (and depending on the application are worse) than just doing the traditional high bit depth ADC route. Even drum scanners have ADCs that the PMTs feed into. The PMT just gives you a significantly lower noise signal.

Thanks for the comment Adrian.

Actually, what I discussed has very little relationship to Sony's DSD. At its heart, Sony's system takes a continuous analog signal and converts it to a series of pulses. What I described starts with a signal that is inherently pulsed and counts the pulses. To be more specific, a pulse counting system generally uses what is known as an amplifier/discriminator to detect the pulses and convert them directly to logic pulses. These pulses can then be counted. A pulse counting system of the sort I described cannot operate on a continuous signal, and if the signal is inherently pulsed then there is no point in using Sony's DSD to process it. Therefore, aside from the fundamental difference between the approaches, there is almost a perfect non-overlap between the types of applications suited to these two approaches to signal processing.

For digitizing the signal from an electron multiplier, if the photon flux is not too high then the performance of an ADC can never exceed, and can seldom equal, the performance of a pulse counting system, especially in noise performance and in the ease of establishing a true zero baseline.

It's even possible to design pulse counting systems that work at higher than normal count rates. It takes a special physical configuration. For example, one could adapt the approach used in one of my inventions, US patent number 5,777,326.

There are actually systems that combine pulse counting and ADCs, with pulse counting used at low signal levels and ADCs used at high signal levels. (Splicing those two ranges can be a little tricky, but we don't need to discuss that here.) Certain inductively coupled plasma mass spectrometers use this approach.

I don't know if drum scanners use ADCs or pulse counting. In the absence of other information I accept your assertion that they use ADCs. However, if I were to design a drum scanner starting with a blank sheet, I would start by evaluating the feasibility of basing it on a pulse counting system and only use an ADC as a fallback if pulse counting were not feasible for some reason. If an ADC were indicated due to high signal levels I might even use the hybrid approach described in the previous paragraph in order to get superior performance at low signal levels, i.e. for the densest parts of the film. It would increase the cost slightly, but electronics are cheap these days, so the cost increment would be modest.

I fear that one point I was trying to make may have gotten lost in my discussion, which is that, all else being equal, if the digitization footprint is small (e.g. the film area corresponding to a pixel in the scanned image is small) then one can get away with a small bit depth, and if the digitization footprint is large one would need a larger bit depth. All else being equal, if the linear dimension of a pixel (referred to the film plane) is doubled then it takes an extra bit to faithfully capture the signal. Consequently, a scanner with high spatial resolution can use a smaller word length. That doesn't mean you have to use a smaller word length, but you can use a smaller word length.
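
A quick numerical check of the extra-bit claim (a sketch with a made-up noise level, not anyone's actual scanner data): averaging 2x2 blocks of pixels, i.e. doubling the linear pixel dimension, halves the random noise, and a doubled signal-to-noise ratio is exactly one more bit of meaningful dynamic range:

```python
import numpy as np

rng = np.random.default_rng(2)

# A flat patch of "film" whose only content is grain of known strength.
patch = 0.5 + rng.normal(0.0, 0.02, (2048, 2048))

# Double the linear pixel dimension by averaging 2x2 blocks.
binned = patch.reshape(1024, 2, 1024, 2).mean(axis=(1, 3))

print("noise at full resolution:", round(float(patch.std()), 4))   # ~0.020
print("noise after 2x2 binning :", round(float(binned.std()), 4))  # ~0.010

# Halving the noise doubles the signal-to-noise ratio, so the larger
# pixel carries one more bit worth of meaningful dynamic range.
```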

Anyway, I have yet to see anyone demonstrate a case where film scanned in 8 bit mode on a scanner of high spatial resolution yields pictorially inferior results compared to film scanned at the same spatial resolution in 16 bit mode, provided that before any image manipulation of the 8 bit scan is done it is converted to 16 bits. If that workflow is not followed then one could have problems. For example, one should not do extreme image manipulation on the 8 bit image without first converting it to 16 bits. One should not even do smoothing (blurring) before conversion to 16 bits. I am fully prepared to be proven wrong by actual experimental results.
 
alanrockwood

Unless you're saving the raw sensor samples, yes, it happens. All colorspaces are gamma encoded. sRGB is ~2.4,. AdobeRGB is ~2.2 ProPhoto is 1.8, etc.... Gamma encoding takes discrete tone values from the highlights and gives them to the lower bit depth values so that you can encode 12-13 stops of DR into an 8bit file.

As far as the fundamental issue is concerned (i.e. whether 8 bit scanning is sufficient to capture all pictorially relevant information), the relevant question about gamma encoding is whether it suppresses the grain (in combination with all other noise sources) completely, or whether it leaves some grain (and/or other noise) in the image. If it leaves some grain (and/or other noise) then higher bit depth is not useful. If it suppresses the grain completely then higher bit depth would be needed to capture all pictorially relevant information.

If one has a grainless image and a noiseless scanner then that is a case where higher bit depth can be useful.
 
Alan Klein

One thing I have not considered in my posts in this thread is the effect of gamma encoding. I understand that scanners gamma encode the data, which is a non-linear process, rather than storing the pixels as a linear function of the raw data digitized from the sensor. A non-linear function could collapse the signal into fewer bits in certain parts of the intensity range. I don't know if this actually happens in practice, given the noise profiles. However, if noise (including grain) shows up in the gamma encoded data then I think the analysis I have been giving still applies. For gamma encoding to invalidate this analysis, it would require noise that was originally present (which may be several ADC bits wide) to collapse into a distribution that is zero ADC bits wide, in other words a single number rather than a distribution of several numbers. As I mentioned above, I don't know if this happens. This would best be answered by experiments, and so far no one has shown that 8 bits is insufficient, given the workflow I outlined in previous posts.
Your post reminds me of a question I haven't answered to my own satisfaction. Should you scan flat (0-255), or set the black and white points (levels) for the scan to the range of the picture elements, or slightly beyond those points? Some people have argued that setting the levels for the scan puts more data about the actual picture into the scan file. My sense is that the scan program is just applying the levels to the data that comes from the scan before writing its completed file, so you're not really getting more data. I've tried comparing setting the levels for the scan versus afterward in a post-processing program and haven't seen any differences. Any opinions on this?
 
Alan Klein

Anyway, I have yet to see anyone demonstrate a case where film scanned in 8 bit mode on a scanner of high spatial resolution yields pictorially inferior results compared to film scanned at the same spatial resolution in 16 bit mode, provided that before any image manipulation of the 8 bit scan is done it is converted to 16 bits. If that workflow is not followed then one could have problems. For example, one should not do extreme image manipulation on the 8 bit image without first converting it to 16 bits. One should not even do smoothing (blurring) before conversion to 16 bits. I am fully prepared to be proven wrong by actual experimental results.
If you have to convert 8 bits to 16 before doing manipulation, wouldn't you just be better off scanning at 16 bits? Don't you lose something during the conversion? Even if you don't, why bother with 8 to begin with?
 

Kodachromeguy

This comment assumes that grain is the only source of noise. If other noise sources are present, and if those noise sources are large enough to show up in an 8 bit scan, then the dithering from those sources will mean that 8 bits is enough to avoid banding, and going to more bits (e.g. 16) doesn't really help the image quality.
Alanrockwood, should we be treating grain in the emulsion as noise, or is it really part of the image? I think of noise as a random process which may occur at different times. But if you scan a black and white negative 50 times, each bit of grain will be in the same place in each file. Therefore it is not a random process; it's part of the image. I think grain is important to the overall look or emotion of the image; we want it there. That may be why film simulation programs which add pseudo grain just don't look right.
 

Adrian Bacon

At its heart, Sony's system takes a continuous analog signal and converts it to a series of pulses. What I described starts with a signal that is inherently pulsed and counts the pulses. To be more specific, a pulse counting system generally uses what is known as an amplifier/discriminator to detect the pulses and convert them directly to logic pulses. These pulses can then be counted. A pulse counting system of the sort I described cannot operate on a continuous signal, and if the signal is inherently pulsed then there is no point in using Sony's DSD to process it. Therefore, aside from the fundamental difference between the approaches, there is almost a perfect non-overlap between the types of applications suited to these two approaches to signal processing.

...sigh... And here I thought the output from a traditional ADC was referred to as PCM, aka Pulse Code Modulation... Oh, wait, that is what it stands for. All Sony's DSD does is chop the back end (the digital decimator) off a Delta-Sigma ADC and instead save the 1 bit sample stream directly. If a traditional ADC can sample a pulsed signal (last I checked it can) and output a digitized pulsed signal, then so can DSD, because it's the same front end as an ADC; it just doesn't go through the final step of converting the 1 bit sample stream into the byte-based numeric sample output that we call a sensor sample. You can actually take that 1 bit sample stream and do that conversion to 8 or 16 bit (or whatever bit depth you want) in software instead.

If you have a pulsed signal, how are you detecting the pulses? By sampling some signal at a very high sample rate and outputting a 1 or a 0 depending on if a pulse is happening? Like a 1 bit sampling system? Like the front end of pretty much every ADC ever? Hmm...

If you have a pulsed signal and you're counting the pulses to get to a single numeric sample value, that's effectively what a decimator does, yes? If not, then how are you getting the pulses into a form that all the downstream image processing will recognize (i.e. 8 or 16 bit samples)?

I fear that one point I was trying to make may have gotten lost in my discussion, which is that, all else being equal, if the digitization footprint is small (e.g. the film area corresponding to a pixel in the scanned image is small) then one can get away with a small bit depth, and if the digitization footprint is large one would need a larger bit depth. All else being equal, if the linear dimension of a pixel (referred to the film plane) is doubled then it takes an extra bit to faithfully capture the signal. Consequently, a scanner with high spatial resolution can use a smaller word length. That doesn't mean you have to use a smaller word length, but you can use a smaller word length.

To me, it sounds like you don't understand how ADCs work and are trying to use different words to describe what is happening. Decimating a signal always results in a higher bit depth result, but at a much lower output sample rate. In the realm of image processing, that means you can sample (or scan) at a very high spatial resolution (i.e. the sample rate) at a low bit depth. Once you apply some decimation (otherwise called downsampling, resizing the image down, etc.), yes, you can end up with a higher bit depth image, albeit at a much lower spatial sampling rate (i.e. less resolution, fewer MP, etc.). That's how it has always worked. All you're doing is using a different term (digitization footprint) to describe what is effectively the sample rate.

This is basically coming up with different terms to describe what has always happened, and is not new information. That is my point.
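
The decimation trade Adrian describes is easy to demonstrate. Here is a minimal numpy sketch (signal, rates, and dither are all made up; this is a drastic simplification of a delta-sigma front end) in which a 1 bit stream sampled at a high rate is decimated by 256, yielding roughly 9 bit samples at 1/256 the rate:

```python
import numpy as np

rng = np.random.default_rng(3)

# A slowly varying signal in [0, 1], sampled at a very high rate with a
# single comparator bit, using uniform noise as the dither.
n, factor = 65536, 256
signal = 0.5 + 0.3 * np.sin(np.linspace(0.0, 2 * np.pi, n))
one_bit = signal > rng.random(n)         # P(bit = 1) equals the signal

# Decimate: average each block of 256 one-bit samples into one output
# sample. 256 counts span 257 levels (~9 bits), at 1/256 the rate.
decimated = one_bit.reshape(-1, factor).mean(axis=1)
truth = signal.reshape(-1, factor).mean(axis=1)

print("distinct output levels:", np.unique(decimated).size)  # up to 257
print("rms error vs local mean:",
      round(float(np.sqrt(((decimated - truth) ** 2).mean())), 4))
```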
 

Adrian Bacon

whether gamma encoding suppresses the grain (in combination with all other noise sources) completely, or whether it leaves some grain (and/or other noise) in the image

No, because in order to view the stored result the opposite takes place: the inverse of the gamma encoding is applied to the sample to linearize it back out to 12-13 (or more) stops of DR. You still only have 256 discrete tone values, though, unless you apply some other signal processing, like a decimation, or a blur, etc. Then you can end up with more than 256 discrete tone values, but as soon as you apply the gamma encoding again and save it back out to 8 bits you lose that information. Those are quantization errors, and the more often you do that, the more it degrades the signal. This is why it's best to just stay in 16 bit land if you're going to do a lot of post processing. 8 bit output is fine if you're just going to store it and look at it, but you still want to sample at a much higher bit depth to get that DR, so that when it's gamma encoded the usage of your 8 bits is maximized. If your signal was never more than 8 bits to begin with, then when the gamma encoding is applied you'll lose a whole pile of discrete tone values and have a really degraded sample. In that case it'd be best to just store the raw 8 bit samples, but if the density of the negative exceeds 2.4 log you'll still have a DR problem where the densest part of the negative will either be very noisy or just clip off, because you only have 8 bits.
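
A minimal sketch of the quantization loss described above, assuming a plain gamma-2.2 transfer curve and a made-up edit applied in linear space; each 8 bit re-encode can only merge tone values, never recover them:

```python
import numpy as np

def encode8(linear, gamma=2.2):          # assumed plain gamma 2.2
    return np.round(255 * np.clip(linear, 0, 1) ** (1 / gamma)).astype(np.uint8)

def decode(codes, gamma=2.2):
    return (codes / 255.0) ** gamma

img = encode8(np.linspace(0.0, 1.0, 4096))   # full-range ramp, 256 codes
for i in range(5):
    linear = decode(img) * 0.98 + 0.01       # a mild, made-up linear edit
    img = encode8(linear)                    # re-quantize to 8 bits
    print(f"after round trip {i + 1}: {np.unique(img).size} distinct tones")
# Each pass can only merge codes, never split them, so the count of
# distinct tone values ratchets downward; that is quantization error
# piling up, exactly why heavy editing is best done at 16 bits.
```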
 
alanrockwood

Alanrockwood, should we be treating grain in the emulsion as noise, or is it really part of the image? I think of noise as a random process which may occur at different times. But if you scan a black and white negative 50 times, each bit of grain will be in the same place in each file. Therefore it is not a random process; it's part of the image. I think grain is important to the overall look or emotion of the image; we want it there. That may be why film simulation programs which add pseudo grain just don't look right.
I discussed this question somewhat already, but the thread is long, so it would be easy to miss. In an individual negative the grain is deterministic because it is frozen in place. If you scan the same negative twice the grain will be the same in both scans, but it also has the properties of noise because there is no way to predict the locations, shapes, and sizes of the grain features beforehand.

Shot noise is truly random: if you repeat the scan, a given noise feature derived from shot noise in the original scan will not repeat itself in the repeated scan. Shot noise comes from the random arrival of photons at the detector.

Electronic noise can be either random or deterministic, depending on its origin. For example, thermal noise (also known as Johnson noise) is truly random. On the other hand, 60 Hz pickup is deterministic, or maybe I should say semi-deterministic, because it depends on the phase of the power grid when the experiment is repeated compared to its phase when the first experiment was performed. In any case it is predictable, so it's deterministic.

Let's look at the problem of grain, starting at the very beginning, before the picture is even taken. For this part of the discussion we will assume that other noise sources, such as shot noise, don't exist.

When you load the film you have no idea where the grains of silver halide are in the film. You take the photo and develop the negative. The arrangement of silver particles in the developed film has the properties of noise because they are randomly arranged, and you can't predict beforehand where they will appear in the image. The image looks grainy. Furthermore, there is no algorithm that will remove the grain without error. (One might think that smoothing/blurring the image would qualify, but it won't. For example, if the image is of the sky and a bird happens to be in the sky when the image is made, then blurring the image will alter the image of the bird.) If you load another piece of film of the same type (let us say FP4 from the same lot number) you can't predict where the grains will appear, and they will in general not be in the same positions as in the original piece of film. This all implies that, from the point of view of capturing an image of the object, grain is noise.

Once an image is shot, the grain is frozen in and it becomes part of that specific image. For example, if you could do perfect scans of the same negative, the grain would always show up in the same pattern.

From the point of view of banding, the bit depth of the image does not matter as long as the number of bits in the digital word is sufficient to capture the noise. If not enough bits are present to capture the noise then banding can occur under extreme manipulation of the image. If there are enough bits to capture the noise then using a word with more bits doesn't add anything useful to the image.

Nobody will be able to look at a grainy photo scanned with 8 bits or 16 bits or 32 bits or even 64 bits and tell the difference between them, provided that the grain is pronounced enough to be well captured in an 8 bit scan. This even applies if some rather extreme image manipulations are performed on the image after it is scanned. If the negative were truly grain free, or if the grain were so fine that it doesn't show up in an 8 bit scan, then this analysis would not apply. However, I don't think I have ever seen a truly grain free scan of a photographic film. Have you? I have never managed to produce such a scan. More commonly people complain that there is too much grain in scans.

None of this is new information. The information is out there. It's just not known as widely as it should be. In fact I will hazard a guess that most photographers don't know it, even those who scan a lot.

I have already mentioned that I am happy to be proven wrong if someone produces experimental results that contradict what I wrote above. However, the experiment must follow the proper workflow to be valid. I have explained the proper workflow several times.
 

alanrockwood

I've done a little more experimentation. I scanned a Tmax 100 negative and selected 8 bit gray scale for the storage mode. The scanner was a Canon FS4000US in 4000 dpi mode. The image was the leader part of the film, including exposed and unexposed parts of the leader. It's very dusty, because there's not much point in protecting a leader from dust, but that is irrelevant to the demo. Here's what the scanned slide looks like; it shows the crop window that I used for subsequent operations.

[Image: tmax 8 bit scan full slide showing showing crop area.JPG]


All of the following images are from the cropped area. The first image below shows the cropped rectangle with no operations performed on the scan.

[Image: tmax crop 8 bit scan no adjustment.JPG]


In the image above I'm showing a histogram adjustment window, but no adjustments have been made, as indicated in the percentage boxes. Note that there is a very subtle gradient in the image.

I then did the following adjustments in 8 bit mode using PhotoLine software. First I did a 5 pixel Gaussian blur, followed by a histogram adjustment using limits of 31% to 41%, as seen in the histogram window. This workflow was intended to show that banding can occur if an image is scanned in 8 bit mode and then followed by the wrong workflow. As you can see, this workflow did produce a lot of banding.

[Image: tmax crop 8 bit scan 5 pixel Gaussian blur in 8 bit mode followed by 31 to 41 histogram.JPG]



Then I did a slightly different workflow. I started with the same cropped 8 bit image, but this time I first converted it to 16 bits. Then I smoothed with a 5 pixel radius followed by a 31% to 41% histogram adjustment, exactly what I did in the other workflow except that I first converted the 8 bit image to 16 bits. As you can see, there is pixelation because it's a small crop window, but there is no banding. The gradients are nice and smooth.

[Image: tmax crop 8 bit scan converted to 16 bits then smoothed and histogramed.JPG]



I think this pretty convincingly shows that the workflow I proposed worked like a charm to eliminate banding, even though the scan was only 8 bits. It also demonstrates the risk of using the wrong workflow.
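
For anyone who wants to replicate this without a scanner, here is a minimal Python sketch of the two workflows (illustrative only: a synthetic grainy "scan", with scipy's gaussian_filter and a sigma of 5 standing in for the 5 pixel blur). The key mechanism is that gaussian_filter preserves the input dtype, so blurring an 8 bit image silently re-quantizes the smooth result back to 8 bit steps, while promoting to 16 bits first preserves the intermediate precision:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(4)

# Synthetic stand-in for an 8 bit scan of nearly uniform grainy film:
# a faint gradient plus grain spanning a few 8-bit steps (made up).
grad = np.tile(np.linspace(0.31, 0.35, 512), (256, 1))
scan8 = np.round(np.clip(grad + rng.normal(0, 3 / 255, grad.shape), 0, 1)
                 * 255).astype(np.uint8)

def blur_then_stretch(img, lo=0.31, hi=0.41):
    # gaussian_filter preserves the input dtype, so blurring an 8 bit
    # image re-quantizes the smooth result to 8 bit steps.
    blurred = gaussian_filter(img, 5)
    x = blurred.astype(np.float64) / np.iinfo(img.dtype).max
    return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

banded = blur_then_stretch(scan8)                          # wrong workflow
smooth = blur_then_stretch(scan8.astype(np.uint16) * 257)  # convert first

print("levels after editing in 8 bits   :", np.unique(banded).size)  # few
print("levels after converting to 16 bit:", np.unique(smooth).size)  # many
```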

I did some other things with this, but to avoid posting too many images I'm just going to describe the results. First, small crops of both the unexposed and exposed portions of the leader had distributions that were several bits wide. This is the condition that assures that banding does not occur.

Second, if I zoom in on a histogram to look at the distribution, the histogram looks the same regardless of whether it came from the 8 bit image or the converted 16 bit image. There are the same number of peaks and they have the same magnitude relationships. Note that the peaks are one bit wide in either case. However, the peaks in the converted 16 bit image, while still one bit wide, are separated by runs of empty (zero-count) values. This is as it should be from theoretical considerations.

Third, I used Tmax 100 film for this because it is the finest grained film that I have available, and fine grain provides the most severe test of what I am claiming.

Fourth, I did this test on the most extreme regions of a negative because that provides the most severe test, and in particular it shows that gamma encoding of the saved image did not invalidate the concept. Gamma encoding is most likely to override my assertions at one of the density extremes. It doesn't really matter which one, because I tested both extremes.

Fifth, it is highly unlikely that intermediate densities would give a different result.

Sixth, I also looked at histograms of crops from the film holder of the scanner, which showed up in the scans because I set the scan window large enough to include the frame of the film holder. These crops should be perfectly black, but they were not, and they also showed distributions that were several bits wide. I don't have a definitive explanation for this, but it is probably some combination of scattered light inside the scanner, ambient light leaking into the scanner, and electronic noise in the scanner. It is worth mentioning that one advantage some users have seen for the Canon scanner over a Nikon scanner is that the Nikon shows slightly more noise in the high density parts of the film. Otherwise they have basically the same scan quality, although the Canon scanner is much slower.

Seventh, scanning in 16 bit mode did not result in any improvement over scanning in 8 bit mode, provided that the correct workflow is followed. In particular, for practical purposes there was no improvement in dynamic range, because dynamic range is limited by the noise in the images, not by the bit depth of the word used to store the results. Also, dynamic range depends on the effective size of the viewing window: if the viewer's vision does not resolve the individual pixels then an effective signal averaging occurs, such that the effective dynamic range is greater than that of the individual pixels. This is actually very well known. It's why half-tone printing works, even though at a fine level the paper is either black or white, which is effectively one bit of dynamic range at that small spatial scale.
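
The half-tone analogy is easy to check numerically. Here is a tiny sketch (all sizes made up) that dithers a gradient to pure black and white dots and then averages over a 16x16 viewing window the eye cannot resolve:

```python
import numpy as np

rng = np.random.default_rng(5)

# Halftone-style demo: dither a gradient to pure black/white dots, then
# average over a 16x16 "viewing window" the eye cannot resolve.
grad = np.tile(np.linspace(0.0, 1.0, 1024), (1024, 1))
dots = grad > rng.random(grad.shape)            # 1 bit per dot

w = 16
viewed = dots.reshape(1024 // w, w, 1024 // w, w).mean(axis=(1, 3))
print("tone levels seen at viewing distance:", np.unique(viewed).size)
# Up to 257 levels (~8 bits) emerge from dots that individually carry 1 bit.
```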

I also did an experiment on a different piece of film leader. Part of the scan was dense film and part was just scanning air. The light part (i.e. air) showed a little bit of streakiness. This is not going to affect a film scan, because even in an unexposed region the film is dense enough and grainy enough that the streaks won't be seen. I'm not sure of the origin of the streakiness, but it likely has something to do with slight mismatches in the dark current of the sensor elements.

Please, other interested users of Photrio, do some experiments and post the results. As I have already mentioned, I am prepared to have my assertion proven wrong by actual experimental results, but the experiments must follow the correct workflow. Otherwise the experiments would be testing an assertion that I did not make.
 
Alan Klein

PS The fourth picture (8 bit edited) doesn't look like the second sample (16 bit original). It looks worse. So I don't know what you proved except that 8 and 16 bit scans look different even with editing.
 

alanrockwood

PS The fourth picture (8 bit edited) doesn't look like the second sample (16 bit original). It looks worse. So I don't know what you proved except that 8 and 16 bit scans look different even with editing.
There was no 16 bit original used for any of the images shown. There was only one scan behind all of the images I posted, and it was an 8 bit scan.
 
alanrockwood

Why not scan in 16 bit and avoid this rigamarole?

Like I said several times already, there is nothing wrong with scanning with 16 bits, provided you are satisfied with doubling the storage space used by your scan originals with no compensating benefit.

Also, one reason I am engaging this topic is that I was addressing the underlying issue from the original post. That user asked whether it's possible to scan in 16 bits with SilverFast, but the issue underlying the question is whether 8 bits is enough, even though it was not explicitly framed in those terms.

Also, this thread is not the first in which the issue of 8 bit vs. 16 bit scanning has come up. There is an almost universal recommendation to scan in 16 bits, and the usual reason given is to avoid the possibility of banding. I am trying to debunk that notion by showing, through both theory and experiment, that it is seldom if ever necessary to scan in 16 bits if the proper workflow is followed after the scan, because the grain in the scans provides a dithering function that keeps banding from developing, even under extreme image manipulation.

This analysis does not apply if grain (or other noise) is so fine as to not show up in the scan, which could happen if the film is super-fine grained or if the scanner isn't good enough to capture grain. Note that it is not necessary for the scanner to capture all of the grain information; it is sufficient if it captures just enough grain to provide effective dithering of the scan.

I have never seen any actual experimental results that disprove my point. Perhaps it could happen with some flatbed scanners that don't have quite enough resolution to capture the grain structure.
 
Alan Klein

There was no 16 bit original used for any of the images shown. There was only one scan behind all of the images I posted, and it was an 8 bit scan.
Then what are we comparing? There should also be a 16 bit scan there to see how it matches or doesn't match the fourth 8-bit result. The fourth picture shows a lot of blotches. Will a 16 bit scan look the same?
 

alanrockwood

Then what are we comparing? There should also be a 16 bit scan there to see how it matches or doesn't match the fourth 8-bit result. The fourth picture shows a lot of blotches. Will a 16 bit scan look the same?
I was demonstrating that with the correct workflow you don't get banding when doing an 8 bit scan. The possibility of banding is the usual reason for recommending a 16 bit scan over an 8 bit scan if extreme image manipulations are to be performed on the image after scanning. That is why I focused on that issue.

I will do some more scans to answer your question about whether 8 bit scans look the same as 16 bit scans, but first a word of warning: even two successive 16 bit scans are very likely to be slightly different. I have seen that. Many factors can contribute to the slight irreproducibility between scans, but two of the more likely ones are the repositioning of the film holder between scans and slight variations in the output of the scanner's light source. The differences are small but not hard to find if you look hard enough. This applies to my Canon FS4000US scanner, and even more to my Epson V750 scanner. In the case of the Epson, the irreproducibility of the positioning of the scanner head was bad enough that it caused IR dust removal to not work reliably.
 
Alan Klein

I was demonstrating that with the correct workflow you don't get banding when doing an 8 bit scan. The possibility of banding is the usual reason for recommending a 16 bit scan over an 8 bit scan if extreme image manipulations are to be performed on the image after scanning. That is why I focused on that issue.

I will do some more scans to answer your question about whether 8 bit scans look the same as 16 bit scans, but first a word of warning: even two successive 16 bit scans are very likely to be slightly different. I have seen that. Many factors can contribute to the slight irreproducibility between scans, but two of the more likely ones are the repositioning of the film holder between scans and slight variations in the output of the scanner's light source. The differences are small but not hard to find if you look hard enough. This applies to my Canon FS4000US scanner, and even more to my Epson V750 scanner. In the case of the Epson, the irreproducibility of the positioning of the scanner head was bad enough that it caused IR dust removal to not work reliably.
Do two scans one right after the other, the first at 8 bits and the second at 16 bits. Don't move the film holder. The light output isn't going to change if you don't shut off the scanner and do one right after the other. Use the same shot. Use a chrome, not a negative, to eliminate the conversion issue. Then do your magic on the 8 bit file. Then show us both the 8 bit and 16 bit results so we can compare one against the other.
 

alanrockwood

Do two scans one right after the other, the first at 8 bits and the second at 16 bits. Don't move the film holder. The light output isn't going to change if you don't shut off the scanner and do one right after the other. Use the same shot. Use a chrome, not a negative, to eliminate the conversion issue. Then do your magic on the 8 bit file. Then show us both the 8 bit and 16 bit results so we can compare one against the other.
I am in the process of doing the scans right now and will report the results a little later.

However, first I need to comment on the position of the film holder. The scanner moves the film holder during the scan. During each subsequent scan the scanner repositions the film holder, nominally to start at the same position, but in actual fact the start positions are not perfectly reproducible, even if the next scan is started within a few seconds from the end of a scan.

I'm going to apologize in advance for my next post, which is going to contain the scans, because there will be a lot of images to post in order to see the comparative results.

Also, to preview the scanning scheme: I first scanned in 16 bits, then in 8 bits, then in 16 bits. These were done one right after another, without changing anything other than the bit depth for saving the scans. I intentionally sandwiched the 8 bit scan between two 16 bit scans.

When I show the results they will be in the following order: a 16 bit scan, another 16 bit scan, an 8 bit scan converted to 16 bits, and the same 8 bit scan not converted to 16 bits.

I have performed the scans and I am about halfway through processing them and doing the screen snips (using a snipping tool) to capture the results.

Also, to give a further preview comment: the two 16 bit scans are slightly different. The differences are very subtle, but it is possible to see them under extreme image manipulation. This slight irreproducibility happens even if two 16 bit scans are done back to back without sandwiching an 8 bit scan between them. I'm not going to show the results of those experiments; you will just have to take my word for it.

By the way, I am a retired PhD scientist. I am pretty familiar with the concept of experimental irreproducibility and also ways of minimizing it.

I will not be using a chrome. I will be using a negative scanned in image mode without conversion to a positive (the same negative I used in the last set of images I posted). Part of the reason is that I am not sure I have a suitable chrome for the experiment, and I don't want to go searching through boxes of slides to find one. However, as I recall I always see at least a little grain when I scan a chrome, and to the extent that this holds true the same theoretical arguments apply.

Stand by while I finish working on these files. Also, if you happen to have some chromes then maybe you could find a suitable one and scan it on your Epson flatbed scanner. However, keep in mind that the effective spatial resolution of the Epson scanners is not as high as that of the film scanner I am using, so there is no guarantee that the Epson scanner will capture sufficient grain to mitigate the problem of banding.
 
alanrockwood

It took a little longer than I expected to get back to this because I had an interruption, but now I have the results that Alan Klein requested... well, more or less what he requested. As indicated in my last post, I scanned a Tmax negative, not a chrome.

I did three scans in the following order: a 16 bit scan, followed by an 8 bit scan, and then another 16 bit scan. The three scans were made under identical conditions except for the bit depth of the saved images, and they were done sequentially with little delay between them. I set the scanner to crop in the vertical direction because I don't need to show all of the slide to discuss the topic.

I apologize in advance for posting so many images, but in order to avoid questions about the experiment I need to post quite a few images.

The first image is a 16 bit scan. I'm only showing one of the three scans because at this scale they look virtually identical.

[Image: 2022-03-12-0009 - 16 bit scanned saved as jpg.jpg]



Next I show four images. They are taken from a zoomed-in portion near the middle of the slide, the exact same region in each case. I am showing them in the following order: 1) a 16 bit scan, 2) the other 16 bit scan, 3) the 8 bit scan after conversion to a 16 bit image, and 4) the same 8 bit scan left as an 8 bit image. If you look really carefully you might notice some very subtle differences between the images, but the differences are very hard to see. (Note: you will see absolutely no difference between the two versions of the 8 bit scan, the original and the version converted to 16 bits.) For shorthand I am going to refer to these four images collectively as Group 1, and they form the starting point for what follows.

[Image: 2022-03-12-0009 - 16 bit zoomed in.jpg]

[Image: 2022-03-12-0011 - 16 zoomed in.jpg]

[Image: 2022-03-12-0010 - 8 bit converted to 16 bit zoomed in.JPG]

[Image: 2022-03-12-0010 - 8 bit zoomed in.JPG]




The next four images are the same as the Group 1 images, except that a histogram correction was applied, the exact same correction on all four. I expanded the horizontal scale by setting the histogram region limits to 28% and 34%. This increases contrast and emphasizes the grain in the scans, as well as any visible differences between the images. Now it's easy to see that the two 16 bit scans, while very similar, are actually a little different from each other. For example, compare the non-white pixels toward the left side of the images. Also, compare the dark splotches. Take particular note of the third image. For this one I started with the 8 bit scan, converted it to 16 bits, and then applied the histogram correction. It differs from either of the two images that were scanned in 16 bit mode, but the differences are very small and, at least to my eye, it is no more different from the 16 bit scans than the two 16 bit scans are from each other. The fourth image is the 8 bit scan with the histogram correction applied. It looks exactly like the third image, as it must from fundamental considerations.


[Image: 2022-03-12-0009 - 16 bit zoomed in and histogramed.jpg]

[Image: 2022-03-12-0011 - 16 zoomed in and histogramed.jpg]

[Image: 2022-03-12-0010 - 8 bit converted to 16 bit zoomed in and histogramed.JPG]

[Image: 2022-03-12-0010 - 8 bit zoomed in and histogramed.JPG]



Now it gets more interesting. For the next four images I first applied a Gaussian blur (5 pixel radius) and then applied the same histogram correction as in the previous four images (limits 28% and 34%). The first three are almost boringly similar, even though the third one started out as an 8 bit scan. But the fourth one: wow, look at the banding! That's the scan that started out as an 8 bit scan and stayed 8 bits throughout the processing chain. These images show yet one more time that if there is grain in an 8 bit scan, then converting the image to 16 bits before applying further processing avoids banding, but if you don't convert to 16 bits there is a risk of banding.

[Image: 2022-03-12-0009 - 16 bit zoomed in and blurred and histogrammed.jpg]

[Image: 2022-03-12-0011 - 16 zoomed in and blurred and histogramed.jpg]

[Image: 2022-03-12-0010 - 8 bit converted to 16 bit zoomed in and blurred and histogramed.JPG]

[Image: 2022-03-12-0010 - 8 bit zoomed in and blurred and histogramed.JPG]



So what does all this mean? It means that if your scan shows grain structure then you don't need more bits to capture all of the pictorially significant information in the image. However, if doing extreme image manipulation it can be beneficial to convert the image to a wider word, such as from 8 bit to 16 bit form, before doing any of the manipulation.
 
Adrian Bacon

In an effort to minimize subjectivity and maximize objectivity, here are 8 and 16 bit scans of two different step wedges. One step wedge is in 0.05 log increments over 41 steps, roughly 2.05 log total density range, about what you'd expect for a "normal" B&W negative. The other is in 0.1 log increments over 41 steps, for a total of 4.1 log density range. Each has been scanned to a raw DNG in both 8 bit scanning mode and 16 bit scanning mode on an Epson V850 Pro with VueScan.

The file is here: http://m.avcdn.com/sfl/8vs16bit_step_wedges_revised.zip

Anybody can pull it down and do anything they want with the files to test things out for themselves.

A few observations:

I was demonstrating that with the correct workflow you don't get banding when doing an 8 bit scan. The possibility of banding is the usual reason for recommending a 16 bit scan over an 8 bit scan if extreme image manipulations are to be performed on the image after scanning. That is why I focused on that issue.

Actually, you want to scan with 16 bits not really to avoid banding (that is one reason, but not the primary one), but to get more dynamic range. Plain and simple. Take the 8 bit and 16 bit scans of either of the step wedges, overlay them, then toggle between them. It's painfully obvious that the 16 bit scan captures more dynamic range and more discrete tone values than the 8 bit scan, even when viewing both on an 8 bit display, and even with the step wedge that has a 2.05 density range, which is well inside what 8 bits should be able to capture. The bottom 3-4 bits are mostly noise, so you just don't get that many usable discrete tone values from an 8 bit scan.

This also translates to a superior 8 bit file if you scan at 16 bits and then save at 8 bits, especially if saving in a color space that has gamma encoding, so I'd be leery of saying an 8 bit scan is all you need. If you want to save your files at 8 bits, feel free to; though if you're going to be doing a lot of edits or manipulations it's best to have the 16 bit data, especially for color work. Black and white is more tolerant, and posterization and banding won't show up as quickly, but with color work 8 bit falls apart pretty quickly.

After compression (lossless of course), the size difference between 8 and 16 bit tif files isn't that large, so there's almost no practical impetus to save at 8 bits, as it doesn't cut your disk usage in half if you use compression when saving the tif files.

*EDIT* - Revised the zip file, updated the link.
 
alanrockwood

In an effort to minimize subjectivity and maximize objectivity, here are 8 and 16 bit scans of two different step wedges. One step wedge is in 0.05 log increments over 41 steps, roughly 2.05 log total density range, about what you'd expect for a "normal" B&W negative. The other is in 0.1 log increments over 41 steps, for a total of 4.1 log density range. Each has been scanned to a raw DNG in both 8 bit scanning mode and 16 bit scanning mode on an Epson V850 Pro with VueScan.

Actually, you want to scan with 16 bits not really to avoid banding (that is one reason, but not the primary one), but to get more dynamic range. Plain and simple. It's painfully obvious that the 16 bit scan captures more dynamic range and more discrete tone values than the 8 bit scan, even when viewing both on an 8 bit display.

Hi Adrian, something is wrong with your 410 tiff files: both are 16 bits, so I have not attempted to evaluate the 410 files.

So I had a look at the 205 files. As far as the dynamic range in these files is concerned, the question is whether one can distinguish between the steps over the entire range of the step wedge. As shown below, the steps are well distinguished at both ends of the range in both the 8 bit and the 16 bit files.

I am attaching files demonstrating this. First I compare the steps at the dark end of the 205 step wedge, which is the more challenging end. The first thing I did was convert the 8 bit tiff file to 16 bits, in accord with the workflow I have specified throughout this discussion. Then I applied a histogram correction to the dark end of the two files. The amount of correction was almost but not quite the same for the two files: I gave each enough correction to distinguish step 41 from step 40 while keeping the gradient almost the same between the 8-to-16 bit file and the 16 bit file. Here are the images, starting with the 8-to-16 bit file.

[Image: wedge8_205_dark_side.JPG]

[Image: wedge16_205_dark_side.JPG]



The 40th step is definitely distinguishable from the 41st step in both cases.

Next, I do the same thing for the light end of the step wedge.

[Image: wedge8_205_light_side.JPG]

[Image: wedge16_205_light_side.JPG]


Again, the steps are well-differentiated at this end of the step wedge.

There's no need to do this for the middle of the step wedge because those steps are distinguishable without fiddling with the files further.

The upshot is that, as far as dynamic range is concerned, there is no evidence in these two files that the 16 bit file contains more dynamic range than the 8 bit file.

Note: I didn't try this experiment on the 8 bit file without converting it to 16 bits first because that would violate the workflow that I have specified.
 

alanrockwood

... After compression (lossless of course), the size difference between 8 and 16 bit tif files isn't that large, so there's almost no practical impetus to save at 8 bits, as it doesn't cut your disk usage in half if you use compression when saving the tif files.

I actually tested this idea on one file and reported the results in post number 65 of this thread. The 8 bit scan shrank by 35.8% upon compression. The 16 bit file actually increased in size by 15.5% upon "compression."

I decided to do the same thing with your 205 step wedge tiff files. I loaded both into PhotoLine. There are several lossless compression options for saving tiff files from PhotoLine. Who has time to try them all? So I tried the first compression format in the menu, which was LZW. Here are the results. The 8 bit tiff went from 344 KB to 87.7 KB; the compressed file is about 25% the size of the uncompressed file, a very useful amount of compression. Most likely it did well because there is very little detail in the step wedge file. The 16 bit tiff went from 674 KB to 486 KB; the compressed file is about 72% of the uncompressed file. That's a useful amount of compression, but not all that impressive next to the 8 bit file, whose compression was about three times better. Looked at another way, the size of the compressed 8 bit file is only 18% that of the compressed 16 bit file, and the 16 bit file does not contain more useful pictorial information than the 8 bit file (assuming one can even use the term "pictorial" for a step wedge).
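
For anyone who wants to repeat the compression comparison on their own files, here is a minimal Python sketch using Pillow (assuming a Pillow build with libtiff support for compressed TIFF writing; the image content and file names are made up, and as the numbers above show, the savings depend entirely on content):

```python
import os
import numpy as np
from PIL import Image

rng = np.random.default_rng(6)

# Made-up grainy gray "scan"; real savings depend entirely on content.
gray = np.clip(rng.normal(128, 8, (2000, 3000)), 0, 255)

# Save the same data as 8 bit and 16 bit LZW-compressed TIFFs.
Image.fromarray(gray.astype(np.uint8)).save(
    "test8.tif", compression="tiff_lzw")
Image.fromarray((gray.astype(np.uint16)) * 257).save(
    "test16.tif", compression="tiff_lzw")

for name in ("test8.tif", "test16.tif"):
    print(name, os.path.getsize(name), "bytes")
```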
 