Implicit in what I wrote is that the number of bits sufficient to adequately capture a signal depends on the footprint of the capture window. The footprint may be measured in time increments or area increments, depending on the application. For example, the sampling footprint could be area increments if the problem is film scanning, or time increments if one is doing audio recording. There is always an increase in relative noise as the sampling footprint becomes smaller, and since there is no point in digitizing finer than the noise, the smaller the sampling footprint is, the fewer the bits needed to capture the signal.
I spent a fair amount of my career dealing with instrumentation that acquires signals through detectors that generate pulses, sometimes electron multipliers (to detect ions in a mass spectrometer) and sometimes photomultipliers (to detect photons in an optical experiment). From the point of view of signal acquisition there's no difference between electron multipliers and photomultipliers, so let's frame most of the discussion in terms of photomultipliers unless otherwise noted.
When a photon hits a photomultiplier it generates a pulse of electrons at the output end of the photomultiplier. The pulse is typically a few billionths of a second, or even less than a billionth of a second in some devices, and the pulse typically contains something like a million electrons. (It can be more in some devices or less in others.) Those signal levels are low enough that photons can be counted individually, as long as the light flux hitting the detector is less than, let us say, about a hundred million photons per second. If the acquisition footprint is a few billionths of a second then all that is needed to acquire all of the data in the signal is one bit. If the intent is to aggregate the data into larger time increments, let us say time increments of ten billionths of a second, then one can sum the individual counts, and in that case a larger word size is needed to hold the summed result. But the word size can still be pretty small if the aggregated time windows are small. This is not a contrived example. The numbers given are roughly on the scale of what one would see in a time of flight mass spectrometer, and some time of flight mass spectrometers acquire signal through a counting mode of data acquisition.
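To make the word-size arithmetic concrete, here is a minimal sketch (the flux of 1e8 photons per second and the 10 ns aggregation window are the rough numbers from above, not measurements) that simulates Poisson-distributed single-photon counts and asks how many bits the summed counts actually need:

```python
import numpy as np

rate = 1e8            # assumed photon arrival rate, photons per second
bin_width = 10e-9     # aggregation window: ten billionths of a second
n_bins = 1_000_000

# Each 10 ns window sees a Poisson-distributed number of single-photon
# pulses; at 1e8 photons/s the mean is just 1 count per window.
mean = rate * bin_width
rng = np.random.default_rng(0)
counts = rng.poisson(mean, n_bins)

max_count = int(counts.max())
bits_needed = max(1, int(np.ceil(np.log2(max_count + 1))))
print(f"max counts seen in a 10 ns window: {max_count}")
print(f"word size needed to hold the sums: {bits_needed} bits")
```

Even after aggregating, the word size stays small (a handful of bits), which is the point: at the raw nanosecond level a single bit suffices, and the bit depth only grows as the footprint grows.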
In the case of film scanning, there are actually scanners that use photomultipliers to detect light, such as drum scanners. I don't know if any of them use photon counting to acquire the signal. However, if they are doing pulse counting then at the lowest level they are effectively doing one-bit conversion of the analog signal to a digital form. This would be occurring on a nanosecond time scale. Then they would be aggregating those counts into larger time windows (which would map onto larger spatial windows on the film plane), and for those larger time windows more bits would be needed, and the bigger the aggregated time windows are the more bits are needed to hold the counts. However, at the lowest level, a single bit is all that is needed, or in other words, that's all the dynamic range that is needed.
What about higher bits when scanning film? Suppose, for example, that I wanted to build a scanner with a dynamic range of sixteen million on a linear scale, or 7.2 on a log scale. I could do that. It would require a 24 bit A/D converter. Those exist. (Well, maybe I couldn't design it, but a good engineer could do it.) But it would be meaningless, because most of the resolution of the A/D converter would be wasted on getting a highly accurate characterization of the noise. In fact, for film scanning a 16 bit A/D converter is wasting bits (i.e. wasting dynamic range) by acquiring really accurate numbers for the noise (i.e. grain and other forms of noise) when that doesn't add anything at all to the pictorial information in the film. In most cases (I will say almost all cases, and probably all cases of practical interest) 8 bits is good enough to scan a black and white negative because you are already well into the noise level (i.e. grain and other forms of noise) at that point.
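The arithmetic linking bit depth to log dynamic range is easy to verify; a quick sketch:

```python
import math

# dynamic range of an ideal N-bit converter, on linear and log scales
for bits in (8, 16, 24):
    levels = 2 ** bits
    print(f"{bits:2d} bits -> {levels:>10,} levels, "
          f"log dynamic range = {math.log10(levels):.1f}")
```

24 bits gives 2^24 = 16,777,216 levels, i.e. log10(16,777,216) = 7.2, matching the figures in the text; 8 bits corresponds to 2.4 log units.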
I demonstrated this principle already using acquired data, at least in a small way, and I explained the theoretical foundation for this. I did not cover all possible cases. I just showed a couple of them. However, I have yet to see anyone demonstrate that 8 bits are insufficient to acquire an image from a conventional black and white negative if they use the work flow I described. Would someone please do some experiments to try to demonstrate otherwise? Use Tmax 100 film or Acros because those are the films that would be most likely to show that my assertion fails. I am perfectly willing to be proven wrong.
Now, if there is a film that is the next thing to grainless and which has a very high density range (Velvia? or maybe microfilm processed in a pictorial mode?) then all bets are off. I'm not saying that 8 bits would definitely be insufficient, but only that it might not be sufficient. And even in that case it's only going to matter if one plans to do extreme image manipulation after the scan is acquired. Otherwise 8 bits is plenty because that already exceeds the tonal gradation that the eye can detect.
Lest anyone misunderstand my position, I am not saying that there is anything technically wrong with 16 bit scans, that is unless you like to waste hard disk storage space by getting ever finer characterization of the noise in your photos, but in almost all (and possibly all) cases it's not really necessary to use more than 8 bits for storing the raw scans. In fact, 8 bits is enough to store a final post-manipulation image as well because that already exceeds the tonal gradation that the eye can detect. It is only in the intermediate stage of image manipulation that more than 8 bits serves a useful purpose.
This is a lot of words to describe the processing of a signal that is done before it is converted to what we know of (and commonly refer to) as the output of an ADC, e.g. Sony's DSD (direct stream digital), which is effectively a 1 bit signal. It is not, however, a true ADC. All Sony did (and all you're really describing) is a fair amount of the work that you have to do anyway to get an ADC output, just without the final step that produces an ADC output. Sony used it as a marketing gimmick to try to counter everybody going to 24/96 (or 24/192) audio via traditional ADC technology, but the results aren't really any better (and, depending on the application, worse) than just taking the traditional high bit depth ADC route. Even drum scanners have ADCs that the PMTs feed into. The PMT just gives you a significantly lower noise signal.
Unless you're saving the raw sensor samples, yes, it happens. All colorspaces are gamma encoded: sRGB is ~2.4, AdobeRGB is ~2.2, ProPhoto is 1.8, etc. Gamma encoding takes discrete tone values from the highlights and reallocates them to the darker values, so that you can encode 12-13 stops of DR into an 8-bit file.
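As a rough illustration of that reallocation (a pure power-law gamma of 2.2 is used here as a stand-in for the real transfer curves, which also have linear toe segments), one can count how many 8-bit codes land below a point 12 stops down from full scale under linear versus gamma encoding:

```python
GAMMA = 2.2  # simple power-law stand-in for a real transfer curve

def encode(linear):
    """Map linear light in 0..1 to an 8-bit gamma-encoded code value."""
    return round(255 * linear ** (1 / GAMMA))

# A deep shadow 12 stops below full scale, i.e. linear value 1/4096.
shadow = 1 / 4096
print("linear encoding:", round(255 * shadow), "codes below 12 stops down")
print("gamma  encoding:", encode(shadow), "codes below 12 stops down")
```

With linear 8-bit encoding, everything 12 stops down rounds to code 0; with the gamma curve, several distinct codes survive down there, which is exactly the redistribution described above.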
Your post reminds me of a question I haven't answered to my own satisfaction. Should you scan flat (0-255) or set the black and white points (levels) for the scan to the range of the picture elements, or slightly beyond those points? Some people have argued that setting the levels for the scan produces more data in the scan file of the actual picture. My sense is that the scan program is just applying the levels to the data that comes from the scan before providing its completed file. So you're really not getting more data. I've tried comparing setting the levels for the scan and afterward in a post-processing program and haven't seen any differences. Any opinions on this?

One thing I have not considered in my posts in this thread is the effect of gamma encoding. I understand that scanners gamma encode the data, which is a non-linear process, rather than storing the pixels as a linear function of the raw data as digitized from the sensor. A non-linear function could result in a collapse of the signal into fewer bits in certain parts of the intensity range. I don't know if this actually happens in practice, given the noise profiles. However, if noise (including grain) shows up in the gamma encoded data then I think the analysis I have been giving still applies. In order for gamma encoding to invalidate this analysis it would require noise that was originally present (which may be several ADC bits wide) to collapse into a distribution that is zero ADC bits wide... in other words a single number rather than a distribution of several numbers. As I mentioned above, I don't know if this happens. This would best be answered by experiments, and so far no one has shown that 8 bits is insufficient, given the workflow I outlined in previous posts.
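Whether the noise distribution collapses under gamma encoding can be checked numerically. The sketch below assumes a hypothetical 12-bit sensor and a deep-shadow noise level a few raw codes wide (both numbers are made up for illustration); under those assumptions the distribution remains several codes wide after gamma encoding to 8 bits, rather than collapsing to a single value:

```python
import numpy as np

GAMMA = 2.2
rng = np.random.default_rng(3)

# Deep-shadow signal whose noise is several ADC steps wide on a
# hypothetical 12-bit sensor (mean and sigma chosen for illustration).
linear = np.clip(rng.normal(0.004, 0.001, 100_000), 0, 1)
raw12 = np.round(linear * 4095)              # raw linear sensor codes
out8 = np.round(255 * linear ** (1 / GAMMA)) # gamma-encoded 8-bit codes

print("distinct raw 12-bit codes:       ", len(np.unique(raw12)))
print("distinct 8-bit codes after gamma:", len(np.unique(out8)))
```

In this regime the gamma curve actually has a slope greater than one in the shadows, so the noise spreads over multiple 8-bit codes and the dithering argument above still applies; whether real scanner noise profiles behave this way is, as the post says, best settled by experiment.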
If you have to convert 8 bits to 16 before doing manipulation, wouldn't you just be better off scanning at 16 bit? Don't you lose something during the conversion? Even if you don't, why bother doing 8, to begin with?

Thanks for the comment Adrian.
Actually, what I discussed has very little relationship to Sony's DSD. At its heart, Sony's system basically takes a continuous analog signal and converts it to a series of pulses. What I described starts with a signal that is inherently pulsed and counts the pulses. To be more specific, a pulse counting system generally uses what is known as an amplifier/discriminator to detect the pulses and convert them directly to logic pulses. These pulses can then be counted. A pulse counting system of the sort I described cannot operate on a continuous signal, and if the signal is inherently pulsed then there is no point in using Sony's DSD to process the signal. Therefore, aside from the fundamental difference between those approaches, there is almost a perfect non-overlap between the types of applications for those two approaches to signal processing.
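A software analogue of that amplifier/discriminator stage is just thresholding plus rising-edge detection. A hedged sketch (the toy waveform, noise level, and threshold are made up for illustration):

```python
import numpy as np

def count_pulses(samples, threshold):
    """Count rising threshold crossings, mimicking an
    amplifier/discriminator followed by a counter."""
    above = samples > threshold
    # A pulse is counted once, at the sample where the signal
    # first rises above the threshold.
    edges = above & ~np.roll(above, 1)
    edges[0] = above[0]
    return int(edges.sum())

# Toy waveform: baseline noise plus three well-separated pulses.
rng = np.random.default_rng(1)
trace = rng.normal(0, 0.05, 1000)
for pos in (100, 400, 800):
    trace[pos:pos + 5] += 1.0

print(count_pulses(trace, 0.5))  # -> 3
```

Setting the threshold well above the baseline noise but well below the single-pulse amplitude is what lets a discriminator reject noise and establish a true zero baseline.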
For digitizing the signal from an electron multiplier or photomultiplier, if the pulse rate is not too high then the performance of an ADC can never exceed, and can seldom equal, the performance of a pulse counting system, especially in noise performance and in the ease of establishing a true zero for a baseline.
It's even possible to design pulse counting systems that work at higher than normal count rates. It takes a special physical configuration. For example, one could adapt the approach used in one of my inventions, US patent number 5,777,326.
There are actually systems that combine pulse counting and ADCs, with pulse counting used at low signal levels and ADCs used at high signal levels. (Splicing those two ranges can be a little tricky, but we don't need to discuss that here.) Certain inductively coupled plasma mass spectrometers use this approach.
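A hedged sketch of how such a hybrid might splice its two ranges (the crossover rate and the gain factor here are hypothetical placeholders, not taken from any real instrument):

```python
def hybrid_reading(pulse_rate, adc_value, crossover=1e7, splice_gain=0.85):
    """Return a signal estimate from a dual-mode detector.

    pulse_rate  -- counts/s from the discriminator (accurate at low flux)
    adc_value   -- digitized analog output (accurate at high flux)
    crossover   -- rate above which pulse pile-up makes counting
                   unreliable (hypothetical value)
    splice_gain -- calibration factor mapping ADC units onto counts/s,
                   fitted in the overlap region (hypothetical value)
    """
    if pulse_rate < crossover:
        return pulse_rate              # counting mode: true zero baseline
    return adc_value * splice_gain     # analog mode, rescaled to counts/s

print(hybrid_reading(5e5, 7e5))    # low flux -> counting mode wins
print(hybrid_reading(5e8, 6.2e8))  # high flux -> spliced analog mode
```

The tricky part mentioned above is fitting `splice_gain` so the two ranges agree in the overlap region; this sketch just assumes that calibration has already been done.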
I don't know if drum scanners use ADCs or pulse counting. In the absence of other information I accept your assertion that they use ADCs. However, If I were to design a drum scanner starting with a blank sheet I would start by evaluating the feasibility of basing it on a pulse counting system and only use an ADC as a fallback position if a pulse counting approach were not feasible for some reason. If an ADC were indicated due to high signal levels I might even use the hybrid approach described in the previous paragraph in order to get superior performance at low signal levels, i.e. for the densest parts of the film. It would increase the cost slightly, but electronics are cheap these days, so the cost increment would be modest.
I fear that one point I was trying to make may have gotten lost in my discussion, which is that, all else being equal, if the digitization footprint is small (e.g. the film area corresponding to a pixel in the scanned image is small) then one can get away with a small bit depth, and if the digitization footprint is large one would need a larger bit depth. All else being equal, if the linear dimension of a pixel (referred to the film plane) is doubled then it takes an extra bit to faithfully capture the signal. Consequently, a scanner with high spatial resolution can use a smaller word length. That doesn't mean you have to use a smaller word length, but you can use a smaller word length.
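The one-extra-bit-per-doubling claim follows from averaging: binning 2x2 pixels averages four samples, which cuts random noise by a factor of sqrt(4) = 2 and therefore raises the SNR by exactly one bit. A quick simulation (the patch value and noise level are assumptions for illustration) bears this out:

```python
import numpy as np

rng = np.random.default_rng(2)
signal = 0.5         # uniform mid-grey patch, linear scale
noise_sigma = 0.02   # assumed per-pixel noise (grain plus electronics)
img = signal + rng.normal(0, noise_sigma, (1024, 1024))

# Bin 2x2: each output pixel has twice the linear dimension.
binned = img.reshape(512, 2, 512, 2).mean(axis=(1, 3))

snr_fine = signal / img.std()
snr_binned = signal / binned.std()
print(f"SNR gain from doubling the pixel size: {snr_binned / snr_fine:.2f}")
# close to 2.0, i.e. one extra useful bit
```

So a high-resolution scan genuinely needs fewer bits per pixel; the information lost to the shorter word comes back when the pixels are aggregated.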
Anyway, I have yet to see anyone demonstrate a case where film scanned in 8 bit mode on a scanner of high spatial resolution yields pictorially inferior results compared to film scanned at the same spatial resolution in 16 bit mode, provided that before any image manipulation of the 8 bit scan is done it is converted to 16 bits. If that workflow is not followed then one could have problems. For example, one should not do extreme image manipulation on the 8 bit image without first converting it to 16 bits. One should not even do smoothing (blurring) before conversion to 16 bits. I am fully prepared to be proven wrong by actual experimental results.
Alanrockwood, should we be treating grain in the emulsion as noise or is it really part of the image? I think of noise as a random process which may occur at different times. But if you scan a black and white negative 50 times, each bit of grain will be in the same place in each file. Therefore, it is not a random process but it's part of the image. I think grain is important to the overall look or emotion of the image; we want it there. That may be why film simulation programs which add pseudo grain just don't look right.

This comment assumes that grain is the only source of noise. If other noise sources are present, and if those noise sources are large enough to show up in an 8 bit scan, then the dithering from those sources of noise will mean that 8 bits is enough to avoid banding, and going to more bits (e.g. 16) doesn't really help the image quality.
"...whether gamma encoding suppresses the grain (in combination with all other noise sources) completely, or does it leave some grain (and/or other noise) in the image..."
I discussed this question somewhat already, but the thread is long, so it would be easy to miss. In an individual negative the grain is deterministic because it is frozen in place. If you scan the same negative twice the grain will be the same in the two scans, but it also has the properties of noise because there is no way to predict the locations, shape, and size of the grain features beforehand.
There was no 16 bit original used for any of those shown images. There was only one scan for all of the images I showed, and it was an 8 bit scan.

PS The fourth picture (8 bit edited) doesn't look like the second sample (16 bit original). It looks worse. So I don't know what you proved except that 8 and 16 bit scans look different even with editing.
Why not scan in 16 bit and avoid this rigamarole?
Then what are we comparing? There should also be a 16 bit scan there to see how it matches or doesn't match the fourth 8-bit result. The fourth picture shows a lot of blotches. Will a 16 bit scan look the same?
I was demonstrating that with the correct workflow you don't get banding when doing an 8 bit scan. The possibility of banding is the usual reason for recommending a 16 bit scan over an 8 bit scan if extreme image manipulations are to be performed on the image after scanning. That is why I focused on that issue.
Do two scans one right after the other. The first at 8 bits and the second at 16 bits. Don't move the film holder. The light output isn't going to change if you don't shut off the scanner and do one right after the other. Use the same shot. Use a chrome, not a negative, to eliminate the conversion issue. Then do your magic on the 8 bit file. Then show us both the 8 bit and 16 bit results so we can compare one against the other.
I will do some more scans to answer your question about whether 8 bit scans look the same as 16 bit scans, but first a word of warning. Even two successive 16 bit scans are very likely to be slightly different. I have seen that. There are many factors that can contribute to the slight irreproducibility between scans, but two of the more likely factors are the reproducibility of the positioning of the film holders between scans and slight variations in the output of the scanner's light source. The differences are small but not hard to find if you look hard enough. This applies to my Canon fs4000us scanner, and even more to my Epson V750 scanner. In the case of my Epson scanner the irreproducibility of the positioning of the scanner head was so bad that it caused IR dust removal to not work reliably.
I am in the process of doing the scans right now and will report the results a little later.
In an effort to minimize subjectivity and maximize objectivity, here are 8 and 16 bit scans of two different step wedges. One step wedge is at 0.05 log increments over 41 steps and is roughly 2.05 log total density range, about what you'd expect for a "normal" bw negative. The other is 0.1 log increments over 41 steps for a total of 4.1 log density range. Each has been scanned to a raw DNG in both 8 bit scanning mode and 16 bit scanning mode on an Epson V850Pro and with Vuescan.
The file is here: http://m.avcdn.com/sfl/8vs16bit_step_wedges.zip
Anybody can pull it down and do anything they want with the files to test things out for themselves.
A few observations:
Actually, you want to scan with 16 bits not really to avoid banding (that is one reason, but not the primary reason), but to get more dynamic range. Plain and simple. Take the 8 bit and 16 bit scans of either of the step wedges and overlay them on each other, then toggle between them. It's painfully obvious that the 16 bit scan is capturing more dynamic range and more discrete tone values than the 8 bit scan, even when viewing both on an 8 bit display. Even with the step wedge that has a 2.05 density range, which is well inside what 8 bits should be able to capture. The bottom 3-4 bits are mostly noise, so you just don't get that many usable discrete tone values with an 8 bit scan.
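A back-of-envelope version of that "bottom 3-4 bits are noise" observation (the noise width here is an assumption for illustration, not a measurement from the posted step-wedge files):

```python
import math

full_scale = 255    # 8-bit code range
noise_codes = 8     # assumed noise amplitude: bottom 3 bits = 2^3 codes

# Tone levels must be roughly one noise-width apart to be reliably
# distinguished, so the usable level count is far below 256.
usable_levels = full_scale // noise_codes
print(f"usable tone levels at 8 bits: ~{usable_levels}")
print(f"effective bit depth: ~{math.log2(usable_levels):.1f} bits")
```

Under that assumption an 8-bit scan delivers on the order of 30 distinguishable tones, which is why the 16 bit file shows visibly more discrete steps even on an 8-bit display.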
This also translates to a superior 8 bit file if scanning at 16 bits and then saving at 8 bits, especially if saved in a color space that has gamma encoding, so I'd be leery of saying an 8 bit scan is all you need. If you want to save your files at 8 bits, feel free to; though if you're going to be doing a lot of edits or manipulations, it's best to have the 16 bit data, especially for color work. Black and white is more tolerant, and posterization and banding won't show up as quickly, but for color work 8 bit falls apart pretty quickly.
After compression (lossless of course), the size difference between 8 and 16 bit tif files isn't that large, so there's almost no practical impetus to save at 8 bits, as it doesn't cut your disk usage in half if you use compression when saving the tif files.