The Nyquist sampling theorem says that it is possible to exactly reconstruct a bandwidth-limited image if the image is sampled at a rate greater than the Nyquist limit. The Nyquist limit is twice the highest frequency present in the image. Here we are referring to spatial frequency, and "bandwidth limited" means that there is a frequency above which the image contains no frequency components.
Sometimes we misunderstand the implications of the Nyquist sampling theorem. While it does say that it is possible to reconstruct the image (i.e. the signal) without error, it does not say that the sampled values themselves are an exact reconstruction of the original image. Let me illustrate this with a simple one-dimensional example.
First we have a simulated image as it would exist prior to sampling.
[Attachment 225062: the simulated signal before sampling]
By the way, this image is a simple sinusoidal function (a cosine function).
Next we have the result of sampling this image at 1.1 times the Nyquist limit. This sampling rate is fast enough that, in theory, it is possible to exactly reconstruct the original image.
[Attachment 225064: the signal sampled at 1.1 times the Nyquist limit]
Note that this is considerably distorted compared to the original signal. Rather than tracing out the simple sinusoid of the original function, it shows a strong beat pattern. This shows definitively that the sampled image itself does not necessarily reproduce the original signal accurately, even if the sampling rate satisfies the Nyquist theorem. A more sophisticated calculation is required to accurately reconstruct the original function from the sampled result. Simply taking the sampled function itself as the reconstruction is not sufficient and is prone to error.
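The beat pattern is easy to reproduce numerically. Below is a minimal sketch, assuming numpy; the unit frequency and the 100-sample record length are my own arbitrary choices, not values taken from the figures.

```python
import numpy as np

f = 1.0                     # signal frequency (arbitrary units)
fs = 1.1 * (2 * f)          # sampling rate: 10% above the Nyquist limit 2f
n = np.arange(100)          # sample indices
samples = np.cos(2 * np.pi * f * n / fs)   # the sampled cosine values

# The underlying signal has constant amplitude 1, yet the magnitudes of
# the sample values swing slowly between roughly 1 and 0.14 -- the beat
# pattern described above -- because the samples drift in phase
# relative to the peaks of the cosine.
print(np.abs(samples).max(), np.abs(samples).min())
```

The sample values themselves show an amplitude envelope that the continuous signal does not have, which is exactly the distortion being discussed.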
I could add some other interesting examples showing how things get better if the sampling rate is even higher. However, to keep this simple let me defer doing this and just discuss the implications of what I have shown above.
So, what are the implications? Well, in discussions of the resolution of scanned images the topic of the Nyquist limit often arises, and it is stated that the sampling rate needs to be above the Nyquist limit if the scanner is going to resolve a repetitive image (lines). This is true, but it doesn't fully capture the importance of sampling at an even higher rate. If the sampling rate is only slightly above the Nyquist limit, repetitive features in images may be significantly distorted whenever the sampled result itself is taken as the image reconstruction (which it is not). Higher-order analysis must be done in order to provide an accurate reconstruction. Since few if any scanning systems today attempt more sophisticated reconstructions, it is important that the sampling rate be significantly higher than the Nyquist limit to ensure that the scan provides an accurate reproduction of the image.
By the way, a similar analysis applies to images acquired by digital cameras.
No system is perfect. Even film introduces its own kind of error, because it rolls contrast off as the spatial frequency increases, regardless of what the actual contrast is at that resolution.
In reply to:
"The idea behind the Nyquist theory only really works if your highest frequencies are pure triangle waves whose peaks occur directly at the intervals at which the samples were taken. It will turn a sine wave into a triangle wave, and destroy a square wave."

No, that's not correct. A triangle wave has higher odd harmonics, so a 20 kHz triangle wave also has 60 kHz, 100 kHz, 140 kHz etc. components. When sampling a 20 kHz triangle wave at 44.1 kHz for CD, the higher harmonics will be suppressed by the anti-aliasing filter before the ADC, and any residues will alias to lower frequencies. For example, the residue of the 60 kHz harmonic aliases to 15.9 kHz on the CD. On playback, the original 20 kHz triangle wave has become a 20 kHz sine wave, since the higher harmonics are gone. And because there is a reconstruction low-pass filter at 20-ish kHz on the playback side, two sample points per cycle are enough to define the 20 kHz sine wave. Don't forget that the reconstruction filter is an essential part of any Nyquist theorem implementation.
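The harmonic folding described in that reply can be sketched with a small helper function (`alias_frequency` is my own hypothetical name, not anything from the post):

```python
def alias_frequency(f, fs):
    """Fold a frequency f into the first Nyquist zone [0, fs/2]
    for a sampling rate fs (frequencies in any consistent unit)."""
    f = f % fs                       # reduce modulo the sampling rate
    return fs - f if f > fs / 2 else f

# Odd harmonics of a 20 kHz triangle wave, folded at the CD rate (kHz):
for h in (20, 60, 100, 140):
    print(h, "kHz ->", round(alias_frequency(h, 44.1), 1), "kHz")
```

The 60 kHz harmonic folds to 15.9 kHz, matching the figure in the post; the 100 kHz and 140 kHz harmonics fold to 11.8 kHz and 7.7 kHz respectively.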
In reply to:
"Can you define what you mean by 'sampling rate' as it applies to scanners or digital sensors? I'm having a hard time transferring what I learned about signals and systems in engineering school some 30 years ago to this topic."

Sorry, I forgot to address your question. The sampling rate in an image relates to the spatial distance between the sampling points, just as the sampling rate in audio recording relates to the time between the sampling points. People in the imaging business often refer to things like "spatial frequency" in analogy to "frequency" in audio recording.
In reply to:
"What you described in the OP is aliasing which, for an optical system, means you are optically resolution limited.
You want the system to be sensor resolution limited for this specific reason (and others) whenever possible.
While there are analogies between a 1-dimensional linear system and 2-dimensional linear systems, this is one detail where it is a poor comparison. You need to understand why Nyquist is different for spatial signals than for temporal signals (and why your analogy is a poor one in this case, i.e. where the comparison is breaking down).
-Jason
Edit to add: Refer to “Fourier Optics” by Goodman and “Electro-Optical Imaging System Performance” by Holst"
Actually, what I describe is not aliasing. The signal was a pure sinusoid sampled at a sampling rate higher than the Nyquist limit. There is no possibility of aliasing in that case, since there were no frequency components above the Nyquist limit.
What is being illustrated in my first post is the fact that one cannot simply take the sampled function and accept that as an accurate representation of the original signal. One must apply a correct transformation in order to do an error-free reconstruction of the original signal. One way to do that is by applying the Whittaker–Shannon interpolation formula. This can reconstruct the original signal without error. Note, however, that there is no way to reconstruct an error-free original signal while still retaining the same sampling points.
For example, if one uses the Whittaker–Shannon interpolation formula to reconstruct the original signal and then samples again at the same sampling points, one will obtain exactly the same result as the original sampling, a signal with beat patterns in this case. However, if one uses the Whittaker–Shannon interpolation formula and resamples on a denser set of sampling points, one can get a better representation of the original signal. For example, doubling the number of sampling points would eliminate the beat pattern in the digitized signal. (I'm talking about doubling the number of points in the reconstructed signal, after the digitized signal has passed through an appropriate reconstruction filter, not doubling the number of sampling points in the original digitization process, although that would also work.)
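The resampling idea above can be sketched numerically. This is a minimal illustration assuming numpy (whose `np.sinc` is the normalized sinc used in the Whittaker–Shannon formula); the frequencies, record length, and evaluation window are my own arbitrary choices, and a finite record leaves a small truncation error near its ends.

```python
import numpy as np

def sinc_reconstruct(samples, fs, t):
    """Whittaker-Shannon interpolation: evaluate the band-limited signal
    implied by `samples` (taken at rate fs) at arbitrary times t."""
    n = np.arange(len(samples))
    # x(t) = sum_n x[n] * sinc(fs*t - n); np.sinc(u) = sin(pi*u)/(pi*u)
    return np.sum(samples * np.sinc(fs * t[:, None] - n), axis=1)

f = 1.0
fs = 2.2 * f                                  # 10% above the Nyquist limit
n = np.arange(400)
samples = np.cos(2 * np.pi * f * n / fs)      # beat-patterned sample values

# Resample on a ~10x denser grid near the middle of the record, staying
# away from the ends where truncating the infinite sum causes error.
t = np.linspace(190 / fs, 210 / fs, 200)
rebuilt = sinc_reconstruct(samples, fs, t)
err = np.max(np.abs(rebuilt - np.cos(2 * np.pi * f * t)))
print(err)   # small residual; the beat pattern is gone after interpolation
```

On the dense grid the interpolated values follow the constant-amplitude cosine closely, even though the raw sample values showed a strong beat.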
In reply to:
"I propose that, by definition, the Nyquist frequency is the frequency that you are sampling at. You are in essence trying to say that you are sampling at a sampling rate higher than the sampling rate of the linear system, which is non-sensical. So you need to be careful with your definitions."

Count the antinodes in the top figure of my first post. (There are 41.) Now count the number of sampling points in the bottom figure. (There are 45.) The Nyquist limit corresponds to 41 sampling points, the same as the number of antinodes in the top figure. Since the bottom figure contained 45 sampling points, the sampling rate is faster than the Nyquist limit (by 10%), so there is no aliasing.
Again, I point to the references I listed. Online forums are poor platforms for getting into the type of discussion this topic deserves.
The sampling rate of CDs was set at 44.1 kHz using the Nyquist theorem, on the assumption that even the best human hearing can't hear above 20 kHz (though 16-18 kHz would probably have been more accurate, as mostly only small children can hear up to 20 kHz, maybe 22 kHz). In the real world, however, even adults with less than stellar hearing have been shown in double-blind studies to distinguish between 44.1 kHz and 88.2 kHz sampling rates (though not all of them could). I've participated in one such study myself and could easily hear the difference, even though my hearing tops out at around 15 kHz (to be fair, I am a musician). For lack of a better word, there's more sparkle or air in the 88.2 kHz sampled audio tracks. Today there are consumer-level products capable of sample rates up to 192 kHz for this reason (though that might be overkill).
I will say that, like many things in life, your ability to hear detail in sound depends less on your raw ability to hear than on your brain's ability to process what it hears. This is why so many of the best audio producers and mixers in the world are middle-aged, despite their lack of raw hearing ability: they have spent a lifetime training their brains to understand what they hear, which gives them an edge over the average person, even one with technically better hearing. It's like how professional baseball players can see the rotation of a fastball, where to the rest of us it's just a white blur.
In reply to:
"Let's focus for a minute on the definition of the Nyquist limit, and assume for simplicity that we are dealing with a simple sinusoidal signal with a frequency of F. The Nyquist limit is, by definition, sampling the signal at a rate of 2F. Any sampling rate greater than 2F is not undersampling since, by definition, undersampling only occurs when the sampling rate is less than 2F."

Alan, I'm quite familiar with this stuff as well... decades' worth of optical and imaging system design familiar. The Nyquist frequency corresponds to the rate at which you sample. I contend that once you expand the system to include the second sampler, your Nyquist frequency has changed, and you are not acknowledging that.
You’re artificially declaring Nyquist to be a value based on only part of the full linear system. It would be like inferring a system MTF based only on the MTF of the detector. No, you’ve only identified the detector MTF.
You really are showing an example of aliasing whether you acknowledge it or not. That’s classic aliasing. Or sub-sampling, some people like to think of it like that.
I get where you’re going, though...This is a discussion of scanning at a higher resolution than film grain. In single point detection systems (or arrays with low fill factor), the rule of thumb is to design the optics to have a PSF ~1.3 times the detector size or pitch to keep the signal modulation of point sources to a controllable amount. So, in those systems the Nyquist is established to be at least 1.3x of the highest expected spatial frequency, then the signal/image processing (or point source tracking) algorithms are designed based on that valid assumption...i.e. don’t have to accommodate significant aliasing downstream.
Personally I scan at 3600 dpi to produce 600 dpi images (so 6x my final image resolution). About 15 years ago I did some calculations to determine that at that scanning resolution the sampling-related maximum contrast modulation would be an acceptably undetectable amount (I don't recall the actual number, maybe 5-10%) when reducing the image resolution to 600 dpi or less. That factors in realistic assumptions of the reduced contrast when you're down the MTF curve of a real lens, which you should also be considering in your example.
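As a back-of-the-envelope illustration of why a generous oversampling factor tames this modulation, consider the worst case where a sinusoid's peak falls midway between two samples. This simple formula is my own sketch, not the poster's calculation: it ignores lens MTF (which the post above says should also be factored in), and note that the 6x figure in the post is relative to the final image resolution, not to Nyquist.

```python
import math

def worst_case_peak(oversampling):
    """Worst-case recorded peak amplitude of a unit-amplitude sinusoid
    sampled at `oversampling` times its Nyquist rate: the nearest sample
    can miss the true peak by half a sample interval."""
    return math.cos(math.pi / (2 * oversampling))

for k in (1.1, 2, 4, 6):
    print(f"{k}x Nyquist -> worst-case recorded peak {worst_case_peak(k):.3f}")
```

At 1.1x Nyquist the recorded peak can dip to about 0.14 of the true amplitude (the beat minimum discussed earlier in the thread), while at 6x it stays above 0.96, i.e. only a few percent of sampling-related modulation.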