Accuracy of sampling images near the Nyquist limit


alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,184
Format
Multi Format
The Nyquist sampling theorem says that it is possible to exactly reconstruct a bandwidth-limited image if the image is sampled at a rate greater than the Nyquist limit. The Nyquist limit is twice the highest frequency present in the image. Here we are referring to spatial frequency, and "bandwidth limited" means that there is a frequency limit above which the image contains no frequency components.

Sometimes we misunderstand the implications of the Nyquist sampling theorem. While it does say that it is possible to reconstruct the image (i.e. the signal) without error, it does not say that the sampling itself produces an exact reconstruction of the original image. Let me illustrate this with a simple one-dimensional example.

First we have a simulated image as it would exist prior to sampling.

simulated image on film.jpg

By the way, this image is a simple sinusoidal function (a cosine function).

Next we have the result of sampling this image at 1.1 times the Nyquist limit. This sampling rate is fast enough that, in theory, it is possible to exactly reconstruct the original image.
image sampled at 1 point 1 times Nyquist limit.jpg

Note that this is a considerable distortion compared to the original signal. Rather than being a simple sinusoidal function that reconstructs the original function, it shows a strong beat pattern. This definitively shows that the sampled image itself does not necessarily accurately reconstruct the original signal, even if the sampling rate satisfies the Nyquist theorem. It would require a more sophisticated calculation to accurately reconstruct the original function from the sampled result. Simply taking the sampled function itself as the reconstruction is not sufficient and is prone to error.
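The effect is easy to reproduce numerically. Here is a short numpy sketch (the frequency, record length, and window size are my own illustrative choices): a unit-amplitude cosine sampled at 1.1 times the Nyquist limit gives samples whose local peaks swing far below the true amplitude.

```python
import numpy as np

f = 1.0                  # signal frequency (arbitrary units)
fs = 1.1 * (2 * f)       # sampling rate: 1.1x the Nyquist limit
n = np.arange(2000)
samples = np.cos(2 * np.pi * f * n / fs)

# The continuous signal has a constant peak amplitude of 1.0, but the
# sampled values show a slow beat envelope: in some stretches the samples
# land near the peaks of the cosine, in others near the zero crossings.
window = 4
peaks = [np.abs(samples[i:i + window]).max()
         for i in range(0, len(samples) - window, window)]
print(f"largest local peak:  {max(peaks):.3f}")   # close to 1.0
print(f"smallest local peak: {min(peaks):.3f}")   # far below 1.0
```

Treating the samples themselves as the image, you would conclude the signal's amplitude varies strongly, even though the original amplitude is constant.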

I could add some other interesting examples showing how things get better if the sampling rate is even higher. However, to keep this simple let me defer doing this and just discuss the implications of what I have shown above.

So, what are the implications? Well, in discussions of the resolution of scanned images the topic of the Nyquist limit often arises, and it is stated that the sampling rate needs to be above the Nyquist limit if the scanner is going to resolve a repetitive image (lines). This is true, but it doesn't fully capture the importance of sampling at an even higher rate: if the sampling rate is only slightly higher than the Nyquist limit, then repetitive features in images may be significantly distorted if the sampled result itself is taken as the image reconstruction (which it is not). Higher-order analysis must be done in order to provide an accurate reconstruction. Since few if any scanning systems today attempt to do more sophisticated reconstructions, it is important that the sampling rate be significantly higher than the Nyquist limit to assure that the scan provides an accurate reproduction of the image.

By the way, a similar analysis applies to images acquired by digital cameras.
 
Last edited:

rbultman

Member
Joined
Sep 1, 2012
Messages
411
Location
Louisville,
Format
Multi Format
Can you define what you mean by 'sampling rate' as it applies to scanners or digital sensors? I'm having a hard time transferring what I learned about signals and systems in engineering school some 30 ya to this topic.
 

spijker

Member
Joined
Mar 20, 2007
Messages
620
Location
Ottawa, Canada
Format
Medium Format
Moire patterns are a typical example of subsampling or aliasing. The pixel density (sampling rate) is too low for the detail density and the original image cannot be reproduced from the captured data. That's why camera manufacturers put an anti-aliasing filter on sensors. It acts as a low pass filter that suppresses (blurs) the image detail that is too fine to be captured accurately with the pixel density of the underlying sensor.
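In one-dimensional pixel terms the effect is easy to demonstrate (the frequencies here are made-up illustrative numbers): detail at 0.9 cycles per pixel, well above the 0.5 cycles-per-pixel Nyquist limit, produces exactly the same samples as a coarse 0.1 cycles-per-pixel pattern.

```python
import numpy as np

n = np.arange(20)                       # pixel indices
fine = np.cos(2 * np.pi * 0.9 * n)      # detail finer than Nyquist (0.5 cyc/pixel)
coarse = np.cos(2 * np.pi * 0.1 * n)    # its low-frequency alias

# Once sampled, the two are indistinguishable: this is aliasing, and it is
# what the anti-aliasing filter prevents by removing the fine detail first.
print(np.allclose(fine, coarse))        # True
```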
 

rbultman

Member
Joined
Sep 1, 2012
Messages
411
Location
Louisville,
Format
Multi Format
So in the spatial case, the sampling rate is really the pitch between sensing elements, right? I suppose the size of the sensing site plays a role too in capturing details. One criticism of cameras with AA filters is that the details are blurry, as you say. Also, as you say, the lack of an AA filter can lead to sharper images in the details at the expense of moire in images with repeating patterns such as fabric.

I'm not sure how you could do 'higher order analysis' in order to recover (or remove) additional information. At best I think you could detect the moire in post and do something to mitigate that. The problem is that once the higher frequency data is aliased in, I'm not sure it is possible to filter out the aliased data. I am in no way close to being an expert (or even a novice) in this area, I'm just applying lessons from school 30 ya.

Do such higher order analyses exist? What is required in order for them to succeed? Is an accurate model of the imaging system needed in order to do these analyses?
 
OP
OP

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,184
Format
Multi Format
One way to recover the original function from the sampled result is to apply the Whittaker–Shannon interpolation formula. Basically, that formula convolutes the sampled function with a sinc function, which I believe is equivalent to applying a brick wall filter to the sampled result. If anyone wants to see the equations then let me know and I will post them.

What I just said applies to a one-dimensional function, like the one in my original post or a function one might acquire from an audio stream. I assume there is a close equivalent to the Whittaker–Shannon interpolation formula for a two-dimensional sampled function, like an image from a scanner or digital camera.
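For anyone who wants to experiment, here is a minimal numpy sketch of the one-dimensional formula (the signal and rates are illustrative choices, and a real implementation would have to worry about truncating the infinite sum near the ends of the record):

```python
import numpy as np

def sinc_interpolate(samples, fs, t):
    """Whittaker-Shannon reconstruction: x(t) = sum_n x[n] * sinc(fs*t - n).
    np.sinc(x) is sin(pi*x)/(pi*x), which is exactly the kernel needed."""
    n = np.arange(len(samples))
    return np.sum(samples[None, :] * np.sinc(fs * t[:, None] - n[None, :]), axis=1)

# a 1 Hz cosine sampled at 1.1x the Nyquist rate (2.2 samples per second)
fs = 2.2
n = np.arange(2000)
samples = np.cos(2 * np.pi * n / fs)

# evaluate the reconstruction on a dense grid, well away from the record ends
t = np.linspace(300.0, 500.0, 400)
recon = sinc_interpolate(samples, fs, t)
print(np.max(np.abs(recon - np.cos(2 * np.pi * t))))  # small (truncation error only)
```

Even though the samples themselves show a strong beat pattern, the sinc-interpolated curve follows the original cosine closely.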
 

wombat2go

Member
Joined
Jul 21, 2013
Messages
352
Location
Michigan
Format
Medium Format
The following might show pictorially how the photo can be converted into the frequency domain, processed,
then converted back to the spatial domain.

Here is a scan of a photo using SMC Takumar 6X7 1:2.8 90mm on the home brew camera and Fuji 169NS home developed C41.
The scan was a 4133 x 3512 pixel tif (above is a down sample) of the negative (70 x 55 mm) using the PrimeFilm Pro 120 film scanner.
https://app.box.com/s/mgnkw173ij8p7orfnxd9p394efn3t7im

Note that there are more vertical striations than horizontal, and with the lens opened up, blur in both foreground and background.

A Fourier transform is applied to the scan.

Here is a highly logarithmically amplified version of the frequency spectrum, just for viewing. That is to allow the low level high order harmonics to be visible without saturating the low order harmonics.
This is showing that there is information out toward the 4000th harmonic but I am sure it is such a low level that it does not affect the image quality.
https://app.box.com/s/7x6h8clqiev1uxwkr8mbbfv68csnpond
If it did, then the scanner would be below the required Nyquist frequency.

The edges of the square are out at the 4000th harmonic.
The centre of the square ( coordinate 0,0) has the "DC" constant level.
The first order sinewave has a period of 70mm, and the 4000th harmonic has a period of 70mm/4000 ( I think)
Note that this image has higher order horizontal harmonics than vertical, I suppose due to the vertical striations.


While the image is in the frequency domain, a de-convolution of the optical blur can be applied.

Here I assumed Gaussian blur, and after a few tries, guessed at the sigma, applying de-convolution using
an inverted Gaussian to boost the higher harmonics slightly.


Converted back to image, the deconvoluted image is a little sharper with slightly increased depth of field in the mid distance.
https://app.box.com/s/yae94qhs78om2doknrh2o9t8711m28xd
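The whole pipeline (transform, boost the high harmonics, transform back) can be sketched in a few lines of numpy. This toy version is only an illustration of the idea, not the actual processing used above: it assumes a known Gaussian sigma and adds a small regularizing constant so the inverse filter doesn't amplify frequencies the blur has destroyed.

```python
import numpy as np

def gaussian_otf(shape, sigma):
    """Frequency response of a Gaussian blur of width sigma (pixels).
    The Fourier transform of a Gaussian is again a Gaussian."""
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    return np.exp(-2.0 * np.pi**2 * sigma**2 * (fx**2 + fy**2))

rng = np.random.default_rng(0)
image = rng.random((128, 128))          # stand-in for the scanned negative
otf = gaussian_otf(image.shape, sigma=2.0)

# simulate the optical blur in the frequency domain
blurred = np.fft.ifft2(np.fft.fft2(image) * otf).real

# de-convolution: boost the higher harmonics, but only where the transfer
# function is still healthy (plain 1/otf would blow up the lost frequencies)
eps = 1e-3
restored = np.fft.ifft2(np.fft.fft2(blurred) * otf / (otf**2 + eps)).real

print(np.abs(blurred - image).mean() > np.abs(restored - image).mean())  # True
```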
 

Adrian Bacon

Member
Joined
Oct 18, 2016
Messages
2,086
Location
Petaluma, CA.
Format
Multi Format
The Nyquist sampling theorem says that it is possible to exactly reconstruct a bandwidth-limited image if the image is sampled at a rate greater than the Nyquist limit. The Nyquist limit is twice the highest frequency present in the image. Here we are referring to spatial frequency, and "bandwidth limited" means that there is a frequency limit above which the image contains no frequency components.

Sometimes we misunderstand the implications of the Nyquist sampling theorem. While it does say that it is possible to reconstruct the image (i.e. the signal) without error, it does not say that the sampling itself produces an exact reconstruction of the original image. Let me illustrate this with a simple one-dimensional example.

First we have a simulated image as it would exist prior to sampling.

View attachment 225062
By the way, this image is a simple sinusoidal function (a cosine function).

Next we have the result of sampling this image at 1.1 times the Nyquist limit. This sampling rate is fast enough that, in theory, it is possible to exactly reconstruct the original image.
View attachment 225064
Note that this is a considerable distortion compared to the original signal. Rather than being a simple sinusoidal function that reconstructs the original function, it shows a strong beat pattern. This definitively shows that the sampled image itself does not necessarily accurately reconstruct the original signal, even if the sampling rate satisfies the Nyquist theorem. It would require a more sophisticated calculation to accurately reconstruct the original function from the sampled result. Simply taking the sampled function itself as the reconstruction is not sufficient and is prone to error.

I could add some other interesting examples showing how things get better if the sampling rate is even higher. However, to keep this simple let me defer doing this and just discuss the implications of what I have shown above.

So, what are the implications? Well, in discussions of the resolution of scanned images the topic of the Nyquist limit often arises, and it is stated that the sampling rate needs to be above the Nyquist limit if the scanner is going to resolve a repetitive image (lines). This is true, but it doesn't fully capture the importance of sampling at an even higher rate: if the sampling rate is only slightly higher than the Nyquist limit, then repetitive features in images may be significantly distorted if the sampled result itself is taken as the image reconstruction (which it is not). Higher-order analysis must be done in order to provide an accurate reconstruction. Since few if any scanning systems today attempt to do more sophisticated reconstructions, it is important that the sampling rate be significantly higher than the Nyquist limit to assure that the scan provides an accurate reproduction of the image.

By the way, a similar analysis applies to images acquired by digital cameras.

Nyquist is the minimum, not a “that’s all you need”. As you approach the limit, unless your sample points are at exactly the points of change in the signal you’re sampling, it will introduce error into the sampled signal. In reality, you want to over-sample quite a bit above the highest spatial frequency you require accuracy for.

No system is perfect. Even film introduces its own kind of error because it rolls contrast off as the spatial frequency increases, regardless of what the actual contrast is at that resolution.
 

wombat2go

Member
Joined
Jul 21, 2013
Messages
352
Location
Michigan
Format
Medium Format
No system is perfect. Even film introduces its own kind of error because it rolls contrast off as the spatial frequency increases, regardless of what the actual contrast is at that resolution.

Yes, the C41 dye clouds (I think) become visible, especially on ISO 400 and 800 35mm film.
For example iso 800 35mm:
https://app.box.com/s/omlyj78yl9m884dq7kdv

I have tried introducing notch filters without much success.
I think the dye clouds have in themselves a distribution of frequencies that extend down into the wanted image frequencies.

So these days I like to use medium format ISO 100, 160, and 200 films.
 

jim10219

Member
Joined
Jun 15, 2017
Messages
1,634
Location
Oklahoma
Format
4x5 Format
I don't know much about how it applies to the visual world. But in the audio world, it's heavily flawed. The idea behind the Nyquist theory only really works if your highest frequencies are pure triangle waves whose peaks occur directly at the intervals at which the samples were taken. It will turn a sine wave into a triangle wave, and destroy a square wave. And in audio, waves are rarely pure. Plus, there are frequencies beyond the range of human hearing that, while you can't hear them directly, you can hear the effects of their interference patterns on waves within the audible spectrum. So while you may not be able to hear them directly, you can notice their presence or absence.

The sampling rate of CDs was set using the Nyquist theorem at 44.1 kHz, thinking that even the best of human hearing can't hear above 20 kHz (though probably 16-18 kHz would have been more accurate, as mostly only small children can hear up to 20 kHz, maybe 22 kHz). In the real world, however, even adults with less than stellar hearing have been shown to distinguish the difference between 44.1 kHz and 88.2 kHz sampling rates in double blind studies (though not all). I've participated in one such study myself, and could easily hear the difference, even though my hearing tops out at around 15 kHz (to be fair, I am a musician). For lack of a better word, there's more sparkle or air in the 88.2 kHz sampled audio tracks. Today, there are consumer level products available capable of sample rates of up to 192 kHz for this reason (though that might be overkill). I will say that, like many things in life, your ability to hear detail in sound is less dependent on your actual ability to hear and more tied to your brain's ability to process what it hears. This is why so many of the best audio producers and mixers in the world are middle aged, despite their lack of raw hearing ability. They have spent a lifetime training their brain to understand what they hear, which gives them an edge over the average person, even one with technically better hearing. It's like how professional baseball players can see the rotation of a fastball, where to the rest of us it's just a white blur.

My point is, the Nyquist Theorem is overly simplistic and doesn't accurately translate analog to digital like it was originally thought to have. At best, it's a starting point, and will generally give acceptable results. After all, CD's, with their 44.1kHz sample rates, were prized for their fidelity over older analog media (and certainly do a better job with high frequency reproduction than even vinyl). And these days, people often listen to music on MP3's, AAC, and other compressed formats which generally do an even worse job at sound reproduction than CD's (though not always). And higher sampling rate formats like DVD Audio and SACD have been discontinued because the general public just didn't see the need for such high quality audio, when the older methods sounded good enough in real world environments (even a modest 70's hifi system puts the average high end Bluetooth speaker to shame) and didn't require updating their entire collection. So it might better be seen as a theory for good enough, and not necessarily a theory to depict accuracy.
 

spijker

Member
Joined
Mar 20, 2007
Messages
620
Location
Ottawa, Canada
Format
Medium Format
The idea behind the Nyquist theory only really works if your highest frequencies are pure triangle waves whose peaks occur directly at the intervals at which the samples were taken. It will turn a sine wave into a triangle wave, and destroy a square wave.
No, that's not correct. A triangle wave has higher odd harmonics. So a 20 kHz triangle wave also has 60 kHz, 100 kHz, 140 kHz etc. components. So when sampling a 20 kHz triangle wave at 44.1 kHz for CD, the higher harmonics will be suppressed by the anti-aliasing filter before the ADC, and the residues will alias to lower frequencies. For example, the residue of the 60 kHz harmonic aliases to 15.9 kHz on the CD. When playing the CD back, the original 20 kHz triangle wave has become a 20 kHz sine wave, since the higher harmonics are gone. And because there is a reconstruction low pass filter at 20-ish kHz on the play-back side, just over two sample points per period are enough to define the 20 kHz sine wave. Don't forget that the reconstruction filter is an essential part of the Nyquist theorem implementation.
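The harmonic structure of a triangle wave is easy to check numerically. A small sketch (one period of an ideal triangle wave, heavily oversampled so the harmonics are resolved cleanly):

```python
import numpy as np

# one period of a triangle wave in [-1, 1], sampled on a fine grid
N = 4096
t = np.arange(N) / N
tri = 2 * np.abs(2 * (t - np.floor(t + 0.5))) - 1

spec = np.abs(np.fft.rfft(tri)) / N
print(round(spec[1] / spec[3]))   # 9: the 3rd harmonic is 1/9 the fundamental
print(spec[2] < 1e-10)            # True: even harmonics are absent
```

Only odd harmonics appear, falling off as 1/n^2, which is why the components of a 20 kHz triangle wave above the audio band carry so little energy.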

The Nyquist theorem is solid. It's a mathematical theorem that's been around for a long time. It's the implementation where the compromises/flaws are. For CD, the 20 kHz bandwidth and the 44.1 kHz sampling frequency leaves only the range from 20 kHz to 22.05 kHz for the anti-aliasing filter. Designing a low pass filter (analog or digital) that drops 96 dB (for 16-bit audio) over that 2.05 kHz is not easy and will create all kinds of artifacts. That's where the advantage lies with higher sample rates, more space for the anti-aliasing filter on the ADC side and reconstruction filter on the DAC side.
 
Last edited:
OP

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,184
Format
Multi Format
Both jim10219 and spijker make good points. First, spijker is correct in saying that the Nyquist theorem is mathematically solid and that there can be compromises/flaws in the implementation of an audio reproduction system that is based on sampling.

jim10219 is right in saying that at the Nyquist sampling rate the sampled result is a triangle wave, provided that the sampling occurs exactly at the peaks of the sine wave. Furthermore, I will make the comment that if a pure sine wave is sampled exactly at the nodes (i.e. the zero-crossing points) then the resulting sampled function is zero everywhere, which is an even greater distortion. Part of the problem is that the Nyquist theorem does not say that sampling at twice the highest frequency is good enough. Instead the theorem says that the sampling rate needs to be greater than twice the highest frequency. (In mathematical terms one would use a symbol representing "greater than" rather than a symbol representing "greater than or equal to".) However, as my original post pointed out, even sampling at a rate that satisfies the Nyquist theorem can sometimes be problematical if one insists on assuming that the sampled result is a good representation of the original function. Instead it is necessary to use a mathematically correct signal reconstruction filter, such as a brick wall filter, and (as a subtle point not yet mentioned) a correct reproduction also requires filling in points between the sampled points.

The Whittaker–Shannon interpolation formula can do this. It basically applies what is known as a "brick wall filter" to the sampled wave form. Note that I said the filter is applied to the sampled wave form, i.e. the operation of reconstructing the original wave form from the digitized version, which is ideally done by playing it back through a brick wall filter. This will perfectly restore the original function, provided that the original function was "bandwidth limited", which means that it contained no frequency components above the Nyquist limit. This requirement for a bandwidth limitation on the original function generally means that a filter needs to be applied before digitization occurs. That's the anti-aliasing filter. It must be an analog filter, and in the spirit of spijker's comments it is not always easy to design a good analog filter that can satisfy the technical requirements, especially if the frequency cutoff of the filter needs to be extremely sharp, as is the case with audio CDs.

There is actually a way around the sharp cutoff problem, and that is to sample the wave form at a higher rate than one thinks is needed (for example, sampling an audio wave form at 88 kHz rather than 44 kHz). This can make the analog cutoff filter easier to design. Then one applies an appropriate digital filter to the digitized function and resamples the result to a lower frequency (e.g. 44 kHz). In effect, one shifts the majority of the problem from the analog regime to the digital regime. That does not necessarily make the filtering problem easier, because it may take a powerful computer, combined with good algorithms, to do the digital filtering, but powerful hardware and software are cheap these days compared to the cost when audio CD standards were being formed.
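That oversample-then-decimate idea can be sketched in numpy. The tone frequencies and the 201-tap windowed-sinc filter below are arbitrary illustrative choices, nowhere near a production CD filter:

```python
import numpy as np

fs_hi, fs_lo = 88200, 44100
t = np.arange(0, 1.0, 1.0 / fs_hi)
# a 10 kHz tone we want to keep, plus a 30 kHz component that must not
# be allowed to alias into the audio band when we drop to 44.1 kHz
x = np.sin(2 * np.pi * 10000 * t) + np.sin(2 * np.pi * 30000 * t)

# windowed-sinc FIR low-pass near 20 kHz; at the oversampled rate there
# is a whole 10 kHz of transition band available, so a modest filter works
taps = 201
k = np.arange(taps) - taps // 2
cutoff = 20000 / fs_hi                      # normalized cutoff frequency
h = 2 * cutoff * np.sinc(2 * cutoff * k) * np.hamming(taps)
filtered = np.convolve(x, h, mode="same")

decimated = filtered[::2]                   # now at 44.1 kHz
spec = np.abs(np.fft.rfft(decimated))
# 30 kHz would alias to 44100 - 30000 = 14100 Hz; the filter removed it
print(spec[10000] / spec[14100] > 50)       # True
```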

Jim is also right when he points out that frequencies above the audio cutoff can also affect audio sensations, or at least I assume he is correct. I don't have actual knowledge on that topic, but I accept Jim's assertions. However, it is easy to posit a physical mechanism that can account for this. For example, if the physical system responsible for hearing (for example, the hair cells in the ear that respond to sound) has a non-linear response, then there will be frequency mixing, and signals above the nominal hearing threshold can end up producing a response below the nominal frequency cutoff and therefore contribute to the audio sensation. I won't go deeply into the math on this, but I will give an example. Suppose that a violin is playing a note at a frequency of 13 kHz. There will be overtones at higher frequencies, such as 26 kHz and 39 kHz. The components at 26 kHz and 39 kHz can be mixed by the non-linearities in the hearing system to produce responses at other frequencies, one of them being 13 kHz, which is in the audio range. Furthermore, overtones are generally not at perfect integer multiples of the fundamental. For example, there could be overtones at 26.02 and 39.03 kHz. These could mix in the hearing system to produce something at 13.01 kHz, and this would be heard as a note very close to the 13 kHz of the fundamental. In fact, because of the slight frequency shift between 13.01 and 13 kHz it would produce an audio beat pattern showing up at 10 Hz. A perfect CD recording system would eliminate the 26.02 and 39.03 kHz overtones, which means that the playback would be missing the 13.01 kHz component that would be heard by a listener of the original performance.
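This mixing mechanism is easy to demonstrate numerically. Here is a sketch, with the nonlinearity (a small squared term) and its strength chosen purely for illustration:

```python
import numpy as np

fs = 200000                                    # simulation rate, above all tones
t = np.arange(0, 1.0, 1.0 / fs)
ultrasonic = (np.sin(2 * np.pi * 26020 * t)
              + np.sin(2 * np.pi * 39030 * t))  # both inaudible overtones

# a mildly nonlinear "ear": y = x + 0.1 * x^2
response = ultrasonic + 0.1 * ultrasonic**2

spec = np.abs(np.fft.rfft(response)) / len(t)
# the squared term contains 0.1*cos(2*pi*(39030 - 26020)*t), i.e. a
# difference tone at 13010 Hz, squarely inside the audible band
print(round(2 * spec[13010], 3))   # 0.1
```

Neither input tone is audible, yet the nonlinearity produces a 13.01 kHz component of amplitude 0.1 out of nothing but the ultrasonic content.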

This post is focused mainly on CD recording and playback technology in order to respond to some other comments. However, much of it applies to digital photography (both hybrid and direct digital) as well. I won't discuss the analogies right now because this post is already rather long, but trust me, there are many analogies between digital audio recording technologies and digital image capture.
 
OP

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,184
Format
Multi Format
Can you define what you mean by 'sampling rate' as it applies to scanners or digital sensors? I'm having a hard time transferring what I learned about signals and systems in engineering school some 30 ya to this topic.
Sorry, I forgot to address your question. The sampling rate in an image relates to the spatial distance between the sampling points, just as the sampling rate in audio recording relates to the time between the sampling points. People in the imaging business often refer to things like "spatial frequency" in analogy to "frequency" in audio recording.
 
OP

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,184
Format
Multi Format
By the way, one implication of what is evident in my original post is that a really good imaging system would produce a recorded image with the points spaced twice as close as the sampled points, i.e. four times as many points as sampling points when accounting for the fact that the signal is a two-dimensional image. If the signal is processed correctly (e.g. run through a Whittaker–Shannon filter prior to resampling at a lower rate) it will eliminate the beat patterns I have shown.

This is apart from things like the anti-aliasing filter applied prior to digitization. I am assuming that a perfect anti-aliasing filter is present. Things can be worse if the anti-aliasing filter is not there.
 

Nodda Duma

Subscriber
Joined
Jan 22, 2013
Messages
2,686
Location
Batesville, Arkansas
Format
Multi Format
What you described in the OP is aliasing which, for an optical system, means you are optically resolution limited.

You want the system to be sensor resolution limited for this specific reason (and others) whenever possible.

While there are analogies between a 1-dimensional linear system and 2-dimensional linear systems, this is one detail where it is a poor comparison. You need to understand why Nyquist is different for spatial signals than for temporal signals (and why your analogy is a poor one in this case...ie where the comparison is breaking down).

-Jason

Edit to add: Refer to “Fourier Optics” by Goodman and “Electro-Optical Imaging System Performance” by Holst
 
Last edited:

wombat2go

Member
Joined
Jul 21, 2013
Messages
352
Location
Michigan
Format
Medium Format
For the 6X7, 6X9 medium format C41 negs, I usually scan to an image about 4500 pixels wide.
There are considerations other than aliasing, for example monitor specs, file bit depth, disc space and processing time.
I make 16 bit tiffs and view them on the Eizo here which can display 10 bit depth/ channel.

Does 4500 pixels on 120 color film put the Nyquist frequency high enough?
To help answer that, here is the "go to" inverse Gaussian I select initially for trial optical de-convolution.
( Not much to see, it is like looking down on a tornado.)
https://app.box.com/s/k40r7z2wfrt3phw8s8gryp3dlxhdqnl4

The scan frequency is at the edges of the square.
Note that all the action to deconvolve the lens etc. is, usually, at less than a quarter of the scan frequency.

The operations are done in floating point to preserve fidelity, during the conversions to frequency domain and back,
and during the math operations in frequency domain.

The inverse Gaussians are 8000 pixels square and the tiff format is 32-bit floating point, about 120 MB each.
They were taking too long to generate, so I have pre-generated a range of them from sigma 50 to 72
with a script running overnight, saved on a USB.
Now I call one up each time for a de-convolution trial on each neg.

Lens de-convolution in frequency domain is effectively "treble boost" following the tornado shaped curve.
Along with selecting Sigma,
I can adjust the de-convolution term's amplitude ( height of the tornado == peak of treble boost) to get the best looking image.


----------------------------

Some references to add to Nodda's:

John B Williams "Image Clarity -High Resolution Photography" Published 1990 (before digital photography and FFT methods were common.)

Holst and Lomheim "CMOS/CCD Sensors and Camera Systems" Published 2011

R.L. Easton "Fourier Methods in Imaging" Published 2010
 
OP

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,184
Format
Multi Format
What you described in the OP is aliasing which, for an optical system, means you are optically resolution limited.

You want the system to be sensor resolution limited for this specific reason (and others) whenever possible.

While there are analogies between a 1-dimensional linear system and 2-dimensional linear systems, this is one detail where it is a poor comparison. You need to understand why Nyquist is different for spatial signals than for temporal signals (and why your analogy is a poor one in this case...ie where the comparison is breaking down).

-Jason

Edit to add: Refer to “Fourier Optics” by Goodman and “Electro-Optical Imaging System Performance” by Holst
Actually, what I describe is not aliasing. The signal was a pure sinusoid sampled at a sampling rate higher than the Nyquist limit. There is no possibility of aliasing in that case, since there were no frequency components above the Nyquist limit.

What is being illustrated in my first post is the fact that one cannot simply take the sampled function and accept that as an accurate representation of the original signal. One must apply a correct transformation in order to do an error-free reconstruction of the original signal. One way to do that is by applying the Whittaker–Shannon interpolation formula. This can reconstruct the original signal without error. Note, however, that there is no way to reconstruct an error-free original signal while still retaining the same sampling points.

For example, if one uses the Whittaker–Shannon interpolation formula to reconstruct the original signal and then samples again at the same sampling points, one will obtain exactly the same result as the original sampling, a signal with beat patterns in this case. However, if one uses the Whittaker–Shannon interpolation formula and resamples on a denser set of sampling points one can get a better representation of the original signal. For example, doubling the number of sampling points would eliminate the beat pattern in the digitized signal. (I'm talking about doubling the number of points in the reconstructed signal, after the digitized signal has passed through an appropriate reconstruction filter, not doubling the number of sampling points in the original digitization process, although that would also work.)
 
Last edited:
OP

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,184
Format
Multi Format
There is an additional subtlety that is often missed. There are actually two ways to consider the signal in an optical system. One is to consider the signal in terms of the amplitude of the electric vectors that define the motion of light through an optical system. (For this discussion we don't need to consider the additional complication of the magnetic vectors.) The other is to consider the magnitude of the signal. The magnitude is basically the absolute value of the electric vectors.

When light travels through an optical system it produces a diffraction pattern. When seen in the amplitude picture the diffraction pattern can have both positive and negative parts. When seen in the magnitude picture there are no negative parts, only positive parts, because anything that was negative in the amplitude picture becomes positive when the absolute value is taken.

Detectors such as film or digital image sensors operate in the magnitude mode, so they never see negative amounts of light. The lowest they can go is zero. (Negative values can be detected in other types of experiments that are sensitive to phase differences, such as experiments that measure diffraction effects.)

This has some effect on sampling of a digital image. If we think in terms of one dimensional images (e.g. a diffraction pattern from a grid) then in the amplitude picture there could be a repetitive pattern with positive and negative values at a period of, let us say, P, but in the magnitude picture (such as if a photographic plate is used) there will be a repetitive pattern at a period of P/2, or in other words the lines are twice as close together as in the amplitude picture. Note, to repeat a point made earlier, we normally don't see things in the amplitude picture, only in the magnitude picture.

So, if we could see things in the amplitude picture (which we normally can't do) it would be, in some sense, a closer analogy to an audio signal, which is digitized in an amplitude picture (which by the way, preserves phase information). Nevertheless, even if we can only see an image in the magnitude mode, virtually all of the results from sampling theory still apply. It's just that the signals we see are only positive. (By the way, this is reflected in my original figures, which intentionally have no negative values for this very reason.)
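The period halving under the magnitude picture is easy to verify numerically. A small sketch (the 4-cycle cosine is an arbitrary illustrative choice):

```python
import numpy as np

x = np.linspace(0, 1, 1000, endpoint=False)
amplitude = np.cos(2 * np.pi * 4 * x)   # amplitude picture: 4 cycles (period P)
magnitude = np.abs(amplitude)           # magnitude picture: what a detector sees

spec = np.abs(np.fft.rfft(magnitude))
# the dominant non-DC component of |cos| is at 8 cycles, i.e. period P/2
print(np.argmax(spec[1:]) + 1)          # 8
```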
 
Last edited:
OP

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,184
Format
Multi Format
Commenting further on the distinction between audio signals and images, the main difference (other than phase issues) is that audio signals are one-dimensional, whereas images are two-dimensional. This implies, for example, that one would use a one-dimensional Fourier transform when processing a one-dimensional signal but a two-dimensional Fourier transform when processing an image. An exception is an image consisting of a series of parallel lines; in that case a one-dimensional FT is all one needs.
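That exception can be illustrated numerically. In this sketch (the 64x64 image of vertical parallel lines, built from a cosine of 8 cycles, is my own hypothetical example), the 2-D FFT of such an image collapses onto a single row, so a 1-D transform of one row carries all the information:

```python
import numpy as np

# Hypothetical 64x64 test image of vertical parallel lines: the image
# varies along x but is constant along y.
x = np.arange(64)
image = np.cos(2 * np.pi * 8 * x / 64)[np.newaxis, :] * np.ones((64, 1))

F = np.fft.fft2(image)
# Because the image is constant along y, all of its 2-D frequency
# content sits in the zero-frequency row of the y axis: every other
# row of F is (numerically) zero.
print(np.allclose(F[1:, :], 0.0))                    # True
# So a 1-D transform of a single row carries the same information
# (F[0] is just the 1-D FFT of a row, scaled by the 64 summed rows).
print(np.allclose(np.fft.fft(image[0]), F[0] / 64))  # True
```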
 

Nodda Duma

Subscriber
Joined
Jan 22, 2013
Messages
2,686
Location
Batesville, Arkansas
Format
Multi Format
Actually, what I describe is not aliasing. The signal was a pure sinusoid sampled at a sampling rate higher than the Nyquist limit. There is no possibility of aliasing in that case, since there were no frequency components above the Nyquist limit.

What is being illustrated in my first post is the fact that one cannot simply take the sampled function and accept that as an accurate representation of the original signal. One must apply a correct transformation in order to do an error-free reconstruction of the original signal. One way to do that is by applying the Whittaker–Shannon interpolation formula. This can reconstruct the original signal without error. Note, however, that there is no way to reconstruct an error-free original signal while still retaining the same sampling points.

For example, if one uses the Whittaker–Shannon interpolation formula to reconstruct the original signal and then samples again at the same sampling points, one will obtain exactly the same result as the original sampling: in this case, a signal with beat patterns. However, if one uses the Whittaker–Shannon interpolation formula and resamples on a denser set of sampling points, one can get a better representation of the original signal. For example, doubling the number of sampling points would eliminate the beat pattern in the digitized signal. (I'm talking about doubling the number of points in the reconstructed signal, after the digitized signal has passed through an appropriate reconstruction filter, not doubling the number of sampling points in the original digitization process, although that would also work.)
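Here is a minimal sketch of the Whittaker–Shannon formula in Python/NumPy. The specific signal (a unit cosine of frequency 4 sampled at 1.1x the Nyquist rate) and the finite 2000-sample record are my illustrative assumptions; a finite sum only approximates the infinite series, so accuracy is best away from the ends of the record:

```python
import numpy as np

f0 = 4.0
fs = 1.1 * 2 * f0                  # 10% above the Nyquist rate 2*f0
T = 1.0 / fs                       # sampling interval
samples = np.cos(2 * np.pi * f0 * np.arange(2000) * T)

def ws_reconstruct(t, samples, T):
    """Whittaker-Shannon interpolation: sinc-weighted sum of the samples."""
    k = np.arange(len(samples))
    return np.sum(samples * np.sinc((t - k * T) / T))

# Re-sampling at an original sample point returns that sample exactly,
# beat pattern and all...
print(np.isclose(ws_reconstruct(50 * T, samples, T), samples[50]))   # True
# ...while evaluating between samples (away from the record's ends)
# recovers the underlying cosine, with no beat.
t = 1000.5 * T
print(np.isclose(ws_reconstruct(t, samples, T),
                 np.cos(2 * np.pi * f0 * t), atol=0.05))             # True
```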

I propose that, by definition, the Nyquist frequency is the frequency that you are sampling at. You are in essence trying to say that you are sampling at a sampling rate higher than the sampling rate of the linear system, which is nonsensical. So you need to be careful with your definitions.

Again, I point to the references I listed. Online forums are poor platforms for getting into the type of discussion this topic deserves. :smile:
 
OP

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,184
Format
Multi Format
I propose that, by definition, the Nyquist frequency is the frequency that you are sampling at. You are in essence trying to say that you are sampling at a sampling rate higher than the sampling rate of the linear system, which is nonsensical. So you need to be careful with your definitions.

Again, I point to the references I listed. Online forums are poor platforms for getting into the type of discussion this topic deserves. :smile:
Count the antinodes in the top figure of my first post. (There are 41.) Now count the number of sampling points in the bottom figure. (There are 45.) The Nyquist limit corresponds to 41 sampling points, the same as the number of antinodes in the top figure. Since the bottom figure contained 45 sampling points, the sampling rate is faster than the Nyquist limit (by 10%), so there is no aliasing.

You can easily reproduce the results I got in my calculation. The period of the top curve is 0.25 units, so the Nyquist rate corresponds to sampling every 0.125 units. If you want to reproduce my figure exactly, sample the curve 10% faster than the Nyquist rate. If you do that you will see the same beat pattern that I got. If you need any more help in reproducing my calculation I will be happy to supply any information you want.
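For anyone who wants a shortcut, a few lines of Python/NumPy reproduce the effect (this is my own sketch of the recipe above; the 10-unit record length is arbitrary):

```python
import numpy as np

# Unit-amplitude cosine, period 0.25 units, sampled at 1.1x the
# Nyquist rate over 0..10 units (assumed parameters from the recipe).
period = 0.25
f = 1.0 / period                  # 4 cycles per unit
fs = 1.1 * (2 * f)                # 8.8 samples per unit, 10% above Nyquist
n = np.arange(int(10 * fs))
samples = np.cos(2 * np.pi * f * n / fs)

# Beat pattern: near n = 0 the samples reach the true peak value of 1,
# but a few samples later (near a beat node) they never exceed ~0.45,
# even though the underlying cosine still swings between -1 and 1.
print(abs(samples[0]))                    # 1.0
print(np.abs(samples[4:8]).max() < 0.5)   # True: samples miss the peaks here
```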

By the way, I am quite familiar with the Nyquist limit and Fourier transforms. In fact, I published several scientific papers in which Fourier transforms played a key role, and I was involved in the design of a couple of Fourier transform mass spectrometers.
 

Adrian Bacon

Member
Joined
Oct 18, 2016
Messages
2,086
Location
Petaluma, CA.
Format
Multi Format
The bandwidth of CDs was set using the Nyquist theorem at 44.1 kHz, on the thinking that even the best human hearing can't go above 20 kHz (though 16-18 kHz would probably have been more accurate, as mostly only small children can hear up to 20 kHz, maybe 22 kHz). In the real world, however, even adults with less than stellar hearing have been shown to distinguish between 44.1 kHz and 88.2 kHz sampling rates in double-blind studies (though not all). I've participated in one such study myself, and could easily hear the difference, even though my hearing tops out at around 15 kHz (to be fair, I am a musician). For lack of a better word, there's more sparkle or air in the 88.2 kHz sampled audio tracks. Today, there are consumer-level products capable of sample rates up to 192 kHz for this reason (though that might be overkill).

I will say that, like many things in life, your ability to hear detail in sound is less dependent on your actual ability to hear and more tied to your brain's ability to process what it hears. This is why so many of the best audio producers and mixers in the world are middle-aged: despite their lack of raw hearing ability, they have spent a lifetime training their brains to understand what they hear, which gives them an edge over the average person, even one with technically better hearing. It's like how professional baseball players can see the rotation of a fastball, where to the rest of us it's just a white blur.

In audio land, a higher sample rate also provides superior stereo spatial imaging over a lower one. We might not be able to hear the actual frequencies above 16-18 kHz, but we have two ears far enough apart that we can tell where stuff is in a stereo sound stage. Granted, not all music is going to have a lot of truly stereo content, but the reason a live recording of a symphony sounds better at higher sampling rates has more to do with a more precise stereo image than with actually capturing higher frequencies.
 

Nodda Duma

Subscriber
Joined
Jan 22, 2013
Messages
2,686
Location
Batesville, Arkansas
Format
Multi Format
Alan, I’m quite familiar with this stuff as well...decades’ worth of optical and imaging system design familiar. The Nyquist frequency corresponds to the rate at which you sample. My contention is that once you expand the system to include the second sampler, your Nyquist frequency has changed, and you are not acknowledging that.

You’re artificially declaring Nyquist to be a value based on only part of the full linear system. It would be like inferring a system MTF based only on the MTF of the detector. No, you’ve only identified the detector MTF.

You really are showing an example of what even imaging system designers view as aliasing (and I do understand the technical definition), whether you acknowledge it or not. Or sub-sampling, as some people like to think of it.

I get where you’re going, though...This is a discussion of scanning at a higher resolution than film grain. In single point detection systems (or arrays with low fill factor), the rule of thumb is to design the optics to have a PSF ~1.3 times the detector size or pitch to keep the signal modulation of point sources to a controllable amount. So, in those systems the Nyquist is established to be at least 1.3x of the highest expected spatial frequency, then the signal/image processing (or point source tracking) algorithms are designed based on that valid assumption...i.e. don’t have to accommodate significant aliasing downstream.

Personally I scan at 3600 dpi to produce 600 dpi images (so 6x my final image resolution). About 15 years ago I did some calculations showing that at that scanning resolution the sampling-related maximum contrast modulation would be an acceptably undetectable amount (I don't recall the actual number, maybe 5-10%) when reducing the image resolution to 600 dpi or less. That factors in realistic assumptions about the reduced contrast when you're down the MTF curve of a real lens, which you should also be considering in your example.
 
Last edited:
OP
OP

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,184
Format
Multi Format
Alan, I’m quite familiar with this stuff as well...decades’ worth of optical and imaging system design familiar. The Nyquist frequency corresponds to the rate at which you sample. My contention is that once you expand the system to include the second sampler, your Nyquist frequency has changed, and you are not acknowledging that.

You’re artificially declaring Nyquist to be a value based on only part of the full linear system. It would be like inferring a system MTF based only on the MTF of the detector. No, you’ve only identified the detector MTF.

You really are showing an example of aliasing whether you acknowledge it or not. That’s classic aliasing. Or sub-sampling, some people like to think of it like that.

I get where you’re going, though...This is a discussion of scanning at a higher resolution than film grain. In single point detection systems (or arrays with low fill factor), the rule of thumb is to design the optics to have a PSF ~1.3 times the detector size or pitch to keep the signal modulation of point sources to a controllable amount. So, in those systems the Nyquist is established to be at least 1.3x of the highest expected spatial frequency, then the signal/image processing (or point source tracking) algorithms are designed based on that valid assumption...i.e. don’t have to accommodate significant aliasing downstream.

Personally I scan at 3600 dpi to produce 600 dpi images (so 6x my final image resolution). About 15 years ago I did some calculations showing that at that scanning resolution the sampling-related maximum contrast modulation would be an acceptably undetectable amount (I don't recall the actual number, maybe 5-10%) when reducing the image resolution to 600 dpi or less. That factors in realistic assumptions about the reduced contrast when you're down the MTF curve of a real lens, which you should also be considering in your example.
Let's focus for a minute on the definition of the Nyquist limit, and assume for simplicity that we are dealing with a simple sinusoidal signal with a frequency of F. The Nyquist limit is, by definition, sampling the signal at a rate of 2F. Any sampling rate greater than 2F is not undersampled, since, by definition, undersampling only occurs when the sampling rate is less than 2F.

In the example I gave in my first post the frequency (spatial frequency in this case) is 4/x_units. (The period of the waveform is 0.25 x_units, and since the frequency is 1 over the period, the frequency is 4/x_units.) Any sampling rate that exceeds 2*4/x_units satisfies the Nyquist sampling theorem for the function in the first figure. The sampling rate I used in the second figure was approximately 1.1*2*4/x_units. Since 1.1*2*4 is greater than 2*4, the example in the second figure satisfies the Nyquist theorem and is not undersampled.

There is nothing fancy here. I simply sampled the waveform at a frequency that is a little greater than the Nyquist limit and showed by example that you can't necessarily take the sampled result as a good representation of the original waveform. The reason is that using the sampled waveform directly as the representation of the original function does not satisfy the mathematical criteria for recreating the original waveform from the sampled waveform. (Of course, if you oversample to a large degree then the errors are small in assuming that the sampled waveform is a good representation of the original waveform, but this is not true if the sampling rate exceeds the Nyquist limit by only a little bit. In that case one must apply a mathematically correct method to reconstruct the original waveform.)
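That last point can be checked numerically. In this sketch (my own assumptions: a unit cosine of frequency 4, with linear interpolation between samples standing in for "taking the sampled waveform directly" as the reconstruction), the naive picture is badly wrong just above the Nyquist rate but already close to the truth with heavy oversampling:

```python
import numpy as np

f = 4.0  # assumed signal frequency, cycles per unit

def naive_error(fs, length=10.0):
    """Max deviation of the naively reconstructed signal (raw samples
    joined by straight lines) from the true cosine."""
    xs = np.arange(0.0, length, 1.0 / fs)
    dense = np.linspace(0.0, xs[-1], 20000)
    naive = np.interp(dense, xs, np.cos(2 * np.pi * f * xs))
    true = np.cos(2 * np.pi * f * dense)
    return np.abs(naive - true).max()

# Slightly above Nyquist the naive reconstruction misses the peaks near
# the beat nodes almost entirely; at 10x Nyquist it is already close.
print(naive_error(1.1 * 2 * f) > 0.5)    # True
print(naive_error(10 * 2 * f) < 0.05)    # True
```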

In the interest of collegial discussion let me strongly encourage you to do your own calculation: sample a simple sinusoid at a rate slightly greater than the Nyquist limit, i.e. slightly greater than 2F. When you plot the results I guarantee that you will see a beat pattern.
 