
Nyquist double sample rate rule

The theorem proves that to sample a frequency f, you must sample at a rate greater than 2f.
Okay, but I wonder what happens as you approach the limit frequency: I cannot see how two samples per period could faithfully reconstruct the wave. The samples could fall away from the peaks. How can the D/A converter reconstruct the correct amplitude/shape if the samples are not at the peaks and valleys?
Or are there really issues with faithfully reconstructing frequencies and amplitudes near the limit?

Cheers

Re: Nyquist double sample rate rule

Reply #1
Depends how near. I'm not very good with the theory, so I can't really explain it properly, but in practice 44.1 kHz doesn't cause problems for anything up to at least 20 kHz, in pretty much any modern DAC.
And it's correct that a wave at exactly the limit frequency fs/2 will be reconstructed with the wrong amplitude unless it's aligned to have its peaks on the samples. (Or, if there's any kind of upsampling before conversion, that frequency is usually gone after filtering.)
a fan of AutoEq + Meier Crossfeed

Re: Nyquist double sample rate rule

Reply #2
It seems this was discussed by uncle Arnie and others some years ago, but I didn't reach a conclusion. It seems there are indeed some problems with amplitude reconstruction when the signal frequency approaches half the sample rate (nearing fs/2, i.e. only about two samples per wave period).

https://hydrogenaud.io/index.php?topic=93588.msg895476#msg895476

Re: Nyquist double sample rate rule

Reply #3
A very interesting question which I also wondered about some time ago.
I wonder what happens as you approach the limit frequency: I cannot see how two samples per period could faithfully reconstruct the wave. The samples could fall away from the peaks.
From a mathematical (or information-theoretical) perspective, it's absolutely irrelevant whether you do or don't hit the waveform peaks during the sampling process. The problem "only" lies in the upsampling and/or D/A conversion during playback.

Using Audition, I created the attached example (48-kHz WAV file). Load that into the audio software of your choice. The first part (0-1 sec.) contains an accurately generated 11990-Hz sine wave. You'll notice, when zooming in to the individual samples, that the waveform samples follow a joint high-frequency (11990 Hz) and low-frequency (10 Hz) pattern. Downsampling that file by two to 24 kHz, by taking only every second sample, will preserve the sine wave. Now, if you upsample that result back to 48 kHz without interpolation filtering (by just adding a zero sample after every sample value), you get the waveform in the second part (1-2 sec.) of the file. You'll notice the obvious 10-Hz amplitude modulation, and if you do a fine spectral analysis of that part, you'll see that the aliasing introduced by the simple upsampling is a sine tone at 12010 Hz. The problem now is to design, and implement, an upsampling anti-aliasing filter steep enough around the Nyquist frequency (12000 Hz) that it will a) fully preserve the 11990-Hz tone and b) fully remove the 12010-Hz tone. For that you'd need a very long 12-kHz lowpass filter, and I haven't seen such a filter to date (it would also cause quite some temporal ringing on transients, I would guess).
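For anyone who wants to reproduce this without Audition, here is a rough numpy sketch of the same experiment (my own re-creation of the steps above, not the attached file):
Code: [Select]
import numpy as np

fs = 48000
t = np.arange(fs) / fs                  # 1 second at 48 kHz
x = np.sin(2 * np.pi * 11990 * t)       # accurately generated 11990 Hz sine

x24 = x[::2]                            # downsample by two: keep every second sample

x48 = np.zeros(fs)                      # upsample back with NO interpolation filter:
x48[::2] = x24                          # just a zero after every sample value

spec = np.abs(np.fft.rfft(x48))         # fine spectral analysis (1 Hz bins)
freqs = np.fft.rfftfreq(fs, 1 / fs)
print(freqs[np.argsort(spec)[-2:]])     # two equal tones: 11990 Hz and 12010 Hz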

If you try to downsample a 12000-Hz sine wave, sampled at 48 kHz, to 24 kHz, then yes, the sine phase matters, since you might "accidentally" sample the waveform at the zero crossings. That's, I guess, why the theorem says that "you must sample at a frequency greater than 2f."

Chris
If I don't reply to your reply, it means I agree with you.

Re: Nyquist double sample rate rule

Reply #4
Emphasis is mine here:
I wonder what happens as you approach the limit frequency: I cannot see how two samples per period could faithfully reconstruct the wave. The samples could fall away from the peaks. How can the D/A converter reconstruct the correct amplitude/shape if the samples are not at the peaks and valleys?
Or are there really issues with faithfully reconstructing frequencies and amplitudes near the limit?

The following is imprecise, but conveys what I have a hunch you are asking about.
You sample at rate s. That is fixed through everything that follows.

* Suppose first that you have an infinitely long sine wave at a fixed frequency f<s/2.
Yeah, sure, a sample might "miss". But sooner or later - i.e. given enough peaks and troughs, that is, a long enough duration - there will be enough samples for the sampled signal to pin down the original signal. And since by assumption we have infinite length, in the end it is OK. (If it is finite length, you will get an error at the end. Let's say that is a different question, OK?)

* "enough samples" depends on f. Choose a larger f -> need more. And as f->s/2 from below, it approaches infinity. AT f=s/2 ... the sampling could be so out of phase as to pick up zero at each and every of the infinitely many times you sample, in which case: no cigar.

But for whatever f < s/2, a sufficiently long signal suffices. That leaves: how long is "sufficiently long"? Since you need a bit of length to get the original signal right in the first place, we can ask whether "a signal long enough to be interesting" is long enough.
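If you want to see those two bullets numerically, here is a tiny numpy sketch (the numbers are mine, purely for illustration):
Code: [Select]
import numpy as np

s = 8000                         # sample rate, arbitrary for the demo
n = np.arange(16)

# f = s/2 with worst-case phase: every sample lands on a zero crossing,
# so the sampled sequence is (numerically) all zeros - no cigar.
print(np.round(np.sin(2 * np.pi * (s / 2) * n / s), 12))

# f slightly below s/2: the sampling grid now drifts through the cycle,
# so given enough samples the full amplitude of 1.0 is revealed; the
# closer f gets to s/2, the more samples that takes.
f = 0.499 * s
m = np.arange(4000)
print(np.max(np.abs(np.sin(2 * np.pi * f * m / s))))    # -> ~1.0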

Behind all this there is a "bigger picture"; I won't call it a bigger "issue", because it is kinda tried and tested, and in reality there is nothing that points towards it being a practical issue "in the wrong direction" - rather it would ensure overkill for practical purposes. The issue or non-issue is that "pure tone" audiometry - one continuous long sine wave signal (long enough that we think "infinite number of samples" is adequate) - is kinda the gold standard for measuring hearing. If the beginning of the signal were crucial to how the human ear perceived the whole long signal, then "everything" would have been wrong. But that does not sound likely given the physics of resonance (which comes into play in your ear) - and (am I right on this, anyone?) the absence of experimental evidence that it matters. At least "matters the wrong way", I mean.


Then what Chris points at is a practical issue: once you have frequencies above s/2, they behave badly. So you have one problem with one solution (getting everything below s/2 right) and another problem with another solution (getting rid of everything above, which won't come out right).

Re: Nyquist double sample rate rule

Reply #5
Okay, but I wonder what happens as you approach the limit frequency: I cannot see how two samples per period could faithfully reconstruct the wave. The samples could fall away from the peaks.

Samples landing at the same point every cycle is what happens at exactly double, which is why the sampling theorem says greater than double.  If you are 0.00001 Hz above double the highest frequency, then if you hit the zero crossing on this cycle, you will hit somewhere else on the next cycle, then somewhere new on the third, and so on.  Hence, no problem.
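A quick way to see that drift (a numpy sketch with made-up numbers - track where in the tone's cycle each sample lands):
Code: [Select]
import numpy as np

f = 1000.0                           # tone frequency, illustrative only

for fs in (2 * f, 2 * f + 1):        # exactly double vs. 1 Hz above double
    n = np.arange(10)
    phase = (n * f / fs) % 1.0       # position within the cycle, 0..1
    print(fs, np.round(phase, 4))    # stuck at {0, 0.5} vs. slowly drifting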

Re: Nyquist double sample rate rule

Reply #6
I believe the sampling theorem says that the sampling rate must be at least twice the highest-frequency component of the analogue signal -
or, in other words, that the minimum sampling rate is twice the highest-frequency component of the analogue signal.

e.g. https://www.sciencedirect.com/topics/computer-science/shannon-sampling-theorem

"The sampling theorem guarantees that an analog signal can be in theory perfectly recovered as long as the sampling rate is at least twice of the highest-frequency component of the analog signal to be sampled. The condition is described as fs ≥ 2fmax" (my bold)

or http://www-users.math.umn.edu/~lerman/math5467/shannon_aliasing.pdf

"In other words, the theorem says that if an absolutely integrable function contains no frequencies higher than B hertz, then it is completely determined by its samples at a uniform grid spaced at distances 1/(2B) apart via formula (1)."

(Although the occasional textbook says greater than - e.g. one of my texts from the '60s. Another says "Any band-limited signal of bandwidth B Hz can be completely characterised by any 2B independent samples per second." Take your pick!)

But you need to look at Shannon's proof: e.g. look at the maths in https://ptolemy.berkeley.edu/eecs20/week13/nyquistShannon.html
It's been too many decades for me, but we can see a function of time with t between plus/minus infinity, and we all seem to forget* that it takes time to sample a signal - in fact, an infinite time to sample it perfectly. Of course we never have that long, so statements such as "perfectly recovered" and "completely characterised" are not as true in the real world as we might like to think.

Let's call the minimum required sampling rate fmin and the actual sampling rate fs.

It seems to me that as we reduce (fs – fmin), the required sampling time (for any meaningful result) grows, and that as (fs – fmin) approaches zero, this required sampling time approaches infinity.
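A rough numerical check of that trend (a numpy sketch; the truncated Whittaker-Shannon sum below mixes "finite record" and "truncated filter" effects, but the trend is the point):
Code: [Select]
import numpy as np

def peak_error(f, n_samples, fs=1.0):
    # amplitude error when a finite record of a unit sine at frequency f
    # (as a fraction of fs) is rebuilt with a truncated sinc sum
    n = np.arange(-n_samples // 2, n_samples // 2)
    x = np.sin(2 * np.pi * f * n / fs + 0.3)        # 0.3: arbitrary phase
    t = np.linspace(-2, 2, 2001)                    # fine grid mid-record
    y = np.array([np.sum(x * np.sinc(fs * ti - n)) for ti in t])
    return abs(np.max(np.abs(y)) - 1.0)

# error shrinks with record length, and ever more slowly as f -> fs/2
for f in (0.25, 0.45, 0.49):                        # fractions of fs
    print(f, [round(peak_error(f, N), 4) for N in (64, 256, 1024)])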

My conclusion: once fs = fmin there is no meaningful result in the real world.



*Not Porcus!
Cheers,
Alan

Re: Nyquist double sample rate rule

Reply #7
Samples landing at the same point every cycle is what happens at exactly double, which is why the sampling theorem says greater than double.  If you are 0.00001 Hz above double the highest frequency, then if you hit the zero crossing on this cycle, you will hit somewhere else on the next cycle, then somewhere new on the third, and so on.  Hence, no problem.

How is this not a problem? There may be many cycles with inaccurate sample points, including some zero points.

The following is imprecise, but conveys what I have a hunch you are asking about.
You sample at rate s. That is fixed through everything that follows.

* Suppose first that you have an infinitely long sine wave at a fixed frequency f<s/2.
Yeah, sure, a sample might "miss". But sooner or later - i.e. given enough peaks and troughs, that is, a long enough duration - there will be enough samples for the sampled signal to pin down the original signal. And since by assumption we have infinite length, in the end it is OK. (If it is finite length, you will get an error at the end. Let's say that is a different question, OK?)

Dunno if I quite get it. But real-world signals (i.e. music) recorded digitally are not infinitely long, nor uniform enough to 'know the original signal' after a while.

Say you have 44.1 kHz sampling and a 20 kHz max-frequency signal (not amplitude-uniform). You are sampling at only about 110% of the minimum rate, so each period gets about 2.2 samples (shifting). The samples will more often than not fall outside the peaks.
Frequency-wise it's OK, but the amplitude seems not to be accurate.

Thanks for all the answers, each one highlights important aspects of the subject.

Re: Nyquist double sample rate rule

Reply #8
Dunno if I quite get it. But real-world signals (i.e. music) recorded digitally are not infinitely long, nor uniform enough to 'know the original signal' after a while.
If a signal doesn't have frequencies above some frequency f, then it is infinitely long in time - a strictly band-limited signal cannot be time-limited. The Nyquist theorem cannot be directly applied to finite signals.

How is this not a problem? There may be many cycles with inaccurate sample points, including some zero points.
Not a problem, because you need an infinite number of samples to reconstruct a signal anyway. (Yes, real-world ADCs/DACs cannot do this, so they cannot perfectly capture/reconstruct a signal. But they can be close enough.)
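To make the "infinite number of samples" point concrete (standard textbook material, nothing specific to this thread): the ideal reconstruction is the Whittaker-Shannon interpolation formula

x(t) = sum over all integers n of x[n] * sinc((t - n*T) / T),  with T = 1/fs and sinc(u) = sin(pi*u)/(pi*u)

The sum runs over every sample from minus infinity to plus infinity; a real DAC can only apply a finite filter, i.e. truncate that sum, which is exactly why "close enough" is the best it can do.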

Say you have 44.1 kHz sampling and a 20 kHz max-frequency signal (not amplitude-uniform). You are sampling at only about 110% of the minimum rate, so each period gets about 2.2 samples (shifting). The samples will more often than not fall outside the peaks.
Frequency-wise it's OK, but the amplitude seems not to be accurate.
Nope. https://www.youtube.com/watch?v=cIQ9IXSUzuM -- from 6:40 till 7:27

Re: Nyquist double sample rate rule

Reply #9
Nope. https://www.youtube.com/watch?v=cIQ9IXSUzuM -- from 6:40 till 7:27

Good video. I reckon if you draw anything else between two samples you end up with higher frequencies.
It also seems reasonable that for any two samples there's just one solution wave that passes exactly through them, given that the original signal is perfectly band-limited. Some minimal error might occur, but much less than I was thinking.
If the original signal has varying amplitudes between periods, the sample points would fall in slightly different places, right? (Compared to a uniform sine wave.)
I hope DACs are this good :)


Re: Nyquist double sample rate rule

Reply #10
Dunno if I quite get it. But real-world signals (i.e. music) recorded digitally are not infinitely long, nor uniform enough to 'know the original signal' after a while.
It is absolutely correct that the mathematics behind it refer to an idealized situation; "spherical cow in a vacuum" is a joke with something to it.
But then, who the hell wants to listen to an eternal sine tone in the first place? In the lab, you are exposed to a limited-duration beep. That is how your hearing is measured, and that is a first yardstick for "how good is good enough?". I mean, if you use a sample rate of 200 Hz for music, it is going to suck. If we use a sample rate so high that measurements (measuring humans!) indicate there is nothing to worry about, then that is proof of the pudding that it is high enough that the imperfections don't matter to the human ear. If "sampling at 40 kHz, capturing 20 kHz minus a little bit" were not enough, we could go to "44.1 kHz, capturing 22.05 kHz minus a little bit", and then all your issues about the 20 kHz tone are solved: the problem is moved upwards, and sooner or later it will be moved into the audibly irrelevant range.

If you take a real-world signal with overtones from here to cloud nine, it will come out band-limited, or mathematically speaking "wrong" - there is no point in even attempting to deny that. What we do have is a theory that enables us to say: if you cannot hear a signal above this frequency, then you shouldn't have to worry about the signal below it, as long as we made sure the error stays above.

Quote
Say you have 44.1 kHz sampling and a 20 kHz max-frequency signal (not amplitude-uniform)
Ah, but changes in the signal are not really captured by the simple theory - yet small and slow changes in the signal are not immediately captured by the ear either. So if you change the signal and the ADC/DAC cannot capture the change immediately (like, within a twentieth of a millisecond), that is fine with your ear, as long as your ear also takes time to register it.

If this were an actual problem, you would see lots of strange audiometric results from the lab. Take an old-fashioned noise box where a person moves a lever up for volume. If the perception of the "full-volume" 20 kHz tone were crucially dependent on how smoothly the person increases the volume over half a second, it would have been picked up. Or rather, replace "20" by "26" and it is clearer: nobody hears that frequency except at earsplitting levels. (Yes, they have found ears that detect that high, but at levels you are not exposed to in anything you want to listen to - that research has been focused on damage, not on sound quality.)
"Nobody hears it" is a very nice special case of "how you push the lever is irrelevant". If every pair of ears hears the same thing, then the differences are inaudible; and if every pair of ears hears absolutely nothing of it, then that is surely "the same thing".


Now this argument of yours is a bit of something different:
Quote
You are sampling at only about 110% of the minimum rate, so each period gets about 2.2 samples (shifting). The samples will more often than not fall outside the peaks.
We don't sample peaks! Here is how to see that: do the same with a 40 Hz bass note. Of course only one sample every now and then will hit a peak. For this note it is "evident from a picture" that it can be reconstructed. The beauty is that even with a 20 kHz note, 44.1 kHz sampling will - hitting at peaks and outside peaks - be able to reconstruct it as a sine.

(That is kinda to say, it uses the "prior knowledge" that the original signal was a sine. What if it were a triangle? After a proper ADC->DAC pass, it will still come out as a sine. Why is that error not a problem? Because it is more than an octave up. Again this uses the empirical finding that up there, errors are inaudible. Had the original signal been a 4 kHz triangle? Then the procedure would have caught the sine part, the first error term (represented as a sine), the second error term (represented as a sine) - and nobody would hear the third error term. Again, the imperfections are surely there, but we mitigate them by sampling at such a high frequency that your ear doesn't hear them, and "such a high frequency" is determined by testing humans.)
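A minimal numerical check of the "we don't sample peaks" point (numpy, with an arbitrary phase of my choosing; the truncated sinc sum stands in for an ideal DAC):
Code: [Select]
import numpy as np

fs = 44100.0
f = 20000.0                                  # only ~2.2 samples per period
n = np.arange(-4000, 4000)
x = np.sin(2 * np.pi * f * n / fs + 0.123)   # 0.123: arbitrary phase

# the five samples nearest the true peak are nowhere near +1 ...
print(np.round(x[3998:4003], 3))

# ... yet the band-limited interpolation, evaluated at the exact peak
# time, recovers the full amplitude
t_peak = (np.pi / 2 - 0.123) / (2 * np.pi * f)    # a true peak near n = 0
print(np.sum(x * np.sinc(fs * t_peak - n)))       # -> ~1.0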

Re: Nyquist double sample rate rule

Reply #11
The theorem is a proof, but you don't need it to see that you can't go higher in frequency. In very simple terms, for a uniform sample rate you need at least two samples per cycle to represent a wave at all, which necessarily places the limit at half the sample rate.

Even though that frequency can be represented, the Nyquist frequency itself is generally excluded in good sampling practice, because it's the least stable frequency. A component at exactly the Nyquist limit can only be captured at zero or half phase, unlike any frequency below it, so there would be phase distortion trying to capture it from a natural source. Mathematically, in sinc filtering, it's impossible to keep that highest frequency while at the same time avoiding any aliasing, and it's expensive enough getting accurately close to it as it is. The reasonable balance is to aim for a high bandwidth just below it instead.

Re: Nyquist double sample rate rule

Reply #12
Samples landing at the same point every cycle is what happens at exactly double, which is why the sampling theorem says greater than double.  If you are 0.00001 Hz above double the highest frequency, then if you hit the zero crossing on this cycle, you will hit somewhere else on the next cycle, then somewhere new on the third, and so on.  Hence, no problem.

How is this not a problem? There may be many cycles with inaccurate sample points, including some zero points.

There are never cycles with inaccurate sample points.  If the waveform is zero when you sample, and you record a zero, then you have accurately recorded the value of the waveform at that point in time.  As long as you are at greater than 2x the frequency you're looking at, you'll gradually move over and sample every point on the waveform - hence, no problem.  If you are at exactly 2x, that doesn't happen, and you have no way to know whether the reason you are getting the exact same value at every sample is that the frequency is zero or that it is exactly half your sampling rate.

The following is imprecise, but conveys what I have a hunch you are asking about.
You sample at rate s. That is fixed through everything that follows.

* Suppose first that you have an infinitely long sine wave at a fixed frequency f<s/2.
Yeah, sure, a sample might "miss". But sooner or later - i.e. given enough peaks and troughs, that is, a long enough duration - there will be enough samples for the sampled signal to pin down the original signal. And since by assumption we have infinite length, in the end it is OK. (If it is finite length, you will get an error at the end. Let's say that is a different question, OK?)

Dunno if I quite get it. But real-world signals (i.e. music) recorded digitally are not infinitely long, nor uniform enough to 'know the original signal' after a while.

The construction of your question assumes that the signals are infinitely long.  If they're not infinitely long, then you can't have pure frequencies, and it won't be possible to do things like sample an exactly 20 kHz waveform.  Since the sampling theorem works almost the same for signals of all lengths, people usually think about the infinitely long case so that they can have pure frequencies; but if you want to make the maths harder to think about, you can also consider a finite-length recording.  In that case you'll get the same answer, except that you'll find you have to be very, very slightly more than 2x the highest frequency, whereas in the infinite case you can be infinitely close so long as you're greater.

Re: Nyquist double sample rate rule

Reply #13
Quote
(That is kinda to say, it uses the "prior knowledge" that the original signal was a sine. What if it were a triangle? After a proper ADC->DAC pass, it will still come out as a sine. Why is that error not a problem? Because it is more than an octave up. Again this uses the empirical finding that up there, errors are inaudible. Had the original signal been a 4 kHz triangle? Then the procedure would have caught the sine part, the first error term (represented as a sine), the second error term (represented as a sine) - and nobody would hear the third error term. Again, the imperfections are surely there, but we mitigate them by sampling at such a high frequency that your ear doesn't hear them, and "such a high frequency" is determined by testing humans.)
Not exactly like that.
This error would be a problem if the stuff above (sampling rate/2) isn't filtered away, because aliasing will occur and you'd get new content in the audible range that wasn't there.
However, if everything above (sampling rate/2) is eliminated before sampling, then this works. This is why any good resampler includes a lowpass filter; that goes without question. And any good ADC has to include a filter too, otherwise any loud ultrasonic frequencies in the source signal (which, btw, can sometimes occur naturally - typical examples are jet-engine noise or some metallic scratches) will be mirrored into the audible range, and that never sounds good.
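A small sketch of that mirroring (numpy, with numbers I picked for illustration): sample a 26 kHz tone at 44.1 kHz with no anti-alias filter and it lands at 44100 - 26000 = 18100 Hz, squarely in the audible range.
Code: [Select]
import numpy as np

fs = 44100
t = np.arange(fs) / fs                  # 1 second
x = np.sin(2 * np.pi * 26000 * t)       # ultrasonic input, sampled unfiltered

spec = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(fs, 1 / fs)
print(freqs[np.argmax(spec)])           # -> 18100.0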
a fan of AutoEq + Meier Crossfeed

Re: Nyquist double sample rate rule

Reply #14
This error would be a problem if the stuff above (sampling rate/2) isn't filtered away, because aliasing will occur and you'd get new content in the audible range that wasn't there.
Yeah, I glossed over that ("after proper ADC->DAC" is surely glossing over stuff) - but there is a historical issue that is nowadays a non-issue: to record to digital at a 44.1 kHz sample rate, you would have to use a steep analogue filter starting below 22.05 kHz. In that application, a "yes, the sampling theorem says you can" statement would be theoretically moot, as in practice you could not feed the sampler such signals. You would employ a filter that killed off a 22000 Hz tone before the sampler even started ticking, and therefore you would have nothing on record and nothing to reconstruct - and who the hell cares how long that 22000 Hz beep would have to be to come out right, when it isn't there?
Nowadays we simply don't need that analogue filter to touch the 20-22.05 kHz range.

But that kinda adds to the story: sure you can, if you just go a bit higher.

 

Re: Nyquist double sample rate rule

Reply #15
Say you have 44.1 kHz sampling and a 20 kHz max-frequency signal (not amplitude-uniform). You are sampling at only about 110% of the minimum rate, so each period gets about 2.2 samples (shifting). The samples will more often than not fall outside the peaks.
Frequency-wise it's OK, but the amplitude seems not to be accurate.
Nope. https://www.youtube.com/watch?v=cIQ9IXSUzuM -- from 6:40 till 7:27
Good video indeed; Monty illustrates it very nicely. Btw, I just checked, with the WAV file from my post above, what happens when you use Audition's FFT filter to remove the 12010-Hz aliasing tone (and anything else above 12 kHz, for that matter). Using an FFT length of 16384 with a Blackman window and a cutoff frequency of 11994 Hz, the 10-Hz amplitude modulation in the second half of the audio file disappears, and you can recover the original 11990-Hz sine tone almost perfectly (including its original amplitude envelope; it's just 6 dB too quiet overall because the aliasing image, which carried half the amplitude, was filtered out). So the anti-aliasing filter acts not only as a low-pass filter but also as an interpolation filter, which is in line with what Monty shows in his video and what others hinted at here in this thread. But as noted, the FFT must get pretty long for such a steep filter, which is hard to integrate into audio equipment.
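For the curious, the same rescue works in a few lines of numpy (my sketch of the idea, using an exact FFT brickwall rather than Audition's windowed filter):
Code: [Select]
import numpy as np

fs = 48000
t = np.arange(fs) / fs
x48 = np.zeros(fs)
x48[::2] = np.sin(2 * np.pi * 11990 * t)[::2]   # the zero-stuffed signal again

X = np.fft.rfft(x48)
X[12000:] = 0                     # brickwall low-pass at 12 kHz: kills 12010 Hz
y = np.fft.irfft(X, n=fs)

print(np.max(np.abs(y)))          # ~0.5: a clean 11990 Hz sine, 6 dB down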

Chris
If I don't reply to your reply, it means I agree with you.

Re: Nyquist double sample rate rule

Reply #16
The Nyquist theorem says that the original waveform can be recovered from the samples. It does not say that the row of samples will look like the original waveform.
For a burst of HF, the samples may be close to the zero points, but as the original waveform satisfies the Nyquist criterion, the burst will be present for long enough that the prior and subsequent samples are not close to the zero points - i.e. the length of the burst also satisfies the criterion.
It does not require clever DSP; an analogue switch and a low-pass filter will suffice to demonstrate the process.
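A toy simulation of that switch-plus-filter idea (numpy/scipy, parameters entirely illustrative): model the switch as an impulse train on a dense grid standing in for continuous time, and the filter as a plain FIR low-pass.
Code: [Select]
import numpy as np
from scipy.signal import firwin, lfilter

fs = 8000                                # the "digital" sample rate
up = 32                                  # dense grid = stand-in for analogue time
x = np.sin(2 * np.pi * 3000 * np.arange(800) / fs)   # 3 kHz tone, 0.1 s

pulses = np.zeros(len(x) * up)           # switch output: impulse train,
pulses[::up] = x * up                    # scaled to keep unity passband gain

lp = firwin(4001, 4000, fs=fs * up)      # low-pass at the 4 kHz Nyquist
y = lfilter(lp, 1.0, pulses)

print(np.max(np.abs(y[4000:20000])))     # ~1.0: the 3 kHz sine comes back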

Re: Nyquist double sample rate rule

Reply #17
I tried to generate a near-Nyquist sweep with Audition, foobar and SoX, and all of them generate a waveform like this:
[attached screenshot]

Even when the waveform (at 8 kHz sample rate) is zoomed in to sample level, with the curved lines and dots visible, the peaks are still modulating. Resample the waveform with the steepest resampler I have (SSRC) to 48 kHz and the waveform remains the same, with the modulation clearly audible. The SSRC-resampled file could have some audible artifacts after 10-15 seconds, but before 10 seconds the tone sounds pretty pure to me, apart from the audible modulation.

Before drawing conclusions: could it be that the signal generators of these programs are not all that accurate in the first place?

Re: Nyquist double sample rate rule

Reply #18
sweep: 3998-4000 - and what did you expect?




Re: Nyquist double sample rate rule

Reply #19
OK. Using white noise instead, so there is no need to worry about whether the tone generators or Audition's sinc-interpolated waveform display are accurate. Green is SSRC, white is SoX at 99% with imaging allowed. The command-line SoX tool allows up to 99.7%, and the result could be somewhere between these two. Source sample rate is 8 kHz and destination is 48 kHz.
[attached screenshot]

Re: Nyquist double sample rate rule

Reply #20
@bennetng It is unclear what you are trying to add to a two-year-old thread.

Your first attempt did 3998 to 4000, sampled at 8000. Well, 4000 does not respect f < SR/2.
Next, you tried white noise, and showed perfectly reconstructed noise up to 3990 Hz with SSRC and up to 3970 Hz with SoX, with a setting that specifically allows this softer curve.

Both things show that you can get nearer to SR/2 the harder you try, i.e. the bigger the number of samples that are considered in the filtering stage.

Re: Nyquist double sample rate rule

Reply #21
@bennetng It is unclear what you are trying to add to a two-year-old thread.
Well, because it is still on the first page of this subforum.

Quote
Your first attempt did 3998 to 4000, sampled at 8000. Well, 4000 does not respect f < SR/2.
Yes, but it is a sweep, so the screenshot still contains data below 4000 Hz until the last few pixels.

Quote
Next, you tried white noise, and showed perfectly reconstructed noise up to 3990 Hz with SSRC and up to 3970 Hz with SoX, with a setting that specifically allows this softer curve.
The SSRC plugin does not have any settings, so I had to match the SoX plugin's setting to it to show an apples-to-apples comparison.

I was trying to figure out whether the accuracy issue/limitation is in the waveform display or in the waveform generator. The FFT display in Audition is 65536 points max, so I can't really see things clearly.

Re: Nyquist double sample rate rule

Reply #22
For those who wonder what SoX looks like with imaging disabled: all other settings remain the same, except that the screenshot is not pixel-perfect against the previous post. The curve is not steeper, just shifted to the left.
[attached screenshot]

Re: Nyquist double sample rate rule

Reply #23
Resample the waveform with the steepest resampler I have (SSRC)
You can try SoX's upsample, sinc and gain effects:
Code: [Select]
# generate 20 minutes (1200 s) of mono white noise at 8 kHz, normalized to -10 dB
sox -r8k -c1 -n -b24 noise.8k.wav synth 1200 white norm -10
# zero-stuff by 6x to 48 kHz, low-pass with a steep sinc just below the
# original 4 kHz Nyquist, and make up the 6x (~15.6 dB) zero-stuffing loss
sox noise.8k.wav -r48k noise.48k.wav upsample 6 sinc -3995 -t 1 gain 15.5
[attached screenshot]

Re: Nyquist double sample rate rule

Reply #24
Thanks. I am too lazy to test, but is it steeper than rate -v or -u at 99.7?
rate is more practical, as it supports arbitrary resampling ratios.

[EDIT] 8 kHz resampled to 44.1 kHz with SoX command line rate -aub 99.7
[attached screenshot]