Topic: Upsampling, Oversampling, and DACs, oh my!

Upsampling, Oversampling, and DACs, oh my!

Many years ago I was under the belief that upsampling DACs had the benefit of less stringent filter requirements as well as the ability to mitigate the effects of clock jitter (I don't care whether jitter is even audible, I am just interested in the technical merits/benefits of upsampling/oversampling).

But I'm now under the belief that ALL modern DACs internally oversample the incoming signal anyway to realize these benefits (maybe they did back then too and I just wasn't aware of it; hopefully someone can enlighten me).

A good example is the Benchmark DAC1 series of products, which I believe always oversampled the incoming signal to 110 kHz for best performance. I don't remember exactly which metrics were measured, other than that their DAC performed better when using a 110 kHz internal sample rate.

So I guess my question is, are there any added *TECHNICAL* (not necessarily audible) benefits for software or hardware to upsample a signal today during playback, with respect to filtering requirements and/or less overall jitter impact on the signal?

In other words, let's say I have a device with some fancy new DAC and I tell it to upsample 16-bit/44.1 kHz LPCM to 24-bit/96 kHz. Do I reap any technical rewards within the DAC itself because of the larger word length and/or number of samples, or is it all immaterial because the DAC will oversample the signal anyway as part of its internal process chain?

I tried Google and the search function on this forum and I couldn't find anything that addresses this specific topic (or my patience ran out trying to read longer threads that may answer the above). Most of it was on the audibility of upsampling instead of the technical merits within modern DAC designs. There is the infamous 60 kHz Lavry paper, but it was really addressing why 24/192 makes little sense and in fact hurts more than helps.

Cheers!


Upsampling, Oversampling, and DACs, oh my!

Reply #1
No benefit.

Upsampling, Oversampling, and DACs, oh my!

Reply #2
Short answer: probably not.

Upsampling is a step between the discrete signal and the conversion to a continuous (analog) signal. This is done by the reconstruction filter. This filter might be designed to handle a certain sampling rate as input, and thus certain sample rates might need to be upsampled before being fed into it. Other than this I can see no benefit in adding an intermediary upsampling step, since the reconstruction already is a form of extreme upsampling (to a signal with an infinite number of points).


I don't even know if these exist anymore, but if the DAC lacks a reconstruction filter then upsampling makes sense. The staircase-like signal caused by the zero-order hold causes harmonics (above the Nyquist frequency) which can be pushed to even higher frequencies by using a higher sampling rate. Without discussing the audibility of harmonics above 22 kHz (the Nyquist frequency of a 44.1 kHz file), this could alleviate the fear of these harmonics falling into a range where they affect the sound (an instrument's timbre is also defined by its harmonics).
If a simple filter is used, it is nothing more than a low-pass filter. This filter might be designed to fit the harmonics of the highest input sampling rate the DAC can handle; anything lower than that will have to be upsampled for it to work.
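For illustration, here is a quick numeric sketch in Python (assuming a hypothetical 1 kHz tone) of where those components land, and how a higher rate pushes them away from the audible band:

```python
# Hypothetical illustration: images of a tone appear at n*fs +/- f_tone.
f_tone = 1_000.0  # Hz, assumed test tone

for fs in (44_100.0, 176_400.0):  # original rate vs. 4x upsampled
    lower, upper = fs - f_tone, fs + f_tone
    print(f"fs = {fs/1e3:g} kHz -> first image pair at "
          f"{lower/1e3:g} kHz and {upper/1e3:g} kHz")

# fs = 44.1 kHz  -> first image pair at 43.1 kHz and 45.1 kHz
# fs = 176.4 kHz -> first image pair at 175.4 kHz and 177.4 kHz
```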


Edit: quote from wikipedia:
Quote
Oversampling DACs or interpolating DACs, such as the delta-sigma DAC, use a pulse density conversion technique. The oversampling technique allows for the use of a lower resolution DAC internally. A simple 1-bit DAC is often chosen because the oversampled result is inherently linear. The DAC is driven with a pulse-density modulated signal, created with the use of a low-pass filter, step nonlinearity (the actual 1-bit DAC), and negative feedback loop, in a technique called delta-sigma modulation. This results in an effective high-pass filter acting on the quantization noise, thus steering this noise out of the low frequencies of interest into the megahertz frequencies of little interest, which is called noise shaping. The quantization noise at these high frequencies is removed or greatly attenuated by use of an analog low-pass filter at the output (sometimes a simple RC low-pass circuit is sufficient). Most very high resolution DACs (greater than 16 bits) are of this type due to its high linearity and low cost. Higher oversampling rates can relax the specifications of the output low-pass filter and enable further suppression of quantization noise. Speeds of greater than 100 thousand samples per second (for example, 192 kHz) and resolutions of 24 bits are attainable with delta-sigma DACs. A short comparison with pulse-width modulation shows that a 1-bit DAC with a simple first-order integrator would have to run at 3 THz (which is physically unrealizable) to achieve 24 meaningful bits of resolution, requiring a higher-order low-pass filter in the noise-shaping loop. A single integrator is a low-pass filter with a frequency response inversely proportional to frequency, and using one such integrator in the noise-shaping loop is a first order delta-sigma modulator. Multiple higher order topologies (such as MASH) are used to achieve higher degrees of noise-shaping with a stable topology.
http://en.wikipedia.org/wiki/Digital-to-an...erter#DAC_types
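To make the quoted description a bit more concrete, here is a minimal first-order delta-sigma modulator sketch in Python (a toy model of the noise-shaping loop, not any particular chip's implementation):

```python
import numpy as np

def delta_sigma_1bit(x):
    """Toy first-order delta-sigma modulator: input in [-1, 1],
    output a +/-1 bitstream. The integrator accumulates the error
    between input and fed-back output, pushing quantization noise
    up toward high frequencies (noise shaping)."""
    y = np.empty(len(x))
    acc, fb = 0.0, 0.0
    for n, s in enumerate(x):
        acc += s - fb                       # integrate input minus feedback
        y[n] = 1.0 if acc >= 0.0 else -1.0  # the 1-bit DAC (step nonlinearity)
        fb = y[n]                           # negative feedback loop
    return y

# 64x-oversampled 1 kHz tone; a low-pass on the bitstream recovers the
# tone because the shaped noise sits far above the audio band.
fs = 44_100 * 64
t = np.arange(fs // 100) / fs
bits = delta_sigma_1bit(0.5 * np.sin(2 * np.pi * 1_000 * t))
```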

Upsampling, Oversampling, and DACs, oh my!

Reply #3
Upsampling is a step between the discrete signal and the conversion to a continuous (analog) signal. This is done by the reconstruction filter. This filter might be designed to handle a certain sampling rate as input, and thus certain sample rates might need to be upsampled before being fed into it. Other than this I can see no benefit in adding an intermediary upsampling step, since the reconstruction already is a form of extreme upsampling (to a signal with an infinite number of points).


So this is what I was alluding to above. In this case, you are saying that no matter the incoming sampling rate of the signal, the reconstruction filter works optimally at a set frequency and thus will clock the signal to whatever it needs (if I feed it 44.1, it will oversample to the reconstruction filter frequency automatically).

Is my understanding of your reply above correct?

Quote
I don't even know if these exist anymore, but if the DAC lacks a reconstruction filter then upsampling makes sense. The staircase-like signal caused by the zero-order hold causes harmonics (above the Nyquist frequency) which can be pushed to even higher frequencies by using a higher sampling rate. Without discussing the audibility of harmonics above 22 kHz (the Nyquist frequency of a 44.1 kHz file), this could alleviate the fear of these harmonics falling into a range where they affect the sound (an instrument's timbre is also defined by its harmonics).
If a simple filter is used, it is nothing more than a low-pass filter. This filter might be designed to fit the harmonics of the highest input sampling rate the DAC can handle; anything lower than that will have to be upsampled for it to work.


Wasn't this the original reason to upsample in the first place? To make anti-aliasing and low-pass filters easier to implement?

BTW, isn't the draw of DSD that oversampling allows for a simple filter to be used in reconstruction, at the sacrifice of quantization noise?

Upsampling, Oversampling, and DACs, oh my!

Reply #4
It's the simpler/cheaper solution, because the needed internal resolution is lower.
As far as I understand, it's a way to enable less stringent specifications for the low-pass filter to be sufficient. From the Wiki quote: "Higher oversampling rates can relax the specifications of the output low-pass filter and enable further suppression of quantization noise". So it actually reduces the quantization noise.

The low-pass filter needed can be less than a 'brick wall' because the unwanted noise now lies further from the frequencies you want to preserve.

Upsampling, Oversampling, and DACs, oh my!

Reply #5
Upsampling is a step between the discrete signal and the conversion to a continuous (analog) signal. This is done by the reconstruction filter. This filter might be designed to handle a certain sampling rate as input, and thus certain sample rates might need to be upsampled before being fed into it. Other than this I can see no benefit in adding an intermediary upsampling step, since the reconstruction already is a form of extreme upsampling (to a signal with an infinite number of points).


So this is what I was alluding to above. In this case, you are saying that no matter the incoming sampling rate of the signal, the reconstruction filter works optimally at a set frequency and thus will clock the signal to whatever it needs (if I feed it 44.1, it will oversample to the reconstruction filter frequency automatically).

Is my understanding of your reply above correct?



Usually the device has some internal frequency it's designed to run at (typically something around 10 MHz) and it just multiplies up to get there. So if you give it 44.1k, it oversamples 256x; if you give it 96k, it oversamples 128x, and so on.
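Spelled out (the ~10 MHz figure above is a ballpark assumption, not a particular chip's spec):

```python
# Sketch of the ratio arithmetic: internal rate = input rate x ratio.
for fs_in, ratio in ((44_100, 256), (96_000, 128)):
    print(f"{fs_in/1e3:g} kHz x {ratio} = {fs_in * ratio / 1e6:.3f} MHz")

# 44.1 kHz x 256 = 11.290 MHz
# 96 kHz   x 128 = 12.288 MHz
```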

Upsampling, Oversampling, and DACs, oh my!

Reply #6
Upsampling is a step between the discrete signal and the conversion to a continuous (analog) signal. This is done by the reconstruction filter. [...] Other than this I can see no benefit in adding an intermediary upsampling step, since the reconstruction already is a form of extreme upsampling (to a signal with and infinite amount of points).

I always suspected this.
So I imagine that if the reconstruction filter is "not optimal" for one reason or another, there would be a benefit to upsampling first from my player (using SoX, for instance)? Does hardware generally handle upsampling as well as a software solution like SoX?
Also, what bothers me in SoX is that there's a bandwidth setting (why not "full pass"?). I imagine the hardware upsampling has its own bandwidth limit too?

Upsampling, Oversampling, and DACs, oh my!

Reply #7
The asynchronous upsampling attenuates jitter, and their research showed that ICs (at least the ones they use, but generally most ICs perform worse at 192 kHz) have their peak performance around 96 kHz. The 110 kHz figure results from their filter design keeping the passband free of aliasing, plus this reasoning:

Quote
For example, a 44.1 kHz D/A converter usually has a brick-wall filter with a transition band that begins at 20 kHz. An upsampling ratio of 22.05/20= 1.1025 will move this brick-wall filter upward to 22.05 kHz. 44.1 kHz x 1.1025 is 48.51 kHz. In other words 44.1 kHz should be upsampled to at least 48.51 kHz. If an upsampling ratio of 1.1025 is sufficient, 48 kHz should be upsampled to at least 52.8 kHz, and 96 kHz should be upsampled to at least 105.6 kHz.
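Checking that arithmetic (note that the quoted 48.51 / 52.8 / 105.6 figures correspond to a rounded ratio of 1.1; the exact ratio of 1.1025 gives slightly higher minimums):

```python
# Minimum upsampled rate = input rate x (22.05 / 20) = input rate x 1.1025
ratio = 22.05 / 20
for fs in (44.1, 48.0, 96.0):  # kHz
    print(f"{fs:g} kHz -> at least {fs * ratio:.2f} kHz")

# 44.1 kHz -> at least 48.62 kHz
# 48 kHz   -> at least 52.92 kHz
# 96 kHz   -> at least 105.84 kHz
# Either way, 96 kHz material needs roughly 106 kHz or more internally,
# which is consistent with Benchmark's choice of 110 kHz.
```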
"I hear it when I see it."

Upsampling, Oversampling, and DACs, oh my!

Reply #8
Isn't a brick-wall the theoretical ideal, and not something you can actually achieve easily? As far as I know all filters have a (big or small) transition region. The 'better' the filter, the steeper it is and the more it will look like a brick wall.



If you push the noise higher by oversampling, there's more chance it'll lie beyond this transition region, even with lower-order (cheaper) filters.

Upsampling, Oversampling, and DACs, oh my!

Reply #9
The asynchronous upsampling attenuates jitter, and their research showed that ICs (at least the ones they use, but generally most ICs perform worse at 192 kHz) have their peak performance around 96 kHz. The 110 kHz figure results from their filter design keeping the passband free of aliasing, plus this reasoning:


That's an unusual design, and probably specific to some chip they're using. It's pretty rare for a modern design to run at less than MHz rates.

Isn't a brick-wall the theoretical ideal, and not something you can actually achieve easily?


Oversampling makes this basically irrelevant. You use DSP to filter the oversampled signal, then just a simple analog low-pass with a cutoff at a few hundred kHz for reconstruction. So from a black-box perspective, you can think of a modern DAC as having an essentially perfect brick-wall filter at probably 95% of the Nyquist rate.
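As a sketch of how cheaply that near-brick-wall response comes in the digital domain (SciPy; the tap count and rate are illustrative):

```python
import numpy as np
from scipy import signal

fs = 44_100
# Linear-phase windowed-sinc low-pass with its cutoff at 95% of Nyquist;
# a few hundred multiply-adds per sample buys a response an analog filter
# could only approach with piles of precision components.
taps = signal.firwin(511, 0.95 * fs / 2, fs=fs)

w, h = signal.freqz(taps, fs=fs)
attenuation_db = 20 * np.log10(np.maximum(np.abs(h), 1e-12))
```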

Upsampling, Oversampling, and DACs, oh my!

Reply #10
Yes, but isn't that exactly the technical benefit of oversampling I was trying to get at? Without it, the not-so-brick-wall filters would be... well, not effectively brick-wall.



@extrabigmehdi
The SoX bandwidth option changes where the filter is placed. Just leave it at 95% (that's 95% of the Nyquist frequency of the input rate). It prevents aliasing.

Upsampling, Oversampling, and DACs, oh my!

Reply #11
Upsampling is a step between the discrete signal and the conversion to a continuous (analog) signal. This is done by the reconstruction filter. [...] Other than this I can see no benefit in adding an intermediary upsampling step, since the reconstruction already is a form of extreme upsampling (to a signal with an infinite number of points).

I always suspected this.
So I imagine that if the reconstruction filter is "not optimal" for one reason or another, there would be a benefit to upsampling first from my player (using SoX, for instance)? Does hardware generally handle upsampling as well as a software solution like SoX?


The upsampling in this case is just zero-stuffing + a low-pass filter. So 128x means take one sample of the input signal, then 127 samples that are just zero, then one more sample of input, and so on. Then a DSP filter low-passes at just below Nyquist and passes it through to the DAC itself.

If there's something wrong with the DAC's filter, doing it yourself might help a little, but generally these things are so simple there is little room for error.
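A minimal sketch of that procedure in Python (the filter length and cutoff placement are illustrative choices):

```python
import numpy as np
from scipy import signal

def upsample_zero_stuff(x, factor, numtaps=255):
    """Upsample as described above: insert factor-1 zeros between input
    samples, then low-pass just below the ORIGINAL Nyquist frequency."""
    stuffed = np.zeros(len(x) * factor)
    stuffed[::factor] = x
    # Cutoff relative to the new Nyquist: the old Nyquist sits at 1/factor.
    lp = signal.firwin(numtaps, 0.95 / factor)
    # The gain of `factor` makes up for the energy removed with the images.
    return factor * signal.lfilter(lp, [1.0], stuffed)
```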

Upsampling, Oversampling, and DACs, oh my!

Reply #12
Upsampling is a step between the discrete signal and the conversion to a continuous (analog) signal. This is done by the reconstruction filter. [...] Other than this I can see no benefit in adding an intermediary upsampling step, since the reconstruction already is a form of extreme upsampling (to a signal with an infinite number of points).

I always suspected this.
So I imagine that if the reconstruction filter is "not optimal" for one reason or another, there would be a benefit to upsampling first from my player (using SoX, for instance)? Does hardware generally handle upsampling as well as a software solution like SoX?


The upsampling in this case is just zero-stuffing + a low-pass filter. So 128x means take one sample of the input signal, then 127 samples that are just zero, then one more sample of input, and so on. Then a DSP filter low-passes at just below Nyquist and passes it through to the DAC itself.

If there's something wrong with the DAC's filter, doing it yourself might help a little, but generally these things are so simple there is little room for error.


When you are talking about upsampling here, you are referring to the oversampling done by the recon filter, not during playback, correct?

Btw, for the record, I am also under the belief that there is a lot of value in oversampling during ADC to make aliasing less of an issue during reconstruction via DAC (I have more samples in the recorded signal, which means fewer aliasing products, correct?).

Thanks a lot guys for the feedback!

Upsampling, Oversampling, and DACs, oh my!

Reply #13
When you are talking about upsampling here, you are referring to the oversampling done by the recon filter, not during playback, correct?


Correct.

Btw, for the record, I am also under the belief that there is a lot of value in oversampling during ADC to make aliasing less of an issue during reconstruction via DAC (I have more samples in the recorded signal, which means fewer aliasing products, correct?).


I don't think it really matters what you believe here. Virtually all devices are oversampling and have been for a very long time. Regardless of what you like, you'll probably end up with oversampling.

Upsampling, Oversampling, and DACs, oh my!

Reply #14
When you are talking about upsampling here, you are referring to the oversampling done by the recon filter, not during playback, correct?


Correct.

Btw, for the record, I am also under the belief that there is a lot of value in oversampling during ADC to make aliasing less of an issue during reconstruction via DAC (I have more samples in the recorded signal, which means fewer aliasing products, correct?).


I don't think it really matters what you believe here. Virtually all devices are oversampling and have been for a very long time. Regardless of what you like, you'll probably end up with oversampling.


You are talking about the sample-and-hold stage of the ADC, I believe. That clock rate is much higher than the sample rate of the final file I use in the rest of the chain, correct?

Upsampling, Oversampling, and DACs, oh my!

Reply #15
You are talking about the sample-and-hold stage of the ADC, I believe.

Sample-and-hold is something that is used with successive approximation (SAR) ADCs. Sigma-delta ADCs don't need or use it.


Upsampling, Oversampling, and DACs, oh my!

Reply #17
I don't even know if these exist anymore, but if the DAC lacks a reconstruction filter then upsampling makes sense.



They do, predictably, in the 'audiophile' market.  They're called Non Oversampling (NOS) DACs.  Google it if you need a dose of nonsense today.

Upsampling, Oversampling, and DACs, oh my!

Reply #18
Thanks for that. Indeed a good dose of nonsense. Let's go prehistoric and go back to not oversampling, to make filtering out unwanted imaging extra difficult. Because using an analog filter and performing fewer operations must be better... right? Nope. It just sets you back 15 years of development.
See Dan Lavry's insightful response here.

Upsampling, Oversampling, and DACs, oh my!

Reply #19
The upsampling in this case is just zero-stuffing + a low-pass filter. So 128x means take one sample of the input signal, then 127 samples that are just zero, then one more sample of input, and so on. Then a DSP filter low-passes at just below Nyquist and passes it through to the DAC itself.


I've been thinking that it would be more logical to repeat values rather than zero-stuffing.
Because I tried to imagine the equivalent process for enlarging pics:
- you resize the image using what Photoshop calls the "nearest neighbor" method (basically, pixels are repeated)
- you apply some blur to remove the pixelated aspect (equivalent to a low-pass)

Also, I wondered why SoX or others would not use an interpolation method like Lanczos, just as is done for pics.

Upsampling, Oversampling, and DACs, oh my!

Reply #20
The upsampling in this case is just zero-stuffing + a low-pass filter. So 128x means take one sample of the input signal, then 127 samples that are just zero, then one more sample of input, and so on. Then a DSP filter low-passes at just below Nyquist and passes it through to the DAC itself.


I've been thinking that it would be more logical to repeat values rather than zero-stuffing.


You can, but zeros work better. 
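To see why, note that repeating values is exactly zero-stuffing followed by a boxcar (zero-order hold) filter, i.e. a low-pass you did not get to choose, with sinc droop baked in. A quick sketch of that identity (assuming a 4x factor):

```python
import numpy as np

factor = 4
x = np.random.randn(1024)

stuffed = np.zeros(len(x) * factor)
stuffed[::factor] = x
repeated = np.repeat(x, factor)  # the "nearest neighbor" of audio

# Repeating == zero-stuffing convolved with a boxcar of `factor` ones.
box = np.ones(factor)
assert np.allclose(np.convolve(stuffed, box)[:len(repeated)], repeated)
```

Starting from zeros leaves you free to apply a good sinc-like low-pass, instead of being stuck with the boxcar's frequency response.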

Also, I wondered why SoX or others would not use an interpolation method like Lanczos, just as is done for pics.


The method I described is similar to Lanczos (and actually identical for some choices of low-pass filter).
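For instance, the Lanczos kernel is itself just a windowed sinc, i.e. one particular choice of that low-pass (a sketch, using NumPy's normalized-sinc convention):

```python
import numpy as np

def lanczos_kernel(t, a=3):
    """Lanczos-a kernel: sinc(t) windowed by sinc(t/a), zero outside |t| < a.
    Resampling with it amounts to zero-stuffing plus this particular low-pass."""
    t = np.asarray(t, dtype=float)
    k = np.sinc(t) * np.sinc(t / a)
    return np.where(np.abs(t) < a, k, 0.0)
```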

Upsampling, Oversampling, and DACs, oh my!

Reply #21
So I guess my question is, are there any added *TECHNICAL* (not necessarily audible) benefits for software or hardware to upsample a signal today during playback, with respect to filtering requirements and/or less overall jitter impact on the signal?
It's possible to use a technically better upsampling filter in software than you'll find in the DAC itself, so of course the answer is yes.

It is not that difficult to get so close to a perfect brick wall filter that the result is indistinguishable from a perfect brick wall filter, if that is what you want to do. No DAC is going to bother to do this.

It is pointless, since it'll sound the same as something that isn't as close to a brick wall filter. Also, where the differences between filters are audible (generally a lot lower than 20 kHz), something gentler than a brick wall filter is to be preferred (less ringing). This can be, and has been, verified in the real world using lower sample rates where the differences are easily audible.

With brick wall filters especially, the phase is important...
http://www.hydrogenaudio.org/forums/index....showtopic=68524
...though there are only two positive ABX results for it being audible at 20 kHz (i.e. when upsampling CD quality audio) in that thread. It's easily audible when the transition band is within the normal range of human hearing (i.e. well below 20 kHz) but that's never true when oversampling decent quality audio.

Cheers,
David.

Upsampling, Oversampling, and DACs, oh my!

Reply #22
So I guess my question is, are there any added *TECHNICAL* (not necessarily audible) benefits for software or hardware to upsample a signal today during playback, with respect to filtering requirements and/or less overall jitter impact on the signal?
It's possible to use a technically better upsampling filter in software than you'll find in the DAC itself, so of course the answer is yes.


Why is that? Simply computing resources?

Quote
It is not that difficult to get so close to a perfect brick wall filter that the result is indistinguishable from a perfect brick wall filter, if that is what you want to do. No DAC is going to bother to do this.


I assume cost (i.e. number of parts)?

Quote
It is pointless, since it'll sound the same as something that isn't as close to a brick wall filter. Also, where the differences between filters are audible (generally a lot lower than 20 kHz), something gentler than a brick wall filter is to be preferred (less ringing). This can be, and has been, verified in the real world using lower sample rates where the differences are easily audible.

With brick wall filters especially, the phase is important...
http://www.hydrogenaudio.org/forums/index....showtopic=68524
...though there are only two positive ABX results for it being audible at 20 kHz (i.e. when upsampling CD quality audio) in that thread. It's easily audible when the transition band is within the normal range of human hearing (i.e. well below 20 kHz) but that's never true when oversampling decent quality audio.


Alright, but you are making the claim that oversampling is good no matter what, whether it be in the recon filter (which is what every competently designed DAC does) or in software (let's say you are feeding a NOS DAC). That's how I read your comments.


Upsampling, Oversampling, and DACs, oh my!

Reply #23
I've been thinking that it would be more logical to repeat values rather than zero-stuffing.
You can, but zeros work better.
To elaborate, having a single point for each sample and zeros for the rest approximates a Dirac pulse, which is desirable owing to intrinsic limitations of zero-order hold (ZOH):
Quote
The fact that practical digital-to-analog converters (DAC) do not output a sequence of Dirac impulses, x_s(t) (that, if ideally low-pass filtered, would result in the unique underlying bandlimited signal before sampling), but instead output a sequence of rectangular pulses, x_ZOH(t) (a piecewise constant function), means that there is an inherent effect of the ZOH on the effective frequency response of the DAC, resulting in a mild roll-off of gain at the higher frequencies (a 3.9224 dB loss at the Nyquist frequency, corresponding to a gain of sinc(1/2) = 2/π). This droop is a consequence of the hold property of a conventional DAC, and is not due to the sample and hold that might precede a conventional analog-to-digital converter (ADC).
Sane discussions such as the post by Dan Lavry take this inherent roll-off of NOS DACs into consideration. Is part of the motivation for NOS DACs some idea about them having beneficial effects on frequency response? The (ahem) technique creates a roll-off that, at and therefore before the Nyquist frequency, is deeper than that of a first-order filter but, because it follows the sinc function instead of being linear, does not roll off undesirable images nearly as much as even a first-order filter would. Either people prefer the attenuated treble for some reason, perhaps due to zomg harsh digitality or something, or they think they're preserving a flat frequency response, when in fact they're hammering down the audible range a bit more than a first-order LPF would while preserving a lot more undesirable content in the inaudible range.
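The quoted droop figure is easy to verify (np.sinc is the normalized sinc, sin(pi*x)/(pi*x)):

```python
import numpy as np

# ZOH gain at the Nyquist frequency: |H| = sinc(1/2) = 2/pi
droop_db = 20 * np.log10(np.sinc(0.5))
print(f"{droop_db:.4f} dB")   # -3.9224 dB

# For comparison, a first-order low-pass sits at -3.01 dB (1/sqrt(2))
# at its corner frequency, but keeps attenuating above it, whereas the
# ZOH sinc response comes back up in lobes between its nulls.
```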

Upsampling, Oversampling, and DACs, oh my!

Reply #24
Isn't a brick-wall the theoretical ideal, and not something you can actually achieve easily?


Brick wall filters can be approximated pretty well in the analog domain using ladder filters and elliptic filters, for example. The problem is they need a lot of component parts that must be stable and precise.

Brick wall filters can be approximated far more easily, cheaply, and precisely in the digital domain. Current practice is that an engineer can fire up MATLAB or other common software, specify the desired filter characteristic over a broad and highly useful range, and obtain filter parameters that can be easily and economically implemented in the digital domain.

This is one of the two or three major benefits of oversampling.
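A sketch of that workflow using SciPy's Parks-McClellan routine (the same algorithm behind MATLAB's firpm; the band edges and tap count here are illustrative, not a recommendation):

```python
from scipy import signal

fs = 44_100
taps = signal.remez(
    401,                          # filter length (illustrative)
    [0, 20_000, 21_000, fs / 2],  # pass to 20 kHz, stop from 21 kHz
    [1, 0],                       # desired gain per band
    fs=fs,
)
# `taps` drops straight into a DSP chain or an oversampling DAC's
# interpolation stage; the analog equivalent would be a major project.
```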