
Help me put this guy back in his right place

Reply #75
Quote
As far back as the dawn of CD, there were companies recording at > 16 bits.

If that's true, I think it's quite probable that it was just because it was the only way they had to get closer to true 16-bit performance.

Quote
Even today in the era of 24 bit DACs, the true resolution of many DACs is not 24 bits.


Well, AFAIK no DAC has true 24-bit performance. Real-world physics constraints make it impossible.

Quote
Thinking about it, the studio was recording at 20/48 and that was at the dawn of CD. The whole idea of recording with more bits is surely old hat.


Are you sure of that? The first CDs I knew of that were recorded using 20 bits appeared many years after the launch of the CD format in 1982.


Reply #76
Quote
As far back as the dawn of CD, there were companies recording at > 16 bits. Even today, in the era of 24-bit DACs, the true resolution of many DACs is not 24 bits. Why should it have been any different 20 years ago? Thinking about it, the studio was recording at 20/48 and that was at the dawn of CD. The whole idea of recording with more bits is surely old hat.

For a DAC to count as x-bits, you have to be able to resolve 2^x different output levels. Noise may swamp the output, but you can average out the noise to reveal whether there's really anything there, or not.

For a DAC to be a good x-bits DAC, then each of those 2^x output levels needs to be in the numerically correct order, and each needs to be equally spaced from its neighbour.
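The averaging idea above is easy to demonstrate numerically. A minimal sketch (hypothetical numbers, standard library only): a level ten times smaller than the noise standard deviation is invisible in a single reading, but emerges clearly once many readings are averaged.

```python
import random

def measure_level(true_level, noise_std, n_avg, seed=0):
    """Average n_avg noisy readings of a fixed output level."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_avg):
        total += true_level + rng.gauss(0.0, noise_std)
    return total / n_avg

# A level 10x below the noise floor: one reading tells us nothing...
one_shot = measure_level(0.001, 0.01, 1)
# ...but averaging 100,000 readings reveals it clearly.
averaged = measure_level(0.001, 0.01, 100_000)
```

The error of the averaged estimate shrinks as 1/sqrt(n_avg), which is why long averaging can reveal levels far below the noise.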

The best current DACs are linear down to the 27th bit level.

"Good" 16-bit DACs from the dawn of the CD era couldn't even reproduce the 2^16 levels in the correct order, let alone make them equally spaced!


There may or may not have been "20-bit" devices in studios at this time - can you provide a reference?

Cheers,
David.


Reply #77
Quote
The best current DACs are linear down to the 27th bit level.

However, that doesn't mean they deliver true 24-bit performance. Thanks to dither, narrowband measurements can give figures like those, but those narrowband measurements are not representative of actual dynamic range performance. Using those same narrowband linearity measurement procedures, you can measure linearity down to the 18th or even 20th bit level on regular 16-bit DACs.

For me, true 24-bit performance means that the DAC can achieve a wideband dynamic range of around 6.02 * 24 ≈ 144 dB. No existing DAC that I know of can achieve that, simply due to the thermal noise floor of the electronics. An ideal, true 24-bit DAC would.
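The ~6.02 dB-per-bit rule behind that figure is just 20·log10(2) per bit; a one-line check (illustrative only):

```python
import math

def ideal_dynamic_range_db(bits):
    """Ideal quantizer dynamic range: 20*log10(2**bits), i.e. ~6.02 dB per bit."""
    return 20 * math.log10(2 ** bits)

# ideal_dynamic_range_db(16) ≈ 96.3 dB
# ideal_dynamic_range_db(24) ≈ 144.5 dB
```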


Reply #78
Quote
For a DAC to count as x-bits, you have to be able to resolve 2^x different output levels. Noise may swamp the output, but you can average out the noise to reveal whether there's really anything there, or not.

For a DAC to be a good x-bits DAC, then each of those 2^x output levels needs to be in the numerically correct order, and each needs to be equally spaced from its neighbour.

Sorry if this has been answered in this thread before (it's a very long thread and I haven't read everything), but isn't this the reason for 1-bit-stream technology? You don't have to worry about how accurate each bit is relative to your reference, and it's easier to speed up the sampling rate than to increase the number of DAC bits.


Reply #79
Quote
Quote
The best current DACs are linear down to the 27th bit level.

However, that doesn't mean they deliver true 24-bit performance. Thanks to dither, narrowband measurements can give figures like those, but those narrowband measurements are not representative of actual dynamic range performance. Using those same narrowband linearity measurement procedures, you can measure linearity down to the 18th or even 20th bit level on regular 16-bit DACs.

For me, true 24-bit performance means that the DAC can achieve a wideband dynamic range of around 6.02 * 24 ≈ 144 dB. No existing DAC that I know of can achieve that, simply due to the thermal noise floor of the electronics. An ideal, true 24-bit DAC would.

But if you will only accept that a DAC has true 24-bit performance if it can achieve both 144 dB SNR and linearity beyond 24 bits, then you'll be waiting a long time!


Think of it this way... Consider a 4-bit DAC. Actually, compare 2 of them.

The first has a dynamic range of 90 dB (so it can reproduce digital silence as, basically, silence), but it isn't linear at all: all the numbers are in the wrong order. So it will do a perfect job of reproducing digital silence, but a useless job of reproducing anything else.

The second has a dynamic range of less than 24 dB, so basically the last bit is always lost in noise (or full of noise, if we consider an ADC instead). However, the DAC is linear down to the equivalent of the 16th bit. So each level, when the noise is averaged away, is the correct distance from each other level, at least to 1 part in 65535. This DAC can't reproduce digital silence, and adds noise to everything else. But beyond that noise, the reproduction is perfect.

The first DAC would sound like a bad guitar effects box. The second would sound like a very hissy radio.


So I think that (a) because we can't possibly make a DAC with 144 dB dynamic range, (b) because it couldn't possibly bring any benefit, and (c) because we know that making a DAC linear beyond the number of bits it's designed for will improve the perceived sound quality (see the extreme example above!)... for these three reasons, I think that the linearity figure is much more important than the SNR figure.

It's better to improve both figures, but linearity is more important than SNR - at least at the levels we're talking about with modern 16 and 24-bit DACs.


Another thought: if you have a correctly dithered 16-bit recording, you can find signals within it which correspond to the 18th or 19th "bit". But these signals will be ruined by a 16-bit DAC which isn't linear down to the 18th or 19th bit level.
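This sub-LSB behaviour can be demonstrated directly. A sketch (all parameters hypothetical): quantize a sine at the "18th bit" level with TPDF dither in a 16-bit quantizer, then recover its amplitude by correlating against the known sine (a lock-in detector, standing in for FFT averaging):

```python
import math
import random

LSB = 2.0 / 2**16   # 16-bit quantizer step over a +-1.0 range
AMP = 2.0 / 2**18   # test tone at the "18th bit" level (below one LSB)
FREQ = 0.01         # cycles per sample (integer number of cycles in N)
N = 200_000         # samples to average over

def dither_quantize(x, lsb, rng):
    """Quantize with TPDF dither (sum of two uniforms, +-1 LSB total span)."""
    d = ((rng.random() - 0.5) + (rng.random() - 0.5)) * lsb
    return round((x + d) / lsb) * lsb

def recovered_amplitude(seed=1):
    rng = random.Random(seed)
    acc = 0.0
    for i in range(N):
        s = math.sin(2 * math.pi * FREQ * i)
        acc += dither_quantize(AMP * s, LSB, rng) * s
    return 2 * acc / N   # lock-in estimate of the tone amplitude
```

Despite the tone being a quarter of an LSB in amplitude, the recovered value comes out close to AMP, because dither makes the quantization error average away.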


Final thought: it's unlikely that 24-bit sounds better than 16-bit because of increased dynamic range. So, if we're to believe people who think that it does (and I know you don't believe them), we should look for another explanation. I think increased linearity in a 24-bit system is probably the reason. Even though, in theory, you can make just as linear a 16-bit system.

Cheers,
David.


Reply #80
Quote
Sorry if this has been answered in this thread before (it's a very long thread and I haven't read everything), but isn't this the reason for 1-bit-stream technology? You don't have to worry about how accurate each bit is relative to your reference, and it's easier to speed up the sampling rate than to increase the number of DAC bits.

Yes, that's the idea. But nothing is for free - if you have chance to read through the thread, you'll discover all the problems with 1-bit technology, and why state of the art converters are typically 3-6 bits, oversampled.

Cheers,
David.


Reply #81
Quote
Are you sure of that? The first CDs I knew of that were recorded using 20 bits appeared many years after the launch of the CD format in 1982.


Quote
There may or may not have been "20-bit" devices in studios at this time - can you provide a reference?



I can provide the reference; however, it is only on paper, so I will have to grovel to get to the information, but I will get to it soon enough. I think it is documented by a reliable source, a loudspeaker manufacturer, but remember these were the days before the internet.

KikeG, you are right: Tom Jung did use 20-bit recording to try and achieve 16-bit performance. His first commercial 20-bit release was in 1991, so I am assuming that 20-bit recording equipment was commercially available from that period. However, I will get to the relevant paper info soon. Do these dates ring any bells?

Guys, help me here: you are discussing linearity again without looking at those linearity graphs earlier in the thread that require interpretation. Thanks in advance.


Reply #82
Quote
Guys, help me here: you are discussing linearity again without looking at those linearity graphs earlier in the thread that require interpretation. Thanks in advance.

That's because I haven't found an explanation of how the Audio Precision linearity test is carried out. We had an AP at uni, with all the manuals, but I don't have access to that now. I think the manuals can be downloaded from their website, but you can do that as easily as me.


I suspect they're using a decreasing-amplitude sine wave and looking at the amplitude of the output. I believe the plots you linked to show input level on the x axis, and output minus input level on the y axis. If the output drops below the level of the input, then the system is undithered, and the input has dropped below the LSB and been lost. If it rises above the level of the input (as shown at the left-hand side of the plots you linked to), then either it's becoming swamped by noise (if the noise isn't averaged out; that's what I want to check), or there are extra harmonic distortion components present. As I haven't tried this kind of test (just looking at the amplitude of the output), I can't say what the two results you linked to actually mean. I usually examine the noise and harmonic distortion directly.


When I talk about linearity, I mean "does the ADC or DAC have a linear transfer function?". A 17-bit system has twice as many levels as a 16-bit system. An 18-bit system has twice as many levels as a 17-bit system. And so on... The extra levels in a 17-bit system will lie exactly half way between the levels of a 16-bit system. An 18-bit system will have 3 extra levels between each level of a 16-bit system.

A diagram would help. I've taken the region between two adjacent levels in a 16-bit system, and shown where the levels in a 17-bit, 18-bit, 19-bit and 20-bit system would fall:

Code: [Select]
16 17 18 19 20
__ __ __ __ __
            __
         __ __
            __
      __ __ __
            __
         __ __
            __
   __ __ __ __
            __
         __ __
            __
      __ __ __
            __
         __ __
            __
__ __ __ __ __


These "levels" are the analogue voltage levels above which the digital output will be the next highest value.

The above diagram assumes we have a perfect system. Let's assume the 16-bit system isn't perfect, and the levels don't quite fall where they should. Imagine that the top level shown is pushed about 1/3rd of a bit too low, i.e. the top left-hand level in that diagram is moved about 1/3rd of the way down. It would still fall within the same level on a 17-bit system, but it has actually jumped into a different (wrong!) level on an 18-bit system. So it's linear to 17 bits, but no further.


Conceptually, this explains linearity. In reality, most DACs aren't made using 2^16 or 2^20 discrete levels (that would be pure multi-bit technology with no noise shaping), and are swamped by noise anyway. However, in a typical oversampled DAC, it's still possible to average away the noise, measure the distortion, equate this to a bit-level, and conclude that it is linear to so many bits.
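The half-way spacing described above can be checked numerically. A toy sketch using 4 vs 5 bits so the lists stay small (the arithmetic is identical at 16 vs 17 bits):

```python
def ideal_levels(bits, span=1.0):
    """Evenly spaced decision levels of an ideal n-bit converter."""
    n = 2 ** bits
    return [i * span / n for i in range(n + 1)]

lo = ideal_levels(4)   # stands in for the 16-bit system
hi = ideal_levels(5)   # stands in for the 17-bit system
# Every coarse level coincides with every second fine level,
# and each extra fine level sits exactly half way between two coarse ones.
```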

Cheers,
David.


Reply #83
Quote
But if you will only accept that a DAC has true 24-bit performance if it can achieve both 144 dB SNR and linearity beyond 24 bits, then you'll be waiting a long time!

I know, and I understand your explanations. But a 16-bit DAC with true 16-bit performance is supposed to have nearly 96 dB (94 dB with dither) of dynamic range, and linearity somewhat beyond 16 bits. And such 16-bit DACs are readily available. The fact that real 24-bit DACs can't meet all of these same requirements is, in my opinion, not a reason to relax the requirements for saying a DAC has true 24-bit performance.

I mean, would you call a 16-bit DAC that has a poor 80 dB SNR or dynamic range, but linearity down to the 100 dB level, a DAC with true 16-bit performance? A good 14-bit DAC could outperform it in every sense, and would still be only 14-bit.

BTW, I did some empirical tests with SpectraLab, and it turned out that, with an ideal DAC, using flat dither, a 65K-point FFT and some seconds of averaging, it's possible to resolve signals up to 6 bits beyond the actual bit depth used. With a 16-bit system, I could resolve signals whose level is just over -135 dB (the 22nd bit level). With a 24-bit system, signals over -184 dB (the 30th bit level). As has been said many times, ideally, using an infinite FFT length (infinitely narrowband analysis), infinitely small signals could be resolved at any bit depth.

A quick & dirty law that I figured out from these measurements would be:

R = -10 + 6.02 * nbits + 10 * log10(NPointFFT)

where R is the maximum resolution achievable, i.e. the limit below which signals can't be resolved (the limiting narrowband noise floor). I'm sure there is a more accurate mathematical law behind this; it's just an empirical fit that seems to work well for the range and kind of measurements I performed.
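Plugging the two cases above into that empirical fit checks it against the reported numbers (assuming the log is base 10, which is what makes the figures come out right):

```python
import math

def resolution_limit_db(nbits, fft_points):
    """KikeG's empirical narrowband resolution limit (an empirical fit,
    not a derived law): R = -10 + 6.02*nbits + 10*log10(NPointFFT)."""
    return -10.0 + 6.02 * nbits + 10.0 * math.log10(fft_points)

# 16-bit, 65536-point FFT: ~134.5 dB, matching the observed ~ -135 dB
# 24-bit, 65536-point FFT: ~182.6 dB, close to the observed ~ -184 dB
```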


Reply #84
Um, going off topic, but I think it's better than starting a new thread:

I just got some specs for the ADC and DAC of a processor I'm working on for a project:

Analog Input/Output

Analog Input:
24-bit quantization, 96 kHz sampling frequency
SNR: 90 dB
Dynamic range: 90 dB
Harmonic distortion: -80 dB

Analog Output:
24-bit quantization, 96 kHz sampling frequency
SNR: 100 dB
Dynamic range: 90 dB
Harmonic distortion: -88 dB

Headphone amplifier output power: 40 mW

Digital Input/Output
S/PDIF input and output
96 kHz sampling frequency

USB Interface
USB 1.1 standard
Standard USB Audio interface

The end product will be a DSP board (to be integrated into amplifiers) with an expected unit price of ~$100. Do you think the specs (SNR, dynamic range, THD) are good? Bad? Since we are on the topic of these 3 specs here...  Thanks ^^"


Reply #85
It's not bad, but it's not spectacular either. Output performance is a little below the 24/96 performance of cards such as the Audiophile or Revolution. Input performance is somewhat worse compared with those cards.


Reply #86
I think the target for this card is to be integrated into standalone hi-fi components...


Reply #87
Quote
BTW, I did some empirical tests with SpectraLab, and it turned out that, with an ideal DAC, using flat dither, a 65K-point FFT and some seconds of averaging, it's possible to resolve signals up to 6 bits beyond the actual bit depth used. With a 16-bit system, I could resolve signals whose level is just over -135 dB (the 22nd bit level). With a 24-bit system, signals over -184 dB (the 30th bit level). As has been said many times, ideally, using an infinite FFT length (infinitely narrowband analysis), infinitely small signals could be resolved at any bit depth.

Sorry to drag this thread up - I missed your reply KikeG.

What you say is true, but I think you're suggesting it makes the whole thing bogus, whereas it doesn't. Of course an "ideal" DAC will give infinite resolution (given an infinitely narrowband analysis). That's the reason that measuring the actual resolution of a real DAC is a useful indicator.

However, you're also right that it seems silly to do this in the presence of comparatively large amounts of noise.

Of course, we can quote the noise figure and the linearity figure, but I think it would be useful to know which is the limiting factor...

I'd suggest a useful approach would be to introduce some psychoacoustics. (For once in this thread, we have some psychoacoustic knowledge which we can apply!).

Here is my idea: perform an analysis using critical-band filters (i.e. filters having the same selectivity as those in the human ear). These are quite wide, so they will give much poorer frequency resolution than a stupidly long FFT. However, the output can still be averaged to remove noise, because the human ear seems to do this. You would need a limit on the amplitude accuracy of each spectral bin, but this could nominally be 1 dB, subject to further tuning. This means any signal which is less than 1 dB above the noise in this analysis is judged to be lost in the noise.
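A rough sketch of that criterion (the critical-bandwidth formula is Zwicker's published approximation; the 1 dB margin and the flat noise density are illustrative assumptions, not from the thread):

```python
import math

def critical_bandwidth_hz(f_hz):
    """Zwicker's approximation of critical bandwidth at centre frequency f_hz."""
    return 25 + 75 * (1 + 1.4 * (f_hz / 1000.0) ** 2) ** 0.69

def tone_detectable(tone_db, noise_density_db_per_hz, f_hz, margin_db=1.0):
    """Judge a tone audible if it exceeds, by margin_db, the noise power
    integrated over the critical band centred on it (flat noise assumed)."""
    band = critical_bandwidth_hz(f_hz)
    band_noise_db = noise_density_db_per_hz + 10 * math.log10(band)
    return tone_db >= band_noise_db + margin_db
```

For example, with a noise density of -160 dB/Hz, the critical band around 1 kHz (~160 Hz wide) collects about -138 dB of noise, so a -120 dB tone passes the 1 dB criterion while a -140 dB tone does not.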


It should be possible to perform a "linearity" analysis of a DAC using this method. Quite simply, any noise will put a limit on this "linearity" measurement, because it will swamp the signal. So you could say that, at 1kHz, the DAC was linear to 27-bits, the noise was at the 20th bit level, and a perceptual analysis revealed the DAC was useful for human listeners up to the equivalent of 23-bits (these are made up figures).


There's one obvious flaw in this idea: the human ear has a much higher noise floor than some of these DACs (depending on the replay level), so this kind of analysis could be perceptually meaningless.


At the end of the day, I can't argue with your assertion that the ideal 24-bit DAC does not (and will not!) exist. However, the industry needs a useful way of talking about them (and expressing performance in a single, impressive number!) - maybe this perceptual analysis is the fairest and most useful approach.

Cheers,
David.


Reply #88
If I am right, the first 20-bit recordings appeared in 1987; Sting's Nothing Like The Sun was one of those recordings.
I have found no reference on the net, but I remember it.

Wombat