
Precision in the decoder

Hi,

Is there any advantage to using 64-bit precision in the decoder? Is there a measurable difference between 64-bit and 32-bit output? Thanks for any information or pointers.

Precision in the decoder

Reply #1
Quote
Is there a measurable difference between 64-bit and 32-bit output?

It is probably measurable, but hardly detectable by your ears (or anyone else's).

Precision in the decoder

Reply #2
No consumer-grade DAC can possibly make those "differences" apparent in the output signal. Whether 32-bit vs 64-bit could even be ABXed is another matter: 32-bit is already far overkill (no consumer-grade DAC even makes full use of 24-bit dynamic range), and people generally can't ABX 16-bit vs 24-bit.
Microsoft Windows: We can't script here, this is bat country.

Precision in the decoder

Reply #3
I'm sorry I wasn't clear. By 64-bit I meant the number of bits used to represent real numbers. There is a 32-bit (float) and a 64-bit (double) real number format. I wanted to know if using double real numbers in the internal calculations of the decoder made a difference in the quality of the decoded file.
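
To put a number on that question, here is a minimal sketch (my own toy example with made-up coefficients and samples, not code from any real decoder) that runs the same 1024-tap accumulation with a float accumulator and a double accumulator, and reports how far apart the two results are in 16-bit output steps:

```c
/* Minimal sketch (toy example, not real decoder code): run the same
 * 1024-tap accumulation with a float accumulator and a double accumulator
 * and report how far apart the results are, in 16-bit steps (1/32768). */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define TAPS 1024

int main(void)
{
    float  fsum = 0.0f;   /* 32-bit accumulation */
    double dsum = 0.0;    /* 64-bit accumulation */

    srand(1);
    for (int i = 0; i < TAPS; i++) {
        float c = (float)rand() / RAND_MAX - 0.5f;   /* made-up coefficient */
        float x = (float)rand() / RAND_MAX - 0.5f;   /* made-up sample */
        fsum += c * x;
        dsum += (double)c * (double)x;
    }

    printf("float vs double difference: %g of a 16-bit step\n",
           fabs(dsum - (double)fsum) * 32768.0);
    return 0;
}
```

The exact figure depends on the data, but on this kind of toy input it typically comes out as a small fraction of one 16-bit step, which is the point behind the question.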

Precision in the decoder

Reply #4
I don't think it is necessary to use 64-bit precision for 16-bit audio; 32 bits is enough. 64 bits would also add computational overhead.

In the end, you have to verify your fixed-point decoder against a standard floating-point decoder and determine the margin of error. According to some documentation, a maximum error of +/- 1 bit is achievable.
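
A sketch of that verification step might look like the following (the decoders themselves are left out; the function name and sample buffers are placeholders of my own, only the error measurement is shown):

```c
/* Sketch of the verification step: compare the fixed-point decoder's output
 * against a floating-point reference decode of the same frame. */
#include <stdio.h>
#include <stdlib.h>

/* largest per-sample difference, in LSBs, between two 16-bit PCM buffers */
static int max_error_lsb(const short *ref, const short *test, size_t n)
{
    int max_err = 0;
    for (size_t i = 0; i < n; i++) {
        int err = abs((int)ref[i] - (int)test[i]);
        if (err > max_err)
            max_err = err;
    }
    return max_err;
}

int main(void)
{
    /* in practice these would come from the floating-point reference decoder
     * and from the fixed-point decoder, run on the same frame */
    short ref_out[4]   = { 100, -200, 32767, -32768 };
    short fixed_out[4] = { 101, -200, 32766, -32768 };

    printf("max error: %d LSB\n", max_error_lsb(ref_out, fixed_out, 4));  /* 1 */
    return 0;
}
```

The goal is simply that the reported maximum error stays within +/- 1 LSB over a large test set.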

Precision in the decoder

Reply #5
Quote
I don't think it is necessary to use 64-bit precision for 16-bit audio; 32 bits is enough.

Agreed; there is little to be gained under normal circumstances when the only purpose is decoding, and even with additional processing there is little or nothing to be gained.

Quote
64 bits would also add computational overhead.

Yes and no; in practice the additional effort is barely noticeable. Most CPUs can compute float64 at roughly the same cost as float32.
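
A rough way to check that on a given machine (my own sketch, nothing decoder-specific; buffer size and method are arbitrary) is to time the same accumulation once in float and once in double:

```c
/* Rough timing sketch: sum a buffer once as float and once as double and
 * compare the wall-clock cost.  The volatile accumulators keep the compiler
 * from optimising the loops away. */
#include <stdio.h>
#include <time.h>

#define N (1 << 22)

static float  fbuf[N];
static double dbuf[N];

int main(void)
{
    for (int i = 0; i < N; i++) {
        fbuf[i] = (float)i * 1e-7f;
        dbuf[i] = (double)i * 1e-7;
    }

    clock_t t0 = clock();
    volatile float fsum = 0.0f;
    for (int i = 0; i < N; i++) fsum += fbuf[i];

    clock_t t1 = clock();
    volatile double dsum = 0.0;
    for (int i = 0; i < N; i++) dsum += dbuf[i];

    clock_t t2 = clock();
    printf("float32: %.3f s   float64: %.3f s\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC);
    return 0;
}
```

On most desktop CPUs the two loops come out very close; double mainly costs more memory bandwidth and cache space, not more arithmetic time.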

Quote
In the end, you have to verify your fixed-point decoder against a standard floating-point decoder and determine the margin of error. According to some documentation, a maximum error of +/- 1 bit is achievable.

That's correct; differences of no more than 1 bit are to be expected.

Precision in the decoder

Reply #6
It's for a floating-point decoder :) The problem is memory. I'd like to know if the result is worth the extra memory usage (9k for float, 18k for double).


Precision in the decoder

Reply #8
Sorry, I was typing my answer to wkwai when you replied. I think I will use 32-bit float. Does +/- 1 bit make an audible difference?


Precision in the decoder

Reply #10
I will use 32-bit float then. Thanks a lot for the answers.

Precision in the decoder

Reply #11
I was referring to 32/64-bit fixed point.

Precision in the decoder

Reply #12
Quote
I will use 32-bit float then. Thanks a lot for the answers.


For a portable player, power consumption is a major issue. I am not aware of any processor on the market that supports floating-point operations while consuming power in the low-milliwatt range, low enough to allow continuous playback for 20-40 hours on a single AA battery.

Most processors that have a floating-point unit gobble too much power.

So a fixed-point decoder is the ideal solution, as far as I know.
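
For illustration only (my own toy example, not taken from any particular decoder): the core operation of a fixed-point decoder is an integer multiply-accumulate, for example on Q15 data, which needs no FPU at all:

```c
/* Toy Q15 fixed-point multiply: 16-bit values scaled by 2^15, with a 32-bit
 * intermediate and rounding back to Q15.  This is the kind of operation a
 * fixed-point decoder runs on integer-only, low-power CPUs. */
#include <stdint.h>
#include <stdio.h>

static int16_t q15_mul(int16_t a, int16_t b)
{
    int32_t p = (int32_t)a * (int32_t)b;        /* Q30 intermediate */
    return (int16_t)((p + (1 << 14)) >> 15);    /* round and scale back to Q15 */
}

int main(void)
{
    int16_t half    = 1 << 14;   /* 0.5 in Q15 */
    int16_t quarter = 1 << 13;   /* 0.25 in Q15 */
    printf("0.5 * 0.25 = %d (expect %d)\n", q15_mul(half, quarter), 1 << 12);
    return 0;
}
```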

Precision in the decoder

Reply #13
Quote
Sorry, I was typing my answer to wkwai when you replied. I think I will use 32-bit float. Does +/- 1 bit make an audible difference?

I read somewhere on HA that 1 bit of error in 16/44.1 is less than the noise floor of the recording studio.
Sven Bent - Denmark

Precision in the decoder

Reply #14
According to some literature, the dynamic range of the human ear is more than 100 dB in its most sensitive region (around 4 kHz). So the +/- 1 bit error may or may not be audible depending on the source itself; a 16-bit source only has a maximum range of about 96 dB.

That is why there is the 24-bit PCM format (144 dB, which I think is overkill).
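
For reference, the round numbers quoted here follow from the usual rule of thumb of roughly 6 dB per bit; the more precise figure for a full-scale sine is about 6.02·N + 1.76 dB. A quick check:

```c
/* Quick check of the bit-depth figures above using the standard rule of
 * thumb: SNR of a full-scale sine over N-bit quantisation noise is about
 * 6.02*N + 1.76 dB (the round 96/120/144 dB figures drop the +1.76 term). */
#include <stdio.h>

int main(void)
{
    int bits[] = { 16, 20, 24 };
    for (int i = 0; i < 3; i++)
        printf("%2d bits: ~%.1f dB\n", bits[i], 6.02 * bits[i] + 1.76);
    return 0;
}
/* prints roughly: 16 bits: ~98.1 dB, 20 bits: ~122.2 dB, 24 bits: ~146.2 dB */
```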

Precision in the decoder

Reply #15
24 bits in a studio for recording is perfect; you can't have too much headroom. No 24-bit ADC will ever actually give you 144 dB, by the way; most that I know of are in the 110-120 dB range. But for a DAC playing back finished recordings for consumer purposes, 24 bits is maybe overkill, while 16 bits seems a bit small in some cases. Maybe we need an in-between standard like 20 bits.

Precision in the decoder

Reply #16
Quote
Quote
Sorry, I was typing my answer to wkwai when you replied. I think I will use 32-bit float. Does +/- 1 bit make an audible difference?

I read somewhere on HA that 1 bit of error in 16/44.1 is less than the noise floor of the recording studio.


Thinking of it again, in most practical cases this +/- 1 bit noise will fall below the threshold of hearing, except in the regions where the ear is most sensitive (around 4 kHz). Even there, masking effects from neighbouring tones and noise will most likely mask it out as well.

Except for very special cases, such as test tones in that region, this +/- 1 bit noise will most likely not be noticeable.

Precision in the decoder

Reply #17
Quote
24 bits in a studio for recording is perfect; you can't have too much headroom. No 24-bit ADC will ever actually give you 144 dB, by the way; most that I know of are in the 110-120 dB range. But for a DAC playing back finished recordings for consumer purposes, 24 bits is maybe overkill, while 16 bits seems a bit small in some cases. Maybe we need an in-between standard like 20 bits.

I agree that 20 bits seems ideal; it gives a dynamic range of about 120 dB. I wonder why 24 bits was chosen over 20 bits. Economic reasons? Electronics reasons?

Precision in the decoder

Reply #18
Quote
I agree that 20 bits seems ideal; it gives a dynamic range of about 120 dB. I wonder why 24 bits was chosen over 20 bits. Economic reasons? Electronics reasons?

20/24 bits is just a matter of the digital interface. Producing a true 24-bit ADC/DAC is impossible at room temperature, since the thermal noise far exceeds -144 dB. In practice about 120 dB is the best you can get, which equals 20 bits. The signal-to-noise ratio is what generally counts, and the number of bits is usually irrelevant: a 20/24-bit ADC/DAC with an SNR worse than 96 dB is no better than an equivalent 16-bit ADC/DAC.

For listening, 16 bits (96 dB SNR) is quite enough. In a studio situation there are real advantages to using more bits, since you are likely to encounter unknown signals whose exact levels you don't know, so it's good to have plenty of headroom. If you want 20 dB of headroom (for the occasional peaks which you may or may not encounter), then you need about 20 bits to keep a practical 96 dB SNR.

On the DAC side, with more than 16 bits you can adjust the output volume digitally without touching the physical volume control all the time, which is beneficial for mixing etc.
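
The headroom arithmetic can be written out explicitly (a back-of-the-envelope sketch assuming roughly 6.02 dB per bit and a 96 dB target under the peaks):

```c
/* Back-of-the-envelope headroom budget: bits needed are roughly
 * (headroom + target SNR) / 6.02 dB per bit. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double snr_db      = 96.0;   /* SNR we still want under the loudest expected level */
    double headroom_db = 20.0;   /* reserve for unexpected peaks */
    double bits = (snr_db + headroom_db) / 6.02;
    printf("need about %.1f bits -> round up to %d\n", bits, (int)ceil(bits));
    return 0;
}
/* prints: need about 19.3 bits -> round up to 20 */
```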