How do I calculate the number of bits required to ensure that the quantization error for a tone is within the minimum masking threshold, if I know the power of the tone and the masking threshold?
Hi,
Seems simple.
I would use: ceil(log2(tone_power/masking_threshold))
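As a quick sanity check, here is a minimal sketch of that formula in Python. The function name and the assumption that both inputs are linear power values (not dB) are mine, not from the thread:

```python
import math

def bits_for_tone(tone_power, masking_threshold):
    """Minimum whole number of bits so the quantization noise power
    stays at or below the masking threshold.

    Both arguments are linear power values (not dB).
    """
    return math.ceil(math.log2(tone_power / masking_threshold))

# A tone 60 dB above the mask corresponds to a power ratio of 1e6:
print(bits_for_tone(1e6, 1.0))  # 20, since log2(1e6) ~ 19.93
```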
Should masking_threshold be the minimum masking threshold in that segment?
Yes. This may depend on the codec, but in general, if you don't take the minimum value, the quantization noise may become perceptible.
Maybe you can give me some references where I can read more about it?
My apologies, I can't, because I won't be home for some time.
Edit: Since your question is pretty fundamental, you can probably get away with any good book on psychoacoustics (but I don't know which ones by heart).
The measure we are talking about here is called "perceptual entropy", and the basic paper where it was described is:
J. D. Johnston, "Estimation of perceptual entropy using noise-masking criteria," in Proc. ICASSP, 1988, pp. 2524--2527.
Basically, for a set of frequency lines, the number of bits required to code the bitstream can't be lower than:
PE = SUM over i (1 <= i <= numlines) of log2(Energy[i] / MaskThr[i])
(where MaskThr[i] is the masking threshold calculated by the psymodel)
PE is a theoretical minimum, and it can't be achieved in a real codec (you need some side bits to represent the coding information, etc.).
So, for a real coding system, the number of bits required to code a set of signals is most likely to be around:
BITS = p * PE + q
where p and q are codec-dependent parameters. For example, for MPEG-4 AAC:
BITS = 0.6 * PE + 24 * sqrt (PE)
This is not an exact measure, of course, since Huffman codebooks are not perfect, signal statistics may vary from frame to frame, etc.
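Those two formulas can be sketched in a few lines of Python. This is illustrative only: the function names are mine, the inputs are assumed to be linear power values, and clamping sub-mask lines to zero bits (rather than letting them contribute negative terms) is my assumption, not something stated in the thread:

```python
import math

def perceptual_entropy(energies, mask_thresholds):
    """Johnston-style perceptual-entropy lower bound, summed over
    spectral lines. Lines already below the mask contribute no bits
    (an assumption; the raw sum would give them negative terms)."""
    return sum(math.log2(e / m)
               for e, m in zip(energies, mask_thresholds)
               if e > m)

def aac_bit_estimate(pe):
    """The empirical MPEG-4 AAC bit-demand estimate quoted above;
    0.6 and 24 are the codec-dependent parameters p and q."""
    return 0.6 * pe + 24 * math.sqrt(pe)

pe = perceptual_entropy([4.0, 8.0, 0.5], [1.0, 1.0, 1.0])
print(pe)                    # 5.0 bits (2 + 3; the third line is sub-mask)
print(aac_bit_estimate(pe))  # rough real-codec bit demand for that PE
```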
So if I have one tone (TF) and I know the minimum masking threshold (MinT), I need
log2(TF / MinT) bits to quantize this tone?
Yep. Actually, this is the theoretical minimum, just like with ordinary entropy.
You also have to add the side data necessary to represent the compressed spectrum, like Huffman table indexes or the tables themselves. Also, each Huffman codebook has its own performance on different signal statistics, etc.
So, for a particular coding system (like MP3), the formula is a little bit more complicated.