Topic: Does AAC encoding quality depend on absolute sound level?

Does AAC encoding quality depend on absolute sound level?

Hello!

I have a very specific question about the AAC codec (it would of course also be interesting for other lossy codecs):
does the encoding/decoding quality depend on the absolute input audio level?

For example:
Suppose I have a recording with a loudness of -35 dB RMS (or LUFS) and the same recording at -20 dB RMS.
I then encode both, decode them again, and normalize both to a standard value (let's say -23 dB RMS).
Does the louder one then have better quality than the quieter one?

I think both should be equivalent, but I don't know the AAC internals ...
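
To make it concrete, this is roughly the test I have in mind, sketched in Python around ffmpeg (the 15 dB offset, the file names, and ffmpeg's built-in AAC encoder are just placeholders for illustration, not what we use in production):

Code:
# Sketch of the experiment: encode the same material at two input levels,
# decode, undo the level offset, and compare the two results (by ear or by
# subtracting them). Assumes ffmpeg is on the PATH; "input.wav" is a placeholder.
import subprocess

def run(cmd):
    subprocess.run(cmd, check=True)

# "loud" = the file as-is, "quiet" = the same file 15 dB lower
for label, gain_db in [("loud", 0), ("quiet", -15)]:
    run(["ffmpeg", "-y", "-i", "input.wav",
         "-filter:a", f"volume={gain_db}dB", f"{label}.wav"])

    # encode and decode again (ffmpeg's built-in AAC encoder, ~150 kbps)
    run(["ffmpeg", "-y", "-i", f"{label}.wav",
         "-c:a", "aac", "-b:a", "150k", f"{label}.m4a"])
    run(["ffmpeg", "-y", "-i", f"{label}.m4a", f"{label}_decoded.wav"])

    # bring both decodes back to the same level before comparing
    run(["ffmpeg", "-y", "-i", f"{label}_decoded.wav",
         "-filter:a", f"volume={-gain_db}dB", f"{label}_aligned.wav"])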

Thanks for any hints,
Best regards,
Georg

Does AAC encoding quality depend on absolute sound level?

Reply #1
There would be a specific answer for each encoder (e.g. QAAC, Nero aacenc, FhG in Winamp, faac, ffmpeg).

AAC encoders operate on normalized values in floating point, so unless they have a fixed Absolute Threshold of Hearing level below which they cut off, there should be little repeatable difference in favour of one signal level or the other; if there were a difference, it would probably favour the louder one. There WILL be some differences between the two encodings, just as there will be differences if you add a few samples of silence to the start, causing the signal to line up differently against the transform windows used internally.
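
To illustrate the "normalized floating point" point, here is a toy sketch in Python/NumPy (purely illustrative, not code from any real encoder): once 16-bit PCM is mapped to floats, a level change is just a uniform scale factor on the samples the encoder sees, apart from a little re-quantization error.

Code:
# Toy illustration (not real encoder code): 16-bit PCM becomes floats in
# [-1.0, 1.0), so attenuating the input just multiplies every sample the
# encoder sees by the same constant; the relative shape is unchanged.
import numpy as np

rng = np.random.default_rng(0)
pcm = (rng.standard_normal(1024) * 3000).astype(np.int16)    # a fake PCM block

def to_float(samples):
    return samples.astype(np.float32) / 32768.0               # the normalization step

scale = 10 ** (-15 / 20)                                       # -15 dB as a factor
loud = to_float(pcm)
quiet = to_float((pcm * scale).astype(np.int16))               # attenuated, re-quantized

# Apart from the re-quantization error (~1/32768), 'quiet' is just 'loud' * scale.
print(np.max(np.abs(quiet - loud * scale)))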
Dynamic – the artist formerly known as DickD

Does AAC encoding quality depend on absolute sound level?

Reply #2
AAC encoders operate on normalized values in floating point, so unless they have a fixed Absolute Threshold of Hearing level below which they cut off, there should be little repeatable difference in favour of one signal level or the other; if there were a difference, it would probably favour the louder one.


Thanks for your answer!
That's exactly the question: whether one should make the signal louder before encoding, or whether it doesn't matter.
We have to encode the signal and send it over the network (then decode it again), and we don't want to manipulate the dynamic range if it's not necessary ...

Best regards,
Georg

 

Does AAC encoding quality depend on absolute sound level?

Reply #3
It also depends on the encoding strategy (VBR/CBR).
VBR can be more affected by the input sound level: quieter input -> lower bitrate.
With the QuickTime encoder, if you lower the input level by 50 dB or so, the resulting bitrate will be dramatically lower in TVBR mode, and you will get a poor result.
You can use CBR to force the encoder to allocate bits to that quiet input.
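
If you want to see this for yourself, here is a rough sketch in Python around the qaac front end to the QuickTime/CoreAudio encoder and ffmpeg (the TVBR/CBR settings and file names are placeholders; check the --tvbr/--cbr options against your qaac version):

Code:
# Sketch: encode the same file at its original level and at -50 dB, once in
# TVBR mode and once in CBR mode, then compare the resulting file sizes.
# Assumes the qaac command-line encoder and ffmpeg are installed.
import os
import subprocess

def run(cmd):
    subprocess.run(cmd, check=True)

# make a heavily attenuated copy of the test file
run(["ffmpeg", "-y", "-i", "test.wav",
     "-filter:a", "volume=-50dB", "test_quiet.wav"])

for wav in ["test.wav", "test_quiet.wav"]:
    base = os.path.splitext(wav)[0]
    run(["qaac", "--tvbr", "91", wav, "-o", f"{base}_tvbr.m4a"])   # true VBR
    run(["qaac", "--cbr", "160", wav, "-o", f"{base}_cbr.m4a"])    # constant bitrate
    for mode in ("tvbr", "cbr"):
        size_kib = os.path.getsize(f"{base}_{mode}.m4a") / 1024
        print(f"{base} ({mode}): {size_kib:.0f} KiB")

# Expected: the -50 dB file comes out much smaller in TVBR mode,
# while the CBR encodes stay roughly the same size.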

Does AAC encoding quality depend on absolute sound level?

Reply #4
VBR can be more affected by the input sound level: quieter input -> lower bitrate.
...
You can use CBR to force the encoder to allocate bits to that quiet input.


Hm, that makes sense, especially as we want to amplify quiet segments in an audio file (after encoding -> network -> decoding).

Thanks a lot!

Does AAC encoding quality depend on absolute sound level?

Reply #5
Which AAC encoder are you using, Georg, and at which bit-rate?

Chris
If I don't reply to your reply, it means I agree with you.

Does AAC encoding quality depend on absolute sound level?

Reply #6
Hello Chris!

Which AAC encoder are you using, Georg, and at which bit-rate?


This is done on an iPhone, so we are using Apple's AAC encoder (or whatever is used on an iPhone). At the moment the bitrate is 150 kbps (but that's just the tradeoff between network traffic and audio quality).

Have a nice weekend!
Best regards,
Georg

Does AAC encoding quality depend on absolute sound level?

Reply #7
Hello,

I don't know the internals of the Apple encoder, but assuming you are talking about 150 kbps stereo, quality should not be an issue, so I suggest you just feed the encoder whatever PCM input you have and do the loudness normalization after decoding. You could even use a VBR coder at that bit-rate to make sure you get somewhat consistent quality. Apple's Constrained VBR (CVBR) should do just fine.
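
For completeness, a rough sketch of that pipeline on the desktop side, using the qaac front end for Apple's CVBR mode and ffmpeg's loudnorm filter (the 150 kbps figure is from your post; everything else here is just a placeholder):

Code:
# Sketch of "encode as-is, normalize after decoding":
#   sender:   PCM -> AAC (constrained VBR around 150 kbps) -> network
#   receiver: AAC -> PCM -> loudness normalization
# Assumes qaac and ffmpeg are available; file names are placeholders.
import subprocess

def run(cmd):
    subprocess.run(cmd, check=True)

# sender side: encode the untouched PCM with constrained VBR
run(["qaac", "--cvbr", "150", "capture.wav", "-o", "payload.m4a"])

# ... payload.m4a goes over the network ...

# receiver side: decode first, then normalize loudness (here to -23 LUFS)
run(["ffmpeg", "-y", "-i", "payload.m4a", "decoded.wav"])
run(["ffmpeg", "-y", "-i", "decoded.wav",
     "-filter:a", "loudnorm=I=-23:TP=-1.5", "normalized.wav"])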

Chris
If I don't reply to your reply, it means I agree with you.