
why does normalizing a file make compression less efficient?

Hi everyone,
This might be a totally stupid question, but my DSP knowledge is rather limited so I'm a bit lost...
Simply put, let's say I have a file that was recorded at a quiet level, and I convert it to FLAC. If I then boost the volume/normalize that file and compress it again, the file size is quite a bit bigger, and for large boosts the change seems quite dramatic. I guess this happens because the noise floor, and quantization noise in particular, gets stretched into the upper bits and not rounded nicely, so in effect you're wasting precision, but that's where I get lost trying to figure out whether that actually makes sense. So I figured I'd just ask about it here. Is there some kind of processing that can raise the volume of a file without negatively affecting compression ratio on lossless codecs?

BTW, I've read about ReplayGain, though I haven't tried it. I'm trying to stay away from it for compatibility reasons; I use my FLAC files for a wide variety of purposes.

Thanks for your responses!

Re: why does normalizing a file make compression less efficient?

Reply #1
Is there some kind of processing that can raise the volume of a file without negatively affecting compression ratio on lossless codecs?
Multiply the amplitude by 2 ( = +6 dB volume increase), 4 ( = +12 dB), 8, etc.


Re: why does normalizing a file make compression less efficient?

Reply #2
Raising the volume of a quiet track will make lossless compression worse, since a lot of bits that were all zeros become non-zero and then have to be stored in the resulting file.

You could use ReplayGain, which stores the files in their original form and adjusts the volume after decoding them. This has the advantage of being completely lossless, but it requires your software to support ReplayGain.

Re: why does normalizing a file make compression less efficient?

Reply #3
Is there some kind of processing that can raise the volume of a file without negatively affecting compression ratio on lossless codecs?
Multiply the amplitude by 2 ( = +6 dB volume increase), 4 ( = +12 dB), 8, etc.
It's not +6, it's +6.020599913279624 (nearly so; it's not a finite decimal, it's the result of 20*log10(2)). Same with +12.

It really does matter to do this accurately, otherwise it may not be reversible (rounding errors will add distortion) and the result may be less compressible.
(Ideally you should also verify that the inverse transform really gives back the original data, because depending on the software some unwanted rounding may occur, e.g. when you type this fractional number of decibels and it gets rounded to too few fractional digits before use. GUI applications are sometimes annoying about this.)

Re: why does normalizing a file make compression less efficient?

Reply #4
Quote
Multiply the amplitude by 2 ( = +6 dB volume increase), 4 ( = +12 dB), 8, etc.

Why, and in what manner, do you figure that changing the amplitude by some factor is different from normalizing?
I realize that the final amplitude may be different, depending on the factor used, but isn't the process identical? (Yes, it is.)

Quote
It's not +6, it's +6.020599913279624 (nearly so; it's not a finite decimal, it's the result of 20*log10(2)). Same with +12.

Do you have some evidence that your variation could produce a perceptible difference?

Re: why does normalizing a file make compression less efficient?

Reply #5
@AndyH-ha : I assume he is specifically talking about FLAC, which can detect when sample values have been bit-shifted (i.e. when the lowest bits are all zero) and compress them almost as if they hadn't been bit-shifted.

Re: why does normalizing a file make compression less efficient?

Reply #6
Do you have some evidence that your variation could produce a perceptible difference?
If the sample is quiet enough and is still quiet enough after amplification, yes, why not.
But since we're talking about file sizes, it doesn't even matter whether it's perceptible: there's obviously a big difference in file size after FLAC encoding, so the difference is significant.
You can try it yourself easily with this sample:

# bigger file
ffmpeg -i sample.flac -af volume=6dB sample_6db.flac
# smaller file if using exact integer gain
ffmpeg -i sample.flac -af volume=2 sample_2x.flac

...and if you replace 6dB with 6.020599913279624dB, you get exactly the same output as with the 2x multiplication (as expected). I'm not sure how many decimal digits are enough to produce the same result with 16-bit audio, and honestly I'm not that interested; I just wanted to show that 6 dB is indeed far enough from a 2x multiplication, even though it's a common misconception that they're the same.


Re: why does normalizing a file make compression less efficient?

Reply #7
@AndyH-ha : I assume he is specifically talking about FLAC, which can detect when sample values have been bit-shifted (i.e. when the lowest bits are all zero) and compress them almost as if they hadn't been bit-shifted.
Of course, that's exactly what the OP was asking about.

Re: why does normalizing a file make compression less efficient?

Reply #8
Lossless compression uses predictors, which can fail if the relationship between samples changes in a way they weren't designed for.