Topic: Normalization before encoding

Normalization before encoding

This post could go in any lossy codec thread; Vorbis isn't really the point.
I also don't want to talk about VorbisGain or MP3Gain, that's not today's topic.

Hypothesis 1: take a 16-bit wave which is, from the start, perfectly scaled. I mean that the signal reaches the lower and upper boundaries of a 16-bit signed integer at least once in the track, without clipping.
The encoder gets the most sharply defined signal possible as input, and so gives its best during compression*... true or false?

Hypothesis 2: take a 16-bit wave which could be normalized by a factor of 175/172 (each sample is divided by 172 and multiplied by 175 in floating point, then converted back to integer).
Does this rescaling, followed by integer rounding, distort the signal?... true, false, or not noticeably?
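
To make hypothesis 2 concrete, here is a quick Python sketch of the operation I mean (the sine signal and the 440 Hz tone are just my toy stand-in, not anything from a real recording or encoder):

Code:
import numpy as np

# Hypothetical 16-bit track that peaks near 172/175 of full scale
# (a plain sine, just a toy stand-in for a real recording).
fs = 44100
t = np.arange(fs) / fs
x = np.round(32767 * (172 / 175) * np.sin(2 * np.pi * 440 * t)).astype(np.int16)

# The normalization from hypothesis 2: divide by 172, multiply by 175
# in floating point, then round back to 16-bit integers.
y = np.clip(np.round(x / 172.0 * 175.0), -32768, 32767).astype(np.int16)

# Worst-case deviation from the exact (unrounded) float result:
err = np.abs(y - x / 172.0 * 175.0)
print("max rounding error:", err.max(), "LSB")  # at most 0.5 LSB per sample

So the damage is at most half a quantization step per sample; the question is whether that is ever audible.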

So if I normalize, I can use the full range the encoder offers, but I could also alter the sound...
What do you think?

(And I'm not asking for your opinion on my awful English; for that, please complain to Miss *biiip*, English teacher at *censured* high school )



* I have read some dumb posts where smart people noticed that when they halved the volume before encoding they got a better compression ratio... but they forgot that the quality was also halved. Almost an 8-bit wave ^^

Normalization before encoding

Reply #1
Quote
but they forgot that the quality was also halved. Almost an 8-bit wave


How do you conclude that? Lower volume doesn't mean lower quality.

Normalization before encoding

Reply #2
Quote
This post could go in any lossy codec thread; Vorbis isn't really the point.
I also don't want to talk about VorbisGain or MP3Gain, that's not today's topic.

Hypothesis 1: take a 16-bit wave which is, from the start, perfectly scaled. I mean that the signal reaches the lower and upper boundaries of a 16-bit signed integer at least once in the track, without clipping.
The encoder gets the most sharply defined signal possible as input, and so gives its best during compression*... true or false?

Hypothesis 2: take a 16-bit wave which could be normalized by a factor of 175/172 (each sample is divided by 172 and multiplied by 175 in floating point, then converted back to integer).
Does this rescaling, followed by integer rounding, distort the signal?... true, false, or not noticeably?


The first case would be ideal, provided that no normalization or volume changes happened _beforehand_ to get such a 'perfect' signal.

I'm not sure what you mean by the second one. If I assume that you'd have to do the operation you describe to get a wave like (1), then yes, there will be a minimal and inaudible quality loss. But there is no point in doing this 'just' to get a signal as in (1). You can't 'add' information that isn't there by normalizing the wave.

Also, both waves could _still_ clip after encoding, which is why normalizing at all is pointless.

Quote
* I have read some dumb posts where smart people noticed that when they halved the volume before encoding they got a better compression ratio... but they forgot that the quality was also halved. Almost an 8-bit wave ^^


The "dumb" one here is you. Reducing the volume by 2 will lose 1 bit of precision, leaving you with a 15-bit wave. The loss of quality is inaudible under normal circumstances.
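
One way to see it: halving every sample is just a one-bit right shift, so 15 of the 16 bits survive. A quick sketch with toy numbers (nothing here comes from any real encoder):

Code:
import numpy as np

# A handful of hypothetical 16-bit sample values.
x = np.array([-32768, -1001, -1, 0, 1, 1001, 32767], dtype=np.int16)

# Halving the volume squeezes everything into [-16384, 16384]:
# effectively a 15-bit range, not an 8-bit one.
half = np.round(x / 2.0).astype(np.int16)
print(half)

# In general, dividing the volume by k costs log2(k) bits of precision:
for k in (2, 4, 256):
    print(f"divide by {k}: lose {np.log2(k):g} bit(s)")
# You would need to divide by 256 (= 2**8) to be left with an 8-bit wave.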

Normalization before encoding

Reply #3
Quote
The "dumb" one here is you. Reducing the volume by 2 will lose 1 bit of precision


Haha, it's true, I'll go back to my calculator and find the LOG key... sorry about that.
Hopefully I studied math better than English.
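
(Calculator in hand: bits lost = log2(volume ratio), so dividing by 2 costs log2(2) = 1 bit, and I would have had to divide by 2^8 = 256 to really get my "almost 8-bit wave".)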

/me slaps himself with a bunch of thorns