Originally posted by krsna77 I was wondering, is there a quantified amount, or a formula, by which each codec tends to increase levels, and if so, what is that value for Ogg Vorbis (and MPC, while we're at it)?
Reason I ask - if I encode to Ogg, I would like to know the proper safe-but-sane normalization level geared for Ogg. If it's not easily quantifiable beforehand, is there at least a good method to detect and measure clipping in Ogg samples, so I can calculate the proper normalization amount myself and eliminate clipping on those tracks?
While I'm on it, will OGG / MPC ever have ReplayGain functionality, or something similar?
None of the codecs increase the volume. I explained this elsewhere on this board (don't ask me where, though). The problem is that sometimes the output of the decoder contains larger peaks (which is different from actual volume) than the original wave. This is an artifact of lossy digital audio processing: the encoder discards detail and the decoder reconstructs an approximation of the waveform, so the reconstructed signal can overshoot the original peaks even though the perceived loudness is unchanged.
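You can measure this yourself. Here's a minimal Python sketch, assuming 16-bit PCM WAV files and hypothetical file names - decode the lossy file back to WAV first (e.g. with oggdec for Vorbis), then compare peak sample values against the source:

```python
import struct
import wave

def peak(path):
    """Largest absolute sample value in a 16-bit PCM WAV (mono or stereo)."""
    with wave.open(path, "rb") as w:
        frames = w.readframes(w.getnframes())
    # Unpack little-endian signed 16-bit samples.
    samples = struct.unpack("<%dh" % (len(frames) // 2), frames)
    return max(abs(s) for s in samples)

# Hypothetical file names; a decoded peak slightly above the
# original's is exactly the artifact described above.
print("original peak: %5d / 32767" % peak("original.wav"))
print("decoded peak:  %5d / 32767" % peak("decoded.wav"))
```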
If there is no clipping in the original wav, there should be no audible clipping in the Ogg decoder output either. The Ogg decoder has built-in clipping prevention. Basically, you do not need to worry about renormalization to avoid clipping.
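For the curious, "clipping prevention" in a decoder amounts to a hard clamp at the float-to-integer conversion stage. This is just a sketch of the idea, not the actual libvorbis code:

```python
def float_to_int16(x):
    """Hard-clip a decoded float sample into the 16-bit PCM range.
    Overshooting peaks get pinned to full scale instead of wrapping
    around into loud garbage."""
    v = int(x * 32768.0)
    return max(-32768, min(32767, v))

print(float_to_int16(0.5))    # 16384: a normal sample passes through
print(float_to_int16(1.07))   # 32767: an overshooting peak is clamped
print(float_to_int16(-1.07))  # -32768
```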
Originally posted by krsna77 I have a tendency to believe you; however, I can think of one specific example (Alice in Chains - Dirt) that does clip terribly, with nasty distortion, in Ogg and MP3, until I normalize to ~0.84 (a value I found for MP3 - though I use it for Ogg as well, Ogg's threshold may in fact be higher). Then it sounds fine (of course, I think this album really is pushed to the very limits).
Know of any good apps that can count clipped samples, so I can visually compare with what my ears are telling me?...
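If you want to roll your own, the usual heuristic such tools use is counting runs of consecutive full-scale samples. A minimal Python sketch along those lines (file name hypothetical; samples are checked per channel since WAV data is interleaved):

```python
import struct
import wave

def count_clips(path, run_length=3):
    """Count runs of >= run_length consecutive full-scale samples
    in a 16-bit PCM WAV, a common heuristic for digital clipping.
    Returns one count per channel."""
    with wave.open(path, "rb") as w:
        channels = w.getnchannels()
        frames = w.readframes(w.getnframes())
    samples = struct.unpack("<%dh" % (len(frames) // 2), frames)
    clips = [0] * channels
    runs = [0] * channels
    for i, s in enumerate(samples):   # deinterleave by channel index
        ch = i % channels
        if abs(s) >= 32767:
            runs[ch] += 1
        else:
            if runs[ch] >= run_length:
                clips[ch] += 1
            runs[ch] = 0
    # Close out any run still open at end of file.
    return [c + (1 if r >= run_length else 0) for c, r in zip(clips, runs)]

print(count_clips("decoded.wav"))  # hypothetical file name
```

A single full-scale sample can be legitimate program material, which is why the heuristic only flags runs of several in a row.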