Observing the loss.
Reply #27 – 2003-07-14 16:27:51
> Unfortunately, this measurement is almost useless for audio quality assessment. It is useless because the measured value does not correlate with the perceived sound quality of the audio codec. In fact, the noise measurement gives no indication of the perceived noise level.

Absolutely. This issue seems clear with loss observation, and it has been pointed out in the thread before. But consider the set of all lossy audio codecs (not only psychoacoustic codecs but also ones like WavPack lossy, including those that exist only in theory and are as yet unknown or unimplemented). I'd contend the one that minimizes the noise energy would still do a very good job (like the Waves L2 type 2 dither algorithm). The amount of noise is therefore an objective criterion for the amount of loss, if we disregard the human ear model. Please refer to my analogy to dither-algorithm noise earlier in the thread; I think it's more or less the same issue.

I know HA is mainly a psychoacoustic-codec discussion board, and people get irked when theoretical approximations of quality beyond transparency levels are discussed. That's why the domain between lossless quality and transparent quality is not explored in the threads here. But I think observation of the loss file (volume, frequency-domain analysis, and other possible methods) would be very useful information at that point. It could give us an idea of how lossy codecs compare at the same bitrate beyond transparency limits, which becomes important in transcoding, DSP applications, archiving, and so on.

If the inaudible noise could somehow be removed from the measurement, the resulting quantity would match human perception more accurately, since it would reflect only what is audible. That's precisely what the noise-shaping algorithms are designed for at the moment: they shift the noise into the inaudible part of the spectrum at the expense of slightly increasing the total noise volume.
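To make the "objective criterion" concrete, here is a minimal sketch (my own illustration, not any tool mentioned in the thread) of measuring the loss file and its noise energy: subtract the decoded samples from the originals and compute the energy and RMS level of the difference.

```python
import math

def loss_stats(original, decoded):
    # Difference ("loss") signal between original and decoded samples;
    # both are assumed to be floats scaled to [-1.0, 1.0].
    diff = [a - b for a, b in zip(original, decoded)]
    energy = sum(d * d for d in diff)        # total noise energy
    rms = math.sqrt(energy / len(diff))      # RMS amplitude of the loss
    rms_db = 20 * math.log10(rms) if rms > 0 else float("-inf")
    return diff, energy, rms_db

# Toy check: a 440 Hz tone "decoded" with a constant 1e-4 offset
# yields a loss RMS of -80 dB relative to full scale.
orig = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(44100)]
dec = [x + 1e-4 for x in orig]
diff, energy, rms_db = loss_stats(orig, dec)
```

A real comparison would of course read two PCM files and probably also look at the spectrum of `diff`, but the energy number above is exactly the ear-model-free quantity I mean.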
Hence, couldn't adjusting for the human ear model be left to the noise-shaping phase? A lossy codec (again, not necessarily a psychoacoustic one) aimed at minimal noise volume, supplemented with noise shaping, should then do a good job. Am I missing something?

> Personally, I would be less worried by a difference file with audible music.

Umm, but theoretically, the more the loss is correlated with the original (i.e. the more it sounds like it), the more information you'd be losing from your original. Or at least that's how it seems to me, again disregarding the issues around the human ear model and trying to find an objective (not human-based) criterion for lossy-encoding quality.
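For what it's worth, the kind of noise shaping I mean can be sketched with a generic first-order error-feedback quantizer (a textbook construction, not the Waves shaper or any particular codec's): feeding the previous quantization error back into the next sample highpass-shapes the noise, pushing its energy toward high frequencies where it is less audible, even though the total noise energy rises slightly.

```python
def quantize_noise_shaped(samples, step):
    # First-order error-feedback quantizer: subtract the previous
    # quantization error from the next input before quantizing.
    # The noise spectrum is highpass-shaped (less energy at low
    # frequencies, more at high), while the quantizer itself is
    # an ordinary uniform rounder.
    out = []
    err = 0.0
    for x in samples:
        shaped = x - err                  # feed back the last error
        q = round(shaped / step) * step   # plain uniform quantizer
        err = q - shaped                  # error made on this sample
        out.append(q)
    return out
```

Because the per-sample errors telescope, the cumulative error over the whole signal stays within one quantizer step, i.e. the shaped noise has almost no energy at DC, which is the simplest way to see the "shifting" at work.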