That's exactly what Tigre is saying. Diff1 is the noise you add, Diff2 is the noise you drop. You can't tell the difference.
Unfortunately, this measurement is almost useless for audio quality assessment, because the measured value does not correlate with the perceived sound quality of the codec. In fact, the noise measurement gives no indication of the perceived noise level at all.
If the inaudible noise could somehow be removed from the measurement, then the resulting quantity would match human perception more accurately, since it would reflect what is audible.
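As a rough sketch of that idea: weight the difference signal by a curve that approximates the ear's sensitivity before measuring its power, so that energy the ear barely registers counts for less. The A-weighting curve used below is only a crude stand-in for a real ATH model, and the function names are mine, not from any tool mentioned in the thread:

```python
import numpy as np

def a_weight_db(f):
    """IEC 61672 A-weighting curve in dB. A crude proxy for the ear's
    frequency sensitivity; a true ATH model would look different."""
    f2 = np.asarray(f, dtype=float) ** 2
    ra = (12194.0**2 * f2**2) / (
        (f2 + 20.6**2)
        * np.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    with np.errstate(divide="ignore"):
        return 20.0 * np.log10(ra) + 2.0

def weighted_noise_rms(diff, fs):
    """RMS of a difference signal after perceptual weighting applied
    in the frequency domain."""
    spec = np.fft.rfft(diff)
    freqs = np.fft.rfftfreq(len(diff), 1.0 / fs)
    gain = 10.0 ** (a_weight_db(freqs) / 20.0)
    gain[0] = 0.0  # DC bin: the -inf dB weight becomes zero gain
    weighted = np.fft.irfft(spec * gain, n=len(diff))
    return np.sqrt(np.mean(weighted ** 2))
```

With this weighting, a difference file whose energy sits at 1 kHz (where the ear is most sensitive) scores far worse than one with the same RMS parked near 16-20 kHz, which is the kind of behavior a plain noise measurement cannot show.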
Personally, I would be less worried by a difference file with audible music.
I know HA is mainly a psychoacoustic codec discussion board, and people get irked when theoretical approximations of quality beyond transparency levels are discussed. That's why the domain between lossless quality and transparent quality is not explored in the threads here. But I think observation of the loss file (volume, frequency-domain analysis, and other possible methods) would be very useful information at that point. It could give us an idea of how lossy codecs compare at the same bitrate beyond the transparency limit.
I'd contend the one that minimizes the noise energy would still do a very good job (like the Waves L2 type 2 dither algorithm). The amount of noise is then an objective criterion for the amount of loss, if we disregard the human ear model. Please refer to my analogy to dither algorithm noise earlier in the thread; I think it's more or less the same issue.
You might be surprised to learn that the amount of noise power it exhibits (just like Foobar2000's strong ATH noise shaping dither) is considerably (many times) higher than for standard dither with no noise shaping.
Quote: "You might be surprised to learn that the amount of noise power it exhibits (just like Foobar2000's strong ATH noise shaping dither) is considerably (many times) higher than for standard dither with no noise shaping."

Are you sure? I think you are talking about Waves' "Type 1". "Type 2" focuses on low power.
I thought of it as more like L2 Ultra dither, which I've read about.
However (and you seem to know this already from later in your post!?), it puts most of the noise at ultrasonic frequencies, or in other regions where the ear is less sensitive and the absolute threshold of hearing (ATH) is relatively high, so that it can reduce the noise in the most sensitive frequency range, where the ATH is lower.
The default mode of L2 Ultra is "Type 1"; "Type 2" is the one aimed at low power. You can choose either of them.
The noise-shaping stage is separate from the dither and based on different algorithms, AFAIK (that's at least how it is explained in its booklet). I don't know the specifics of Waves' plugins, so I don't know whether your inference applies in this case.
Anyway, that's not the main discussion :D It was just an example.
Thus a lossy codec (again, not necessarily a psychoacoustic one) aimed at minimal noise volume, supplemented with noise shaping, should do a good job. Am I missing something?
Quote: "How could the sum of those frequencies be higher than the original, if none of the frequencies has its phase shifted? And exactly that happens during clippings."

It is enough that the sum is higher at a given instant in time to introduce clipping. The spectral view only shows you the average level over the analyzed window. Remove harmonics from a square wave without changing the phase and it clips.
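A quick numerical check of that last point, using nothing but the plain Fourier series of a square wave (all harmonics exactly in phase; the sampling density and harmonic counts below are arbitrary choices of mine):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 100001)   # one period, densely sampled

def square_partials(n_harmonics):
    """Fourier partial sum of a +/-1 square wave:
    4/pi * sum over odd h of sin(2*pi*h*t)/h, all harmonics in phase."""
    x = np.zeros_like(t)
    for k in range(n_harmonics):
        h = 2 * k + 1
        x += (4.0 / np.pi) * np.sin(2.0 * np.pi * h * t) / h
    return x

many = square_partials(200)   # close to an ideal +/-1 square wave
few = square_partials(3)      # "harmonics removed": only the 1st, 3rd, 5th
# both partial sums peak above 1.0 (Gibbs overshoot), even though the
# ideal square wave they approximate never exceeds +/-1
print(many.max() > 1.0, few.max() > 1.0)
```

The ideal square wave never exceeds ±1, yet every bandlimited version of it overshoots by roughly 9% near the edges. No phase was shifted anywhere; removing harmonics alone raised the instantaneous peak, which is exactly how a full-scale square-ish waveform clips after lossy encoding or resampling.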
How could the sum of those frequencies be higher than the original, if none of the frequencies has its phase shifted? And that is exactly what happens during clipping.