SoundExpert explained
Reply #13 – 2010-11-25 22:50:52
Quote:
Just to be clear, your graph example shows grades where the default noise level (0 dB) is quite objectionable, and reducing the noise makes it less and less so - correct? But with codec testing, you do kind of the opposite. The default noise level (0 dB) is usually indistinguishable/transparent, or very nearly so, and to build the "worse quality" part of the curve (the part where people can hear the noise), you have to amplify the coding noise - correct?

You're right in principle, but the real figures are slightly different, because for quantitative estimation of the differences introduced by the device under test we use the Diff.Level parameter. The Diff.Level scale is shifted by 3 dB relative to the one on the graph (0 dB on the graph = -3 dB on the Diff.Level scale). Yes, in order to build the "worse quality" part, the diff signal is amplified. Usually this part occupies the range between -10 dB and -30 dB for high-bitrate encoders, depending on the test sample and the particular encoder. Low-bitrate encoders are tested as-is, without building the curves.

Quote:
People in this thread are saying the scale beyond "imperceptible" makes no sense. I'm not sure if that's true or not. What you're "measuring" (I put that in quotes - see later) is how far the coding noise sits below the threshold of audibility (or above, if it's audible at the default level). If the second-order curve theory holds true, then to do this you only need sufficient points on the curve where the difference is audible. Points on the curve where the difference is inaudible don't help, because the curve becomes a flat line there.

It seems you missed my point here, or I missed yours. We "measure" exactly how far the coding noise sits above the threshold of audibility on the subjective (vertical) scale. We can measure the amount of that coding noise with Diff.Level, but in order to map it to the subjective scale we need some curve above the threshold. There are several accepted ways to judge the threshold of audibility. I used this one...
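For what it's worth, the Diff.Level arithmetic can be sketched roughly like this. This is a minimal sketch of my own, assuming Diff.Level means the RMS level of the difference signal relative to the original, in dB; the function names are mine, not SoundExpert's:

```python
import numpy as np

def diff_level_db(original, processed):
    """RMS level of the difference signal relative to the original, in dB.

    Assumed definition of Diff.Level; the real SoundExpert metric
    may differ in weighting or normalization.
    """
    diff = processed - original
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    return 20.0 * np.log10(rms(diff) / rms(original))

def amplify_diff(original, processed, gain_db):
    """Raise the embedded difference signal by gain_db dB.

    This is how the "worse quality" stimuli are built: the coding
    noise (diff signal) is scaled up and added back to the original.
    """
    diff = processed - original
    return original + diff * 10.0 ** (gain_db / 20.0)
```

For example, a processed signal whose difference sits 40 dB below the original gives Diff.Level = -40 dB; amplifying that difference by 12 dB moves it to -28 dB, i.e. into the -10…-30 dB range mentioned above.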
Quote:
It seems to me that your method is far kinder to listeners. If your second-order curve fitting can be justified, then it's a really neat way of finding the threshold of audibility (the crossover from 5.0 "imperceptible" to 4.9 "just perceptible but not annoying" on the usual scale) without even having to test at that (difficult) level.

Yes, the method could be used for that purpose (if it holds). Multiple iterations around the target threshold would be replaced with several easy listening tests, necessary for building the "worse quality" part of the curve. Then you would only need to extend it a little bit.

Quote:
So far so good. What I'm less convinced of is the implication that a given codec has so much "headroom", and that this is a "good thing". E.g. on the range of content tested, at a given bitrate/setting, a given codec might be transparent even with the noise elevated by 12 dB. It scores well in your test. Fair enough. IMO it would be wrong to draw too much from this conclusion. E.g.:
1. It's tempting to think this means it's suitable for transcoding, but it might not be - it might fall apart when transcoded.
2. It's tempting to think this means that audible artefacts will be rarer (and/or less bad) with this codec than with one where the noise becomes audible when elevated by 3 dB, but this might be very wrong - this wonderful codec, which keeps coding noise 12 dB below the threshold of audibility on the content tested, might fall apart horribly on some piece of content that hasn't been tested.

1. Hard to say. Not only the noise headroom matters, but also how that headroom maps to the vertical scale, and this depends on the curve. In any case, a codec with a greater margin should be more suitable for transcoding than one with a lower margin (I do hope so).
2. I think this applies to normal listening tests as well.
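If the second-order curve assumption holds, the extrapolation itself is easy to sketch: fit a parabola to the grades measured where the noise is clearly audible, then solve for where the fitted curve reaches grade 5.0. The numbers below are illustrative values of my own, not real test data:

```python
import numpy as np

# Hypothetical listening-test results: mean subjective grades measured
# at difference levels where the amplified coding noise is clearly audible.
diff_levels = np.array([-10.0, -15.0, -20.0])  # Diff.Level, dB
grades      = np.array([2.0, 3.125, 4.0])      # 5-point subjective scale

# Fit the assumed second-order curve: grade(dl) = a*dl^2 + b*dl + c
a, b, c = np.polyfit(diff_levels, grades, 2)

# Threshold of audibility: where the fitted curve crosses grade 5.0,
# i.e. solve a*dl^2 + b*dl + (c - 5.0) = 0 and keep the higher crossing.
roots = np.roots([a, b, c - 5.0])
threshold_db = max(r.real for r in roots if abs(r.imag) < 1e-9)
```

With these made-up points the curve reaches grade 5.0 at about -30 dB, so the listeners never had to do the hard near-threshold trials; the "worse quality" tests alone pin down the crossover.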