Multiformat 128 kbps Listening Test
Reply #786 – 2005-12-01 19:03:22
Quote: "Yes, but if the difference is too high, you cannot compare the two encoders."

Quote: "Ah, I see what you mean: the mean bitrate over the whole corpus, rather than over individual samples."

Quote: "Exactly."

How about this: suppose all the encoders are close to 128 kbps over a large target corpus, but over the listening corpus they all come out somewhat above it (because perceptual listening interest correlates with a codec-wide demand factor). Some codecs, by their average behaviour, stick more tightly to the target bitrate than others, so their encodes land closer to the target. A codec that "sticks out" of line on the listening corpus might simply be showing its normal behaviour: it achieves the target over average material, but its chosen per-sample bit allocation varies more widely than the others'. It may look unusually high compared to the rest, but if the rest are also above target on these samples, it may just be more reactive to demand, and manipulating the samples to deliberately lower bit allocation for that codec alone could distort its performance.

I'm not sure yet, but the way to avoid this uncertainty over fairness could be to require *all* codecs to meet a target mean bit allocation across the targeting material AND the sample material. One axiom I'd hold onto: the more samples used, the less danger of random variation surviving in the result; and the less reactive manipulation of the samples, the less danger of affecting a codec unfairly.
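To make the proposed fairness check concrete, here is a minimal sketch of it in Python. Everything here is illustrative: the function names, the 128 kbps target, the tolerance, and the (bytes, seconds) figures are all my own assumptions, not data from the test.

```python
# Hypothetical sketch of the fairness criterion described above: a codec
# should hit the ~128 kbps target MEAN on both the large target corpus
# and the listening-sample set. All names and numbers are illustrative.

def mean_bitrate_kbps(encodes):
    """encodes: list of (encoded_size_bytes, duration_seconds) tuples."""
    total_bits = sum(size * 8 for size, _ in encodes)
    total_secs = sum(dur for _, dur in encodes)
    return total_bits / total_secs / 1000.0

def within_target(encodes, target_kbps=128.0, tolerance_kbps=4.0):
    """True if the corpus-wide mean bitrate is near the target."""
    return abs(mean_bitrate_kbps(encodes) - target_kbps) <= tolerance_kbps

# Illustrative per-track data for one codec: (bytes, seconds).
target_corpus = [(4_800_000, 300), (3_200_000, 200)]  # ~128 kbps overall
listening_set = [(1_050_000, 60), (1_100_000, 60)]    # ~143 kbps: reactive

print(mean_bitrate_kbps(target_corpus))  # 128.0
print(within_target(target_corpus))      # True
print(within_target(listening_set))      # False: above target on samples
```

A codec passing on the target corpus but failing on the listening set is exactly the "reactive" case argued above; whether to penalise that or accept it as normal VBR behaviour is the open question.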