Multiformat Listening Test @ 128 kbps - FINISHED
Reply #124 – 2006-01-16 10:35:59
Quote: "I suppose it would have been good to include some samples that would have produced very small average bitrates, but because iTunes has a hard-coded 128 kbps low limit and Nero was used in ABR mode I think that would not have been fair for the three other encoders."

I wouldn't say that's unfair. "Low bitrate" moments are a real part of real-world encoding, and it wouldn't be unfair to test them. A complete test should in fact include such samples. But with only 18 samples, it's impossible to represent every encoding situation.

iTunes (bitrate floor at 128 kbps) and Nero Digital (ABR) both have limited efficiency. For example, if you encode monophonic albums, albums that include mono tracks, or low-volume musical compositions, both AAC encoders will systematically waste a large amount of bitrate. I recently tested this by encoding some jazz oldies: LAME's bitrate (-V5 --vbr...athaa...) was around 85 kbps, whereas iTunes was at 128 and Nero Digital at 130. The same goes for a very recent complete set of Beethoven's sonatas, recorded in recent years in stereo: ~100 kbps for LAME and ~130 for both AAC encoders. This limited efficiency is the flip side of the developers' choices.

On the other hand, such a limitation may have a positive effect on quality. Not for mono, but for low-volume passages. LAME tends to produce ringing, which becomes easier to hear at higher playback volume and is sometimes really irritating after ReplayGain/MP3Gain. That's worrying, and I appreciate the limitation of both iTunes and Nero, which acts as a guarantee against psychoacoustic failures or overly optimistic choices.

Now what matters is how these encoders would react at low bitrates (which usually correspond to low-volume tracks/passages). I recently performed a listening test with 150 classical music tracks, including several samples corresponding to this situation. My results are available on the forum.
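To put a rough number on the "wasted bitrate" point, here is a small illustrative sketch. The 85/128/130 kbps figures are the ones I quoted above for the jazz oldies; the 45-minute album length is a made-up example, not a measurement:

```python
# Illustrative arithmetic only: storage cost of a 128 kbps bitrate floor
# versus a true-VBR average. Bitrates are the figures quoted in the post;
# the album duration is hypothetical.

def size_mb(bitrate_kbps: float, seconds: float) -> float:
    """File size in megabytes for a given average bitrate and duration."""
    return bitrate_kbps * 1000 * seconds / 8 / 1_000_000

album_seconds = 45 * 60  # a hypothetical 45-minute mono jazz album

lame_vbr = size_mb(85, album_seconds)   # LAME -V5 average, as reported
itunes   = size_mb(128, album_seconds)  # iTunes AAC, hard 128 kbps floor
nero     = size_mb(130, album_seconds)  # Nero Digital ABR, as reported

overhead = (itunes - lame_vbr) / lame_vbr * 100
print(f"LAME:   {lame_vbr:.1f} MB")
print(f"iTunes: {itunes:.1f} MB (+{overhead:.0f}% vs LAME)")
print(f"Nero:   {nero:.1f} MB")
```

In this example the floor costs roughly half again as much storage as LAME's VBR on such easy material, which is the kind of systematic waste I mean.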
From my experience, Vorbis has no problem handling this situation: the bitrate doesn't sink to an ultra-low value (rarely less than 100 kbps), and there's no compromise in the high frequencies (no ringing). Unfortunately, I can't say the same for LAME. It has real problems here, and low bitrate may mean low quality. Hence the use of --athaa-sensitivity to reduce this annoyance (which raises the bitrate in such situations).

Quote: "Also, that kind of samples tend to be even easier for the encoders, so possibly the quality differences would have been indistinguishable."

Not necessarily true. The Debussy.wav sample revealed in 2004 that such samples can lead to obvious artefacts with some encoders (LAME, Musepack) but not others.