Autumn 2006 Listening Test
Reply #107 – 2006-08-16 00:42:22
I find it hard to believe that a 6 kbps difference between iTunes and LAME can make such a difference in quality. Don't forget that the bitrate table is based on short samples (selected ones, which all tend to use more bits than average). It implies that the real-world average bitrate of iTunes 112@VBR would logically be lower, and therefore that the gap to the LAME ABR bitrate is wider than 6 kbps.

And if the iTunes VBR implementation is so good, why didn't it allocate more bits to hard parts like LAME did? It's indeed a good question. Someone once hypothesized that this encoder's limited bitrate variation is linked to the (old) iPod CPU clock issue. I tend to believe that this limitation simply corresponds to a deliberate choice to keep the bitrate as close as possible to the nominal one (128@VBR with iTunes is indeed close to 128 kbps whatever the situation, and that is probably what most users expect from this setting).

N.B. What interests us at the moment is not whether iTunes VBR is "so good" but rather whether it is "poor" (in order to see if CBR would be more suitable than VBR). One advantage of the iTunes trick (i.e. using the target bitrate as the minimal floor per frame) is at least that it protects the encoder from big encoding mistakes. I expect iTunes CBR and VBR to be very close to each other quality-wise, simply because both modes always lead to approximately the same bitrate. With LAME, on the contrary, you can expect VBR to sound much better on critical material (some samples can reach 200 kbps with -V5) but also to sound poorer in some cases (it was the case with quiet samples; the problem was partially fixed with the --athaa-sensitivity 1 switch and is now close to totally fixed with 3.98 alpha 6).

Jesus, that's a lot of pre-tests. I am thinking of another possibility... The only thing that comes to my mind would be running two tests: one with all encoders using CBR and one with all encoders using VBR. However, this is rather complicated.
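To make the "floor" idea concrete, here is a toy sketch of how a per-frame minimum affects the average bitrate. The function and the per-frame numbers are purely illustrative, not Apple's actual allocator: the point is only that unconstrained VBR can average well below the nominal bitrate on easy material (hence a real-world 112@VBR average below the table value), while a floor at the target guarantees quiet frames are never starved and keeps the average at or above the target.

```python
# Toy model (NOT Apple's real algorithm): each frame gets the bitrate the
# psychoacoustic model "demands", optionally clamped from below by a floor
# equal to the target bitrate (the iTunes-style trick described above).

def vbr_average(demands_kbps, floor_kbps=None):
    """Mean bitrate over all frames, with an optional per-frame floor."""
    if floor_kbps is not None:
        demands_kbps = [max(d, floor_kbps) for d in demands_kbps]
    return sum(demands_kbps) / len(demands_kbps)

# Hypothetical per-frame demands (kbps): mostly easy frames, one transient.
demands = [90, 100, 96, 200, 110, 95, 104, 105]

print(vbr_average(demands))                  # → 112.5 (drifts below 128)
print(vbr_average(demands, floor_kbps=128))  # → 137.0 (never below target)
```

Note that the floor can only raise the average, which is consistent with 128@VBR staying close to 128 kbps whatever the material, and with the floor preventing the worst-case mistakes on quiet frames.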
We would have to use the same samples, but then the tester factor could have an impact on the results (different testers, or even different moods, mean different results). I have something easier: let people choose between testing CBR-only settings or VBR-only ones. A poll could be the answer.

If a majority of people are interested in 128 kbps CBR => test everything with CBR. Then no bitrate-table fight. Only problem: people interested in optimizing quality for their DivX/portable player at moderate bitrate won't learn anything from a test focusing on unoptimized settings (CBR instead of VBR).

If a majority of people are interested in 128 kbps VBR => test everything with VBR and discard every encoder that doesn't offer either VBR coding or simply a ~128 kbps VBR setting.

Simple, isn't it?