Has decoder clipping ever been considered as a factor that may affect listening test results?
Some of my bitrate test tracks produce extremely high peak values with iTunes VBR MP3.
This is the worst offender:
File Name : Garbage - Bleed Like Me.mp3
File Path : D:\test\iTunes_VBR128\Garbage - Bleed Like Me.mp3
Subsong Index : 0
File Size : 4 135 339 bytes
Last Modified : 2006-08-16 16:10:26
Duration : 4:01.934 (10669295 samples)
Sample Rate : 44100 Hz
Channels : 2
Bitrate : 137 kbps
Codec : MP3
Encoding : lossy
Tag Type : id3v2|id3v1
Track Gain : -9.61 dB
Track Peak : 1.642079
<ENC_DELAY> : 0
<ENC_PADDING> : 0
<EXTRAINFO> : VBR
<MP3_ACCURATE_LENGTH> : yes
<MP3_STEREO_MODE> : joint stereo
The source file looks like this:
File Name : Garbage - Bleed Like Me.ape
File Path : E:\test\Convert\LL\Garbage - Bleed Like Me.ape
Subsong Index : 0
File Size : 27 576 655 bytes
Last Modified : 2006-03-23 18:37:42
Duration : 4:01.867 (10666320 samples)
Sample Rate : 44100 Hz
Channels : 2
Bits Per Sample : 16
Bitrate : 912 kbps
Codec : Monkey's Audio (Normal)
Encoding : lossless
Tag Type : apev2
Embedded Cuesheet : no
Audio MD5 : C7D86603166482C328E9BF0015A27C79
Track Gain : -9.78 dB
Track Peak : 0.999969
<FLAGS> : 32
<VERSION> : 3.99
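To put numbers on this: the lossless source peaks just below full scale (0.999969), but the decoded MP3 reports a track peak of 1.642079, i.e. the decoder's float output overshoots 0 dBFS and would clip if truncated to 16-bit without attenuation. A quick sketch of the arithmetic (values taken from the properties above; the rest is just the standard dB conversion, not anything specific to any particular player):

```python
import math

# Values copied from the file properties above.
mp3_track_peak = 1.642079   # decoded iTunes VBR MP3
src_track_peak = 0.999969   # lossless source
mp3_track_gain_db = -9.61   # ReplayGain track gain for the MP3

def to_db(ratio):
    """Linear amplitude ratio -> decibels."""
    return 20 * math.log10(ratio)

# How far the decoded MP3 overshoots digital full scale (0 dBFS):
overshoot_db = to_db(mp3_track_peak)   # roughly +4.3 dB above full scale

# Minimum attenuation needed to keep the decoded output from clipping:
needed_gain_db = -overshoot_db

# Peak after applying the ReplayGain track gain:
peak_after_rg = mp3_track_peak * 10 ** (mp3_track_gain_db / 20)

print(f"overshoot above 0 dBFS: {overshoot_db:+.2f} dB")
print(f"peak after ReplayGain track gain: {peak_after_rg:.3f}")
```

So this track needs about 4.3 dB of attenuation just to avoid clipping on playback, and applying the ReplayGain track gain (or any comparable headroom in the decoder/player) brings the peak safely below 1.0. If the listening tests were done with a decoder that clips these overshoots, that distortion could plausibly color the results.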