This has been discussed many times.
It's nonsense. It's just like encoding a DVD to a 700 MB DivX, cutting out a short, difficult part of the video stream, and then claiming that we should compensate for the localized bitrate inflation in order to be fair to CBR encoders.

Why do people prefer VBR over CBR? Simply because it gives better quality. And why is the quality better? Mainly because a VBR encoder can grant more bitrate to difficult parts.
Don't forget that Roberto's test used difficult samples. Not killer ones, but difficult. Therefore, the average bitrate for a VBR encoder is logically higher than the targeted 128 kbps. Run another test with easy samples: CBR will stay at 128 kbps, and VBR will drop toward the 100 kbps floor.
Problem(s) with this interpretation: the quality settings were chosen to give similar bitrates (~128 kbps) when encoding a wide variety of music. The samples used for the test, OTOH, were chosen to be difficult to encode; otherwise it would have been hard to get meaningful results at all.

Another point is that your calculation is based on a linear relationship between bitrate and quality, which most likely isn't the case.
On your second point... could an alternative method be used in place of a linear calculation? I'm sure there is a lot more to that relationship than the little I understand of it, but it seems there could be some valid way of calculating a composite value, if such a method is feasible at all.
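To see why the choice of model matters, here is a toy sketch comparing a linear bitrate "compensation" with a logarithmic one. The numbers, the function names, and the logarithmic curve are all made-up assumptions for illustration, not anything from the actual test:

```python
import math

def linear_adjusted_score(score, bitrate, target=128.0):
    # Assumes quality scales proportionally with bitrate --
    # this is exactly the disputed assumption.
    return score * (target / bitrate)

def log_adjusted_score(score, bitrate, target=128.0):
    # Alternative toy assumption: quality grows roughly with
    # log(bitrate), so the penalty is much gentler at these rates.
    return score + math.log2(target / bitrate)

# A hypothetical VBR encoder averaging 140 kbps on the difficult
# samples, with a subjective score of 4.2:
print(round(linear_adjusted_score(4.2, 140.0), 2))  # large penalty
print(round(log_adjusted_score(4.2, 140.0), 2))     # smaller penalty
```

The point is only that the two models hand out very different penalties for the same bitrate overshoot, so any "normalized" ranking depends heavily on which curve you assume.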