ABX'ed AAC 128 VBR (log posted). Angry :(
Reply #52 – 2012-12-02 02:50:02
Quote:
Are you really willing to claim that your source model contains all the prior information available when we know that a signal is, say, the PCM from someone's CD collection rather than a stream of entirely random bits? Are you also claiming that you have objectively convincing evidence of this? You might as well be saying "the asymptotic optimality of LZ for Markov sources means that the GZIP'd size of the Library of Congress is so close to its Kolmogorov complexity that no compression algorithm will do significantly better." In both cases our source models are nice and useful but certainly wrong. The need for audio formats to be vaguely streamable, to require relatively limited amounts of memory for decode, and to decode using some reasonable level of power is a much more relevant limit on compression. You can of course come up with very clever formats that squeeze all kinds of redundancy out of bitstreams, but realistically people aren't going to be interested in using them.

That's an excellent point. Vaguely streamable, or extremely so in low-delay Opus's case. Some (CM-based?) wide-window redundancy models are certainly off-limits for this reason. Most songs are at their core extremely formulaic -- it is often through a distinctive spread of beat, repeating instruments, and somewhat "context-predictable" aspects that a song is born. Methods to exploit these longer-term redundancies would most likely kill streaming potential (solid archive, anyone?) or demand more speed and/or sacrifice more fidelity than today's computing power allows.
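A toy sketch of the quoted point about source models (my own illustration, not from the thread, and not a real audio model): gzip's LZ77-style matching does nothing for structure-free bytes, but a "formulaic" repetitive stream collapses even within gzip's small 32 KB window. Real long-term musical redundancy spans far more than 32 KB, which is exactly what wide-window/CM models would need to exploit.

```python
import gzip
import random

random.seed(0)
n = 100_000

# ~incompressible: a stream of effectively random bits
noise = bytes(random.getrandbits(8) for _ in range(n))
# "formulaic" stand-in: a short beat pattern repeated over and over
riff = (b"kick snare hat hat " * (n // 19 + 1))[:n]

noise_c = gzip.compress(noise)
riff_c = gzip.compress(riff)

# The random stream comes out slightly LARGER (container overhead),
# while the repetitive one shrinks by orders of magnitude.
print(len(noise_c), len(riff_c))
```

The gap only widens with context-mixing compressors and larger windows; the thread's point stands that decode memory, power, and streamability rule those out for a practical audio format.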