Have we reached the limit of lossy codecs?
Reply #27 – 2008-04-23 02:09:55
However, more advanced coding algorithms exist mostly for speech signals. There you get quite close to complete desynthesis (decomposition and parametrization upon encoding) and resynthesis (parametric speech synthesis upon decoding) of the signal. Theoretically, you could do the same with music. The problem is finding a suitable system (model) with enough input parameters, and finding algorithms that parametrize the music so that the system produces similar music when fed those parameters.

For speech there are quite reliable models that allow very high compression ratios (you use them in Teamspeak, Skype, GSM, etc.). Would you call any of those codecs "transparent"? None of them aims for transparency; they aim for "good enough for conversation".

When digital synthesizers stormed the music world, researchers worked for over twenty years to restore the "organic", "chaotic" and "natural" feeling which analogue synthesizers had as a native feature. And now we should throw all that effort away? Sure, you could try to model that as well... the problem is that doing so is probably more effort than simply keeping pace with new instrument and effect inventions. It would be far more efficient if the music itself were stored as parameters and mixed down at listening time - but how likely is it that artists and labels are going to do that?

Here we see the difference between theory and practice: in THEORY, all kinds of compression are possible in some imaginary ideal world... the practical implementation of most of them, though, is unrealistic.
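To make the analysis/resynthesis idea concrete, here is a minimal sketch of the principle behind those speech models: linear predictive coding (LPC). The encoder estimates an all-pole filter from the signal and keeps only the filter coefficients plus a (heavily quantizable) residual; the decoder re-runs the filter to resynthesize. This is a toy illustration, not any particular codec; the signal, predictor order, and helper function are all made up for the example.

```python
import numpy as np

def lpc_coeffs(signal, order):
    """Estimate LPC coefficients via the autocorrelation (Yule-Walker) method."""
    # Autocorrelation up to lag `order`
    r = np.array([np.dot(signal[:len(signal) - k], signal[k:])
                  for k in range(order + 1)])
    # Solve the symmetric Toeplitz normal equations R a = r[1..order]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])  # s[n] ~ sum_k a[k] * s[n-1-k]

# Toy "speech-like" test signal: a damped resonance plus a little noise
rng = np.random.default_rng(0)
n = np.arange(800)
sig = np.sin(2 * np.pi * 0.05 * n) * np.exp(-n / 400) \
      + 0.01 * rng.standard_normal(800)

order = 8
a = lpc_coeffs(sig, order)

# Encoder side: the residual (prediction error) is what a real codec
# would quantize coarsely and transmit alongside the coefficients.
residual = np.zeros_like(sig)
residual[:order] = sig[:order]
for t in range(order, len(sig)):
    residual[t] = sig[t] - np.dot(a, sig[t - order:t][::-1])

# Decoder side: resynthesize by running the all-pole filter
# on the residual (parameters in, signal out).
rec = np.zeros_like(sig)
rec[:order] = residual[:order]
for t in range(order, len(sig)):
    rec[t] = np.dot(a, rec[t - order:t][::-1]) + residual[t]

# With the residual kept in full, reconstruction is exact up to
# float round-off; compression comes from coding the residual cheaply,
# which works because the predictor leaves it small.
print(np.max(np.abs(rec - sig)))
```

The point of the exercise: the eight coefficients capture the resonant structure, so the residual carries far less energy than the signal itself. Speech fits this model unusually well; arbitrary music, with many simultaneous sources and effects, does not.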