Lossless codecs in frequency domain?
Reply #29 – 2010-11-23 22:15:57
> If you could easily determine/identify/describe the underlying process (system) which generated any observed output ...

Exactly. That's e.g. what speech coders try to do, which is why they sound quite good at low bit rates. For music, it's much harder.

> But your example is more like table-lookup, is it not? ... That is kind of trivial, but less useful I think.

Well, if you knew all audio signals ever produced on planet Earth (at least), it would be an incredibly efficient codec. Every sound could be represented by an index and maybe a length, sampling rate, etc., giving you a compression ratio of 1000000:1 or so. The problem is that you cannot know every signal in advance, and your encoder and decoder software would be terabytes in size. So yes, quite useless.

> How does a time-variant order-16 LPC compare to the decorrelation in FLAC?

Very similar. FLAC uses filters of up to order 12 and updates the filter every 4096 samples (in the current reference encoder, since analysis is block-based). If you updated the filter for every sample, you'd get smoother/better filtering, but compression wouldn't improve, since you'd have to transmit more filter settings over time.

> When you say that the remaining audible correlation is insignificant for compression, is that because it is small, or because our imperfect man-made algorithms are not able to exploit it properly?

Both, I guess. It's always a trade-off between side info and compression ratio. With longer LPC filters you can usually improve your prediction (up to a certain limit, due to stationarity constraints etc.), but you'll also need more bits to transmit the filter.

Chris
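To make the decorrelation idea concrete, here is a minimal sketch (my own illustration, not FLAC's actual code) of the simplest kind of predictor FLAC supports: a fixed order-2 filter. The encoder predicts each sample from the previous two and stores only the residual, which is small for correlated signals and therefore cheap to entropy-code; the decoder inverts the prediction exactly, so the scheme is lossless.

```python
import math

def predict_order2(samples):
    """Residual of the fixed order-2 predictor p[n] = 2*x[n-1] - x[n-2]."""
    residual = list(samples[:2])  # warm-up samples are stored verbatim
    for n in range(2, len(samples)):
        residual.append(samples[n] - (2 * samples[n-1] - samples[n-2]))
    return residual

def reconstruct_order2(residual):
    """Exact inverse of predict_order2: lossless reconstruction."""
    x = list(residual[:2])
    for n in range(2, len(residual)):
        x.append(residual[n] + 2 * x[n-1] - x[n-2])
    return x

# A smooth, correlated signal yields residuals far smaller than the
# samples themselves, which is where the compression comes from.
signal = [round(100 * math.sin(0.05 * n)) for n in range(4096)]
res = predict_order2(signal)
assert reconstruct_order2(res) == signal  # perfect round trip
```

The side-info trade-off mentioned above is easy to see in numbers: if, say, a 12-coefficient filter is stored at 15-bit precision once per 4096-sample block, the overhead is 12 × 15 / 4096 ≈ 0.044 bits per sample; updating the filter every sample would cost 180 bits per sample in side info alone, dwarfing any gain from better prediction.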