Generation loss listening tests
Reply #22 – 2004-04-18 23:20:40
> it's all a matter of how the information is removed. there's no way an encoder can tell what is _already_ quantization noise in the stream.

What do you call noise here? I assume not something you'd call white noise in analog equipment, but what lossy-codec people mean when they talk about deviation from the original. That is not something you need to encode; it is something that arises in both codecs. Whether it gets any worse is the question. If you round 0.7 to an integer, you add quantisation noise only once; after that it is an integer no matter how many times you repeat the operation (first sketch at the end of this post).

> it's not a matter of removing frequencies as such (though effectively it is), but rather storing the data with less precision. turning 3.141592653589793 into 3.14 for example (yes, that's from memory.. i'm a nutter).

That might be a good example. If the previous codec reduced the precision of pi to 3.14, what is there left for the next codec to do with it? 3.14 has already been reduced to the closest allowable noise floor; to the codec this should be incompressible data. If the signal were ever converted to analog and then lossy-compressed again, we could imagine the compression degrading, because 3.14 would not be precisely 3.14 anymore. But in a purely digital transcode there is no intrinsic noise added between codings. This means there is no "less precision" anymore (given, of course, similar transparency objectives). If it were waveform precision, the "steppiness" could be perceived as white noise by the encoder, but in subband encoders the reduced precision applies to spectral components, which are reduced in precision only up to a limit.

> but what happens is that in a transcode, the encoder assumes each band to have no noise already. it will generate very similar masking thresholds and therefore give the most quantization error to the parts that are _already_ very inaccurate.

By generating very similar masking thresholds, the second codec does not degrade the signal further; it only changes the nature of its noise somewhat. One codec would reduce pi to 3.142, another to 3.141, another to 3.14. That is not of a cumulative nature; it has a limit, defined by the most aggressive codec in the chain (second sketch below). Giving 3.14 to a codec that would pass 3.140, 3.141, or 3.142 intact has zero impact on that 3.14. That the encoder assumes each band to have no noise already is irrelevant, IMO. The encoder has no idea about noise anyway; what it sees is either excessive precision and information, or a lack of them.

> this means that if the original encode was only just transparent, then a transcode _cannot_ be transparent, unless it has extra information from the original encoding process. in the worst case the quantization noise will double per transcode, but in reality it's a little less.

I don't see your conclusion following from the above. I could imagine several cases where it would be true, but I don't see how it can be stated as a universally true statement. My idea was that in the best case the quantization noise will not change at all, and thus transparency will remain intact. This must depend on the codec pair used in the transcode (third sketch below). I can't see it following from anything given here. Where is my error?
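Since this is easier to show than to describe, here is a minimal Python sketch of the 0.7 example. The `quantize` function is just a toy round-to-grid quantizer I made up for illustration, not any real codec's quantizer:

```python
def quantize(x, step):
    """Toy uniform quantizer: round x to the nearest multiple of step."""
    return round(x / step) * step

x = 0.7
q1 = quantize(x, 1.0)   # first encode: 0.7 -> 1.0, adds ~0.3 of error
q2 = quantize(q1, 1.0)  # re-encode: 1.0 -> 1.0, adds nothing

print(abs(q1 - x), abs(q2 - q1))  # ~0.3 0.0 -- the noise is added only once
```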
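The pi argument, under the same toy assumptions (decimal rounding stands in for a codec reducing the precision of a spectral component; note that the grids here nest cleanly):

```python
import math

def encode(x, decimals):
    """Toy 'codec': keep only `decimals` decimal places of precision."""
    return round(x, decimals)

x = math.pi                                 # 3.141592653589793

fine_then_coarse = encode(encode(x, 3), 2)  # 3.142 -> 3.14
coarse_then_fine = encode(encode(x, 2), 3)  # 3.14 -> 3.14, passed intact
direct           = encode(x, 2)             # 3.14

# All three end at 3.14: the error is bounded by the most aggressive
# codec in the chain, it does not keep accumulating.
print(fine_then_coarse, coarse_then_fine, direct)
```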
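And the "depends on the codec pair" part: with the same toy quantizer, two passes through the same grid add no new noise, but two grids that do not nest can push a value across a decision boundary and leave more error than either quantizer alone. The values are cherry-picked to trigger the effect:

```python
def quantize(x, step):
    """Toy uniform quantizer: round x to the nearest multiple of step."""
    return round(x / step) * step

x = 0.45

same_grid  = quantize(quantize(x, 1.0), 1.0)  # 0.0 -> 0.0
misaligned = quantize(quantize(x, 0.6), 1.0)  # 0.6 -> 1.0

print(abs(same_grid - x))   # 0.45 -- identical to a single encode
print(abs(misaligned - x))  # 0.55 -- more than either step alone could add
```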