Even the studio machines didn't always apply the curve correctly, and they had tolerances.
A measured difference of 0.2 dB is not bloody likely to be audible for anything but a test tone in the most sensitive frequency range of human hearing.
One thing I wonder is how accurate the Denon original sweep really is and how it was created (...)
Are they still making CDs with pre-emphasis [...]?
If you want to do it digitally, the best way is to process the file at 24 bits (or higher), then either leave it at the higher bit depth or dither it back down to 16. If you process it at 16 bits you will definitely lose fidelity.
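For what it's worth, here is a minimal sketch of that approach in Python, assuming the audio is already loaded as a NumPy int16 array at 44.1 kHz. The 50/15 µs time constants are the standard CD pre-emphasis values; the bilinear transform only approximates the analog curve near Nyquist, and the function name is my own.

```python
import numpy as np
from scipy.signal import bilinear, lfilter

def deemphasize_16bit(samples_int16, fs=44100):
    # Standard CD pre-emphasis time constants
    t1, t2 = 50e-6, 15e-6
    # De-emphasis is the inverse of the pre-emphasis shelf:
    # H(s) = (1 + s*t2) / (1 + s*t1)
    b, a = bilinear([t2, 1.0], [t1, 1.0], fs)
    x = samples_int16.astype(np.float64)      # process at (better than) 24-bit precision
    y = lfilter(b, a, x)
    # TPDF dither of roughly +/-1 LSB before truncating back to 16 bits
    dither = np.random.random(y.shape) - np.random.random(y.shape)
    return np.clip(np.round(y + dither), -32768, 32767).astype(np.int16)
```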
ANY operation on the data other than scaling by exactly a power of two will fill all 24 bits.
Quote from: pdq on 24 February, 2010, 06:51:44 AM
ANY operation on the data other than scaling by exactly a power of two will fill all 24 bits.

But a parametric EQ shouldn't make it that much harder to compress, should it?
So long as sox does the internal math at a higher bit depth and then dithers to get back to 16 bits, you should be fine.
You'll probably notice that your de-emphasized tracks will compress better than their original sources in both lossy and lossless formats.
Well, does it?
@ DVDdoug and pdq: The argument about the number of bits is flawed as long as no information is added. With lossless compression, in theory (not in practice, I know) only the information content of the file should matter. If the last 8 bits are filled with a pattern that is easy to predict, they should add very little to the compressed size. In this case nothing is really "added" information-wise except a relatively simple algorithm, and in theory it should be possible to compress the result down to the size of the original file plus a description of that algorithm.
Besides, in this case we weren't even talking about scaling by a constant, but about applying an equalization. How many possible de-equalizations are we supposed to try in order to reduce the data to 16 bits?
That's nonsense. In "theory" that's what happens in nondestructive editing, where information about the order and parameters of each applied algorithm is kept. Never wondered why it is called "destructive" editing once such a log isn't kept alongside an original file? It is not only a practical limitation, but especially "in theory", the problem space, of what you expect a lossless compressor to accomplish, is too large for any thinkable search algorithm.For the rest, just reread DVDdoug's post #36. He pretty much nailed it.
As said, it's already all in post #36.
Digital filtering involves rounding. Rounding 16-bit values at 24-bit precision will fill all 8 additional bits with new information, even if a given filter is very simple. That new information might not be significant, i.e. it might be noise, but that's up to a lossy encoder to decide; a lossless encoder has to preserve it.
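A quick toy illustration of that point (my own construction, not from the thread): run 16-bit samples through even a trivial one-pole filter and store the result at 24-bit precision, and the extra byte is no longer all zeros.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
x16 = rng.integers(-32768, 32768, size=100_000)        # pretend 16-bit audio

x24_shift = x16 << 8                                    # scaling by a power of two
y = lfilter([1.0], [1.0, -0.3], x16.astype(float))      # trivial one-pole filter
y24 = np.round(256.0 * y).astype(np.int64)              # stored at 24-bit precision

print(np.unique(x24_shift & 0xFF).size)   # 1    -> low byte stays all zeros
print(np.unique(y24 & 0xFF).size)         # ~256 -> filtering fills the low byte
```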
This does not make sense. A filter does not create information (look up Shannon).
If the 8 LSBs can be "created" as a function of the last N input samples, and this function is held constant over a file, then a predictor looking for autocorrelation can in principle find that function (or the convolution of the original source spectrum and the filter). Once you have that function you can make predictions and transmit only the prediction residue.
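Here is a rough numerical sketch of that idea (my own construction, using an ordinary least-squares linear predictor in the spirit of FLAC's LPC stage, on a synthetic correlated source and a made-up fixed filter): the predictor fitted to the filtered 24-bit data ends up with a residue of roughly the same size as for the merely bit-shifted original, i.e. it has effectively absorbed the filter.

```python
import numpy as np
from scipy.signal import lfilter

def residue_bits(sig, order=16):
    """Rough bits/sample a Rice-style coder would spend on the LPC residue."""
    rows = np.stack([sig[order - k - 1:len(sig) - k - 1] for k in range(order)], axis=1)
    target = sig[order:]
    coeffs, *_ = np.linalg.lstsq(rows, target, rcond=None)
    residue = target - rows @ coeffs
    return np.log2(np.mean(np.abs(residue)) + 1.0)

rng = np.random.default_rng(1)
# Correlated "music-like" 16-bit source: AR(1)-filtered noise
x = lfilter([1.0], [1.0, -0.95], rng.standard_normal(200_000))
x16 = np.round(30000.0 * x / np.max(np.abs(x))).astype(np.int64)

x24_shift = x16 << 8                                        # power-of-two scaling
y = lfilter([1.0, -0.6], [1.0, -0.2], x16.astype(float))    # some fixed minimum-phase EQ
y24 = np.round(256.0 * y).astype(np.int64)                  # rounded to 24-bit integers

print(residue_bits(x24_shift))   # on this toy signal the two numbers come out
print(residue_bits(y24))         # within a fraction of a bit of each other
```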