And the point here is that, IMHO, LP transfers don't need much DSP at all. The vast majority of the editing I do concerns the removal of impulse noise, which involves edits to small isolated sections of waveform. The changes made to the waveform at those locations vastly swamp any changes to the quantisation noise that may result.

The only "global DSP" I do to LP recordings is normalisation (nearly always), some modest EQ (very rarely), and broadband noise reduction (sometimes, and only in moderation). Apart from normalisation, these other operations again make a much bigger change to the audible nature of the music than the minor change in quantisation noise they cause (which I still maintain will remain beneath the vinyl noise floor).
But even normalization involves multiplying each sample by an arbitrary factor without regard to the samples' values relative to each other, and rounding errors will certainly affect the resulting waveform and its sound - I've heard this definitively. These tiny changes in the relative values of the samples might seem inconsequential in theory, but the ear/brain is a very sensitive 'device'.
Further, it's the very low-level signals (ambience and the like) which are brought into prominence, and which in the original waveform are quantized with too few bits to sound realistic.
It depends what you mean by "working at more than 16-bits". You can convert to floating point, perform the operation, and dither back to 16-bits if you want - but that's pretty much what Cool Edit Pro is doing anyway when "working at 16-bits".
It's got nothing to do with "sounding realistic". With correct dither, it's about noise - pure and simple. It can be just four bits - it will sound perfectly "realistic", but with bucket loads of noise on top!
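A minimal pure-Python sketch (hypothetical, not from any poster's files) of what "correct dither" buys you: with TPDF dither, even a coarse quantizer remains linear on average, so a signal far below one quantization step survives as signal plus noise instead of vanishing into distortion.

```python
import random

STEP = 1.0 / 16  # coarse quantizer step, roughly "4-bit" resolution on a [-1, 1) scale

def quantize(x, step=STEP):
    """Plain rounding quantizer: signals below step/2 vanish entirely."""
    return round(x / step) * step

def dithered_quantize(x, rng, step=STEP):
    """Quantize with TPDF dither (triangular PDF, 2 LSB peak-to-peak).
    The dither decorrelates the error from the signal, so on average the
    quantizer is linear even for signals far below one step."""
    d = (rng.random() + rng.random() - 1.0) * step
    return round((x + d) / step) * step

rng = random.Random(42)
x = 0.3 * STEP                       # a "signal" only 0.3 LSB in size
undithered = quantize(x)             # the signal is simply lost (rounds to 0)
avg = sum(dithered_quantize(x, rng) for _ in range(200_000)) / 200_000
print(undithered, avg)               # avg is close to x: the signal survives, buried in noise
```

The per-sample dithered output is very noisy (that's the "bucket loads of noise"), but the signal is recoverable; the undithered quantizer throws it away entirely.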
So, just to make sure I understand your position on this... Suppose we have a 24 bit recording of a vinyl LP. Consider two possible normalisation methods:

1. Normalise at 24 bit resolution, then dither down to 16 bit for playback.
2. Dither down to 16 bit, then normalise at 16 bit resolution.

I take it you maintain there will be an audible difference between the two. Have I understood your position correctly?
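The two methods above can be compared numerically. This is a hedged pure-Python sketch (all names and the test signal are illustrative, not from the thread): it applies gain in float and dithers once at the end (method 1) versus dithering to 16 bit first and then applying gain (method 2), and measures each pipeline's RMS deviation from an ideal float reference.

```python
import math
import random

STEP = 1.0 / 32768          # one 16-bit LSB on a [-1, 1) scale
rng = random.Random(1)

def dither16(x):
    """TPDF-dithered quantization to the 16-bit grid."""
    d = (rng.random() + rng.random() - 1.0) * STEP
    return round((x + d) / STEP) * STEP

def rms(xs):
    return math.sqrt(sum(v * v for v in xs) / len(xs))

gain = 8.0                                             # ~18 dB of make-up gain
src = [rng.uniform(-0.1, 0.1) for _ in range(20_000)]  # "under-recorded" high-resolution source
ideal = [v * gain for v in src]                        # reference: gain applied in float

# Method 1: normalise at high resolution, dither to 16 bit once at the end
m1 = [dither16(v * gain) for v in src]
# Method 2: dither to 16 bit first, then normalise (and re-quantize) at 16 bit
m2 = [dither16(dither16(v) * gain) for v in src]

err1 = rms([a - b for a, b in zip(m1, ideal)])
err2 = rms([a - b for a, b in zip(m2, ideal)])
print(err1, err2)
```

The sketch shows the mathematical difference is real and roughly proportional to the gain (method 2 amplifies the first pass's dither noise); whether that difference is *audible* under an LP's noise floor is exactly what the thread is arguing about.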
There will definitely be a clear mathematical difference between files created with the two different methods, and yes, an audible one.
2. Dither down to 16 bit, then normalise at 16 bit resolution
Quote: 2. Dither down to 16 bit, then normalise at 16 bit resolution

Please, don't do that - read post #208 here: http://www.hydrogenaudio.org/forums/index....134&st=200#
Quote from: RockFan on 30 August, 2006, 01:41:34 PM: There will definitely be a clear mathematical difference between files created with the two different methods, and yes, an audible one.

OK, ENOUGH! You're violating TOS #8.
Enough is right! This is getting too pedantic and argumentative.

If RockFan is right, then the quantization noise from the operations would be audible - with a blind test. And I'll give the benefit of the doubt here, because it is, perhaps, quite reasonable to listen to classical music that is highly amplified - 60 dB or more - to catch the end of some ambience or instrument resonance as it fades into the background noise. I've done it before. guru used a similar use case to poke holes in MPC's transparency. It's unlikely, but not completely outside the realm of possibility, to catch the quantization error in such a situation in an ABX test.

What I'd say is to take a very quiet music selection, amplify it to full scale at both 24-bit and 16-bit resolution, then quantize the 24-bit one to 16-bit. I would also strongly suggest highpassing at 40-50 Hz, at 24-bit resolution, to knock out the rumble in order to get higher gain during the normalization. (Or just crank your volume up really loud.) Then ABX the 24-bit-processed stuff against the 16-bit-processed stuff for 32 trials.

I would offer to prepare the samples myself from my own classical LPs, but I'm going on vacation for a week starting this afternoon.
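The highpass-then-normalize preparation step described above can be sketched in pure Python. Everything here is illustrative: the one-pole filter is far gentler than what you'd actually use for rumble (a real test would use a steeper filter and real captures), and the "recording" is just a quiet tone riding on synthetic 10 Hz rumble.

```python
import math

FS = 44100  # sample rate, Hz

def highpass(samples, cutoff_hz=45.0, fs=FS):
    """Simple first-order highpass: removes DC and attenuates subsonic rumble.
    (A real rumble filter would be steeper; this is only a sketch.)"""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    alpha = rc / (rc + 1.0 / fs)
    out = [samples[0]]
    for i in range(1, len(samples)):
        out.append(alpha * (out[-1] + samples[i] - samples[i - 1]))
    return out

def normalize(samples, peak=0.999):
    """Scale so the largest absolute sample hits `peak` (near full scale)."""
    m = max(abs(v) for v in samples)
    return [v * (peak / m) for v in samples]

# quiet 1 kHz tone at about -40 dBFS riding on heavy 10 Hz "rumble"
sig = [0.01 * math.sin(2 * math.pi * 1000 * n / FS)
       + 0.3 * math.sin(2 * math.pi * 10 * n / FS)
       for n in range(FS)]
prepped = normalize(highpass(sig))
print(max(abs(v) for v in prepped))
```

The point of filtering first is visible in the numbers: without the highpass, the rumble peak would dominate and limit the normalization gain to a fraction of what the music alone allows.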
Ok, before we start listening with our eyes
...I can host the 44.1k/16-bit sample used for the test, or you can use some other one extracted from a CDA. Trust me, you will hear the noise.
There is the "Dither Transform Results (increases dynamic range)" setting. Was/is it checked?
I'll leave you audiophiles alone now.
A perfectly reasonable proposal, if one really feels that these issues need to be 'proved' via ABX.
But I don't really understand anyone taking issue with my stating that:

1) if substantial normalization is needed with an 'under-recorded' 24-bit file, carrying it out at 24 bits will make better use of the resolution of the eventual 16-bit file; and
2) any and all DSP (including normalisation) is better applied while in 24 bits.
All I ever questioned was whether there is an audible difference in the context of doing LP transfers. You believe there is, I don't. We can both be happy with our positions. And when you're talking about LP transfers, all this stuff about quantisation noise is like arguing about angels on the head of a pin compared to the *really* important stuff, like getting the LP properly clean, using a good turntable and phono preamp, etc.
Possibly what's been lost sight of is that I need to use *substantial* amounts of digital gain on some recordings due to the level-matching issues I described. There is simply no reason for me to throw out all those bits before I do it.
If I feel the inclination, I might try a few DSP operations, such as HF boost, on a recording sometime, in 24 and 16 bits, and see if the final 16/44 files (or CDRs) can be ABX'd.
Quote from: 2Bdecided on 30 August, 2006, 12:10:08 PM: It's got nothing to do with "sounding realistic". With correct dither, it's about noise - pure and simple. It can be just four bits - it will sound perfectly "realistic", but with bucket loads of noise on top!

It has everything to do with 'realism'. 16/44 typically introduces several percent quantization distortion below -70 dB (typically, played back through a good, linear DAC, it reaches 6-10% by -80 dB, and more distortion than signal by -100 dB).
I have to say, as something of a vinyl die-hard, I've always viewed the 'dynamic range' attributed to CD to be very generous. Typical distortion through a top-notch DAC is something like this:

-60 dB - 0.22%
-70 dB - 3%
-80 dB - 8%
-90 dB - 30+%
-100 dB - distortion = signal
The nature of this distortion is little discussed, but it needs to be understood that it is not like the evenly distributed harmonic distortion (typically mostly 2nd, some 3rd, a little 4th, and so on) that predominates in the analogue domain - it is randomly distributed 'quantization noise', and it is extremely obnoxious at levels over a fraction of a percent.
edit >> we could also discuss CD's notional 'bandwidth' or 'time-domain resolution'. Perhaps not.
What do you think?
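For what it's worth, the trend (though not any particular DAC's figures) can be probed numerically. This hypothetical pure-Python sketch measures the RMS error-to-signal ratio of a plain *undithered* 16-bit rounding quantizer driven by sines at falling levels; it says nothing about real hardware, and with proper dither this level-dependent error becomes level-independent noise instead.

```python
import math

FS = 44100
STEP = 1.0 / 32768   # one 16-bit LSB on a [-1, 1) scale

def err_ratio(level_db, freq=997.0, seconds=1.0):
    """RMS quantization error divided by RMS signal for an undithered
    16-bit rounding quantizer fed a sine at `level_db` dBFS."""
    amp = 10 ** (level_db / 20)
    n = int(FS * seconds)
    sig = [amp * math.sin(2 * math.pi * freq * i / FS) for i in range(n)]
    q = [round(v / STEP) * STEP for v in sig]
    e_rms = math.sqrt(sum((a - b) ** 2 for a, b in zip(q, sig)) / n)
    s_rms = math.sqrt(sum(v * v for v in sig) / n)
    return e_rms / s_rms

ratios = {db: err_ratio(db) for db in (-60, -80, -90)}
print(ratios)   # error/signal grows as the level falls
```

The measured ratios do climb steeply as the level drops, which is the behaviour being described; the disagreement in the thread is over whether dither renders it benign noise rather than distortion.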
can't know what any particular individual hears
Quote from: RockFan on 30 August, 2006, 12:41:28 PM: Typical distortion through a top-notch DAC is something like this: -60 dB - 0.22%, -70 dB - 3%, -80 dB - 8%, -90 dB - 30+%, -100 dB - distortion = signal

It doesn't matter how many times you say this, or how many times some idiot audiofools say it on other boards, it doesn't make it true!
Typical distortion through a top-notch DAC is something like this:

-60 dB - 0.22%
-70 dB - 3%
-80 dB - 8%
-90 dB - 30+%
-100 dB - distortion = signal
And BTW matey, you'll probably find the use of insults, even if you think you're being clever and using them obliquely, is in breach of the TOS.