Why is perfect resampling impossible?
Reply #20 – 2004-08-28 07:18:20
> The same goes if you use linear interpolation. The error is reduced, but you may still introduce audible distortion.

> You're right, linear interpolation is not good for this. However, there is a very simple formula for the "perfect" interpolation filter: cf = sin(d*pi) / (d*pi), where cf is the coefficient (weight) for a given sample and d is the distance (in samples) from that sample to the desired interpolation position. You simply multiply every sample by its coefficient and add them all up to get the interpolated value. The practical problem is that the sinc doesn't taper quickly enough as you move away from the center, so in real-world applications you have to weight the whole filter with a fixed-length windowing function.

Wow. Just add up a bunch of sinc impulses. It's that simple!

But there's still the problem of what to assume the samples past either end of the song are. The obvious thing to do is assume they're 0, and only calculate the function for the samples actually contained in the file. However, in real life, the song will probably be followed by another song, which will tweak the frequencies slightly. Imagine a sine wave chopped into two files, where the file boundary falls at a waveform maximum. On resampling the waveforms separately, Gibbs' Phenomenon will rear its ugly head and give ripples at either end of the files. This is correct if the files are to be played completely separately, but incorrect if they will be played together.

All in all, though, that would seem to be the perfect filter, barring a few pedantic real-world technicalities. Seeing as there's no way to know what will follow in anybody's playlist, it looks like the "most perfect" possible.
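To make the idea concrete, here's a minimal sketch of that windowed-sinc interpolation in Python. The function name, the Hann window choice, and the half-width of 16 samples are my own assumptions for illustration; the quoted posts only specify the sinc formula and that *some* fixed-length window is needed. Samples outside the file are treated as 0, as discussed above.

```python
import math

def windowed_sinc_interp(samples, t, half_width=16):
    """Interpolate `samples` at fractional position `t` (in samples)
    using a Hann-windowed sinc kernel of 2*half_width taps.
    Samples outside the list are assumed to be zero."""
    total = 0.0
    center = int(math.floor(t))
    for n in range(center - half_width + 1, center + half_width + 1):
        d = t - n  # distance from sample n to the interpolation point
        if d == 0.0:
            cf = 1.0  # sin(0)/0 -> 1 by the limit
        else:
            cf = math.sin(d * math.pi) / (d * math.pi)
        # Hann window: tapers the sinc to zero so a finite sum works
        cf *= 0.5 * (1.0 + math.cos(math.pi * d / half_width))
        if 0 <= n < len(samples):  # outside the file, assume silence
            total += samples[n] * cf
    return total
```

For a slowly varying sine sampled well below Nyquist, interpolating halfway between two samples lands very close to the true waveform value; the residual error is exactly the windowing compromise the second quote mentions.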