I would suggest you consider the possibility of the former, while we consider the possibility of the latter and try to understand exactly what you're trying to convey here.
I presented an intuitive guess at the probability of accuracy there to get across what I am generally talking about: the uncertainty of PCM records whose potential sources had significantly higher bandlimits, as is often the case with Red Book standard PCM (downsampled from production formats) and others. It is the unknown frequencies above the sample rate's implicit bandlimit which cause this uncertainty. We interpret the PCM record as though the frequencies beyond the bandlimit must always have been flat, but in, for example, a production format sampled at 96kHz, they were not necessarily flat (or else there would be little point in using those formats).
Quote from: ChiGung on 08 October, 2006, 04:44:28 PM
....snip vaguest part of my post....
Precisely - that's why you must low-pass the signal BEFORE sampling, otherwise the content above Fs/2 will get mixed with the frequencies below Fs/2, causing aliasing. It isn't rocket science. The straightforward solution for being able to detect "spikes" is to increase the sample rate, but that by no means implies that the sampling theorem is flawed in any way. It does imply that you must sample fast enough to have perfect reconstruction, at least from a mathematical point of view. In practice we all know that there's no ADC with a perfect Dirac delta. I'm still waiting to see your MATLAB code proving everyone wrong.
....snip vaguest part of my post....
The example of the 'tekkie' locating the spike too precisely with a record was a straightforward one. The rebuttal of the example - that since the spike could indicate any position, it must securely indicate the true position - was invalid, because to ensure the precise positioning of the spike's peak at the true peak, all the other samples would have to be employed to refine that single detail, and they cannot normally be employed just to do that, as they have to convey their own detail as well.
It's not magic.
Yeah, you did forget that. There is no downsampling involved there, just a shifting of a record.
The formula described by KikeG (and others) does help, but it doesn't really satisfy me. It seems to coincide well with what I've computed (with 1/20000 sample delays being feasible) - I suppose that the 1/(fs*2^n) figure is a theoretical limit rather than an upper bound on the period, and depending on the vagaries of the upsampling/downsampling implementations, the real testable interval may wind up being much larger. (In theory, I ought to be able to get a 1/65536 sample delay working?)
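As an aside, a sub-sample delay like the 1/20000-sample figure above really is recoverable from PCM data. Below is a minimal sketch of my own (not anyone's posted MATLAB code): it delays a pure tone by 1/20000 of a sample and recovers the delay from the cross-spectrum phase. The tone frequency, signal length, and window are arbitrary choices for illustration.

```python
import numpy as np

fs = 44100.0
f0 = 1000.0
delay = 1.0 / 20000.0          # delay in samples, far below one sample period
n = np.arange(4096)
w = np.hanning(len(n))         # same window on both signals, so leakage cancels

a = np.sin(2 * np.pi * f0 * n / fs)               # reference tone
b = np.sin(2 * np.pi * f0 * (n - delay) / fs)     # same tone, delayed

A = np.fft.rfft(a * w)
B = np.fft.rfft(b * w)
k = int(round(f0 * len(n) / fs))                  # FFT bin nearest the tone
phase = np.angle(A[k] * np.conj(B[k]))            # ~ 2*pi*f0*delay/fs
est = phase * fs / (2 * np.pi * f0)               # recovered delay, in samples

print(est, delay)
```

The point is that nothing here is quantised to the sample period: the delay information is spread across the whole record as phase, exactly as argued above.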
Fundamentally, the 11kHz PCM's potential to resolve detail 'should' seem to be 1/4 of the CD's 44kHz record. That would go: 44kHz pcm 'time resolution' = 4 * 11kHz pcm 'time resolution'
Oddly, here and elsewhere this formula is eclipsed by not insubstantial excursions into test-pattern replications and counterintuitive textbook quotations.
A plot could be made of their distribution of correlation; perhaps it would tend to be a bell curve? For pink noise only? What would the limits of correlation be?
For example, if you have a bass drum and a hi-hat playing at the same time, most of the waveform excursion will be due to the bass drum (which will survive the 5.5kHz low-pass filter), but the exact peak location will also depend on the "wiggles" in the waveform due to the hi-hat itself. These high-frequency "wiggles" will be butchered by a 5.5kHz low-pass filter, so the peak will move!
The only way you can be absolutely sure that it's a fair experiment, and that the low pass filter isn't significantly moving the peak by removing part of the signal that forms the peak itself, is to ensure that the low pass filter doesn't remove anything. - i.e. that the original doesn't contain any frequencies above 5.5kHz, or, to put it another way, that the downsampled version still satisfies Nyquist with reference to the original content.
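The bass-drum + hi-hat argument is easy to reproduce numerically. Here is a toy version of my own (the frequencies and amplitudes are made up for illustration): a 60 Hz 'bass' plus a small 8 kHz 'wiggle', where an ideal 5.5 kHz brick-wall lowpass simply removes the 8 kHz term. The waveform peak lands on a wiggle crest near the bass crest, so removing the wiggle moves the peak by several microseconds.

```python
import numpy as np

# One full cycle of the 60 Hz 'bass', evaluated on a very fine time grid.
t = np.linspace(0.0, 1.0 / 60.0, 1_000_000, endpoint=False)
bass = np.sin(2 * np.pi * 60.0 * t)                # survives the lowpass
wiggle = 0.2 * np.sin(2 * np.pi * 8000.0 * t)      # removed by the lowpass

peak_full = t[np.argmax(bass + wiggle)]   # peak location of the original
peak_lp = t[np.argmax(bass)]              # peak location after ideal lowpass

shift_us = abs(peak_full - peak_lp) * 1e6
print(f"peak moved by ~{shift_us:.1f} microseconds")
```

With these particular numbers the peak moves by roughly ten microseconds, which illustrates the point: the lowpass is not a fair operation to apply before measuring peak positions unless the original had no content above the cutoff.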
Quote
Fundamentally, the 11kHz PCM's potential to resolve detail 'should' seem to be 1/4 of the CD's 44kHz record. That would go: 44kHz pcm 'time resolution' = 4 * 11kHz pcm 'time resolution'
You are implying that these two things are directly proportional in a real and limiting sense, whereas, until you get to the absolute limit (many orders of magnitude better than the limits of human hearing, and many orders of magnitude better than anything we expect the system to achieve), the two things are completely independent.
Quote
A plot could be made of their distribution of correlation, perhaps it would tend to be a bell curve? for pink noise only? What would the limits of correlation be?
In a suitably controlled version of your experiment (like mine) it wouldn't be a bell curve, it would be a single point! That would certainly be the case with the parameters you propose (44.1 > 192kHz).
So, in short, you want to run an experiment to see what effect a low pass filter has?
It is almost the same thing, although actually doing the full downsample (as well as its implied lowpass) investigates the attained quality of the full process, so it would be preferable for this purpose of actually proving sub-sample source/record ambiguity.
There are two issues at stake here. The first is the question of the audibility of low pass filters. This has been dealt with here and elsewhere at great length and could be easily and rigorously tested. Such a test has been done before, but a repeat including filters with non-flat phase responses might offer some new information.
The second is that you seem to doubt whether lowpass->sample->reconstruct can be shown to have the same effect as just the lowpass.
Without quantization, the theory says that the two processes are identical. If you wish to question this then a mathematical treatment will probably be necessary before your demonstration is accepted.
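A numerical sanity check of that identity (my own sketch, not a mathematical treatment): for a periodic signal whose harmonics sit on exact FFT bins, zero-padding the spectrum IS ideal sinc reconstruction, so the sampled-then-reconstructed signal can be compared directly against the continuous formula. The signal and the factors below are arbitrary choices.

```python
import numpy as np

N, L = 256, 8      # N samples; reconstruct on an L-times finer time grid
n = np.arange(N)
# Band-limited periodic test signal: two harmonics, both far below Nyquist.
x = np.sin(2 * np.pi * 3 * n / N) + 0.5 * np.sin(2 * np.pi * 17 * n / N + 0.4)

X = np.fft.rfft(x)
Xup = np.zeros(N * L // 2 + 1, dtype=complex)
Xup[: len(X)] = X                    # same spectrum, wider Nyquist band
xup = np.fft.irfft(Xup, n=N * L) * L # ideal (periodic) sinc interpolation

t = np.arange(N * L) / L             # times in units of the original sample period
ref = np.sin(2 * np.pi * 3 * t / N) + 0.5 * np.sin(2 * np.pi * 17 * t / N + 0.4)
err = np.max(np.abs(xup - ref))
print(err)
```

The reconstruction agrees with the underlying continuous signal to machine precision, i.e. for an already band-limited signal, sampling and ideal reconstruction changes nothing.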
It's a fair enough experiment to ask an undergrad to do in order to practice computer programming and audio processing, but in terms of what it actually tells you about anything, all I can do is just sit here slowly shaking my head!
FWIW, given a random selection of audio signals (real or synthetic), the lower the low pass filter, the further the peaks will move (and, to say almost the same thing differently, the more peaks will completely disappear). The major stumbling block to doing the experiment exactly as you propose will be in determining when a peak has moved vs when a peak has vanished - or, to put it another way, tracking the "same" peak between different versions. Various possible attempts to do this "correctly" (and it will be near-impossible) will mean your results might be unexpected!
So now you've invented a new definition in order to prove the opposite. Your success here will not be down to your experiment (which will certainly show some change), but down to your strange definition of time resolution.
I also don't see the point in checking the positions of zero crossings or peaks after lowpassing. This won't prove anything except that, if you further limit the bandwidth of a signal, these points may move, vanish, or appear at places where there previously haven't been any.
Example signal: the first two harmonics of a square wave - you'll get 4 peaks within a cycle.
After lowpassing (only the fundamental left): 2 peaks within a cycle (it's a sine).
Tell me what we have learned by that, ChiGung.
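For the record, that peak count is easy to confirm numerically. The first two (odd) harmonics of a square wave are sin(x) + sin(3x)/3; counting local maxima of the absolute value (i.e. peaks of either polarity) gives 4 per cycle before the lowpass and 2 after:

```python
import numpy as np

N = 1000
t = np.arange(N) / N                                      # one cycle
two_harm = np.sin(2 * np.pi * t) + np.sin(6 * np.pi * t) / 3
fundamental = np.sin(2 * np.pi * t)                       # after lowpassing

def count_peaks(x):
    """Count local maxima of |x|, i.e. peaks in either polarity."""
    m = np.abs(x)
    return int(np.sum((m[1:-1] > m[:-2]) & (m[1:-1] > m[2:])))

print(count_peaks(two_harm), count_peaks(fundamental))    # 4 and 2
```

Which is exactly the point being made: the lowpass didn't "lose time resolution", it removed content, and the peaks that depended on that content went with it.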
In the context of transform coding, time resolution usually refers to the partition of the time/frequency plane that's done by a critically-sampled filterbank, AFAIK. Without any noise shaping filter tricks, this will effectively limit how well we can control the quantization noise distribution in specific time/frequency regions only by choosing scalefactors. However, noise shaping filters can be and usually are used to improve this. (With "ANS" enabled, Musepack can do better in terms of controlling the noise's distribution in the frequency domain than what the filterbank suggests. With "TNS" enabled, AAC can do better in terms of controlling the noise's distribution in the time domain than what the filterbank suggests.)