Topic: What is "time resolution"?

What is "time resolution"?

Reply #50
I would suggest you consider the possibility of the former, as we consider the possibility of the latter and try to understand exactly what you're trying to convey here.

Deal
no conscience > no custom

What is "time resolution"?

Reply #51
I think it is being claimed, almost unanimously, that the precise location of 'conditions' (which would indicate 'events' or 'energy spikes') in a PCM record accurately informs us of their precise location in real time.
I have been trying to explain why this is untrue: that in real time, rather than in the sampled approximation of it, the precise time of any condition can differ from what is ideally indicated by the record by up to a sample interval (or maybe half a sample interval; I am not certain of the amount).

The correlation between the record's indication of the timing of instantaneous conditions (such as level = x) and their actual timing would be something like this (loosely, from intuition):

Indicated time is within 1 sample period of real time: odds ~1/1
Indicated time is within 1/2 sample period of real time: odds ~1/2
Indicated time is within 1/4 sample period of real time: odds ~1/3
Indicated time is within 1/8th sample period of real time: odds ~1/4

I have presented an intuitive guess at the probability of accuracy there to drive home what I am generally talking about: the uncertainty of PCM records with regard to potential sources having significantly higher band limits, as is often the case with Redbook-standard PCM (downsampled from production formats) and others.

It is the unknown frequencies above the sample rate's implicit band limit which cause this uncertainty. We interpret the PCM record as though the frequencies beyond the band limit must always have been flat, but in, for example, a production format's 96kHz sample rate they were not necessarily flat (or else there would be little point in using those formats).

The example of the 'tekkie' locating the spike with a record too precisely was a straightforward one.
The rebuttal, that the spike could indicate any position and therefore must securely indicate the true position, was invalid because, to ensure the spike's peak is precisely positioned at the true peak, all the other samples would have to be employed to refine that single detail, and they cannot normally be employed just to do that, as they have to convey their own detail as well.
Revisit the lumpy mattress metaphor. It is not a silly one.

The situation I have been pointing out is very complex, with great subtleties and many gotchas involved. I am very familiar with the technology's limitations because I have spent a great deal of time pondering it and programming for it, particularly over the past year. I have, for example, completed my own frequency analyser from first principles, without reference to any textbooks or reported methods. It produces very fine output, and its mechanics are now being employed in my own compression codec. Unfortunately it will be a long time before I'll be able to talk about it in detail in public. But you see (unless I'm lying), I am not wishfully lecturing in an area in which I have no experience. I don't believe I have said anything unconfirmable or insensible in this thread (at least, nothing that is not transient to the argument; everyone's human). There may be certain mistakes or difficulties of expression to get caught up in, but the subject is outside many people's familiarity zone, even people involved here. If anyone reads my explanations open-mindedly, links the parts of the explanations which they can interpret and skips the bits they can't, there is a good chance they will acknowledge an under-reported aspect of PCM 'time resolution'.

I will leave this thread now, confident in the explanation I've invested here.
If it really is as silly as everyone seems to think it is, I guess it will end up in the recycle bin, but I do believe that it would be an uncommon shame on HA.org to do so.

Sincerely.
twerpy' smartass' fat-tongued, Cheegunge
no conscience > no custom

What is "time resolution"?

Reply #52
Quote
I have presented an intuitive guess at the probability of accuracy there to drive home what I am generally talking about: the uncertainty of PCM records with regard to potential sources having significantly higher band limits, as is often the case with Redbook-standard PCM (downsampled from production formats) and others.

It is the unknown frequencies above the sample rate's implicit band limit which cause this uncertainty. We interpret the PCM record as though the frequencies beyond the band limit must always have been flat, but in, for example, a production format's 96kHz sample rate they were not necessarily flat (or else there would be little point in using those formats).


Precisely: that's why you must low-pass the signal BEFORE sampling, otherwise the content above FS/2 will get mixed with the frequencies below FS/2, causing aliasing. In a bandlimited signal there is no "spike" that cannot be represented in the sampled version, even if it lies between two samples. It isn't rocket science.
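
To see the aliasing concretely, here is a quick sketch (Python/NumPy, since no code has been posted in this thread; the 30 kHz tone is an arbitrary example of mine):

Code
import numpy as np

fs = 44100                           # CD sample rate
t = np.arange(fs) / fs               # one second
x = np.sin(2 * np.pi * 30000 * t)    # 30 kHz tone, sampled with NO anti-alias filter

# The sampled data is indistinguishable from a (44100 - 30000) = 14100 Hz tone:
spectrum = np.abs(np.fft.rfft(x))
print(np.argmax(spectrum))           # ~14100 (bins are 1 Hz wide here), not 30000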

The straightforward solution to be able to capture your so-called "spikes" is to increase the sample rate, but that by no means implies that the sampling theorem is flawed in any way. The theorem does imply that you must sample fast enough to have perfect reconstruction, at least from a mathematical point of view. In practice we all know that there's no ADC with a perfect Dirac delta.

I'm still waiting to see your MATLAB code proving everyone wrong.

What is "time resolution"?

Reply #53

Quote
....snip vaguest part of my post....

Precisely: that's why you must low-pass the signal BEFORE sampling, otherwise the content above FS/2 will get mixed with the frequencies below FS/2, causing aliasing. It isn't rocket science.

The straightforward solution to be able to detect "spikes" is to increase the sample rate, but that by no means implies that the sampling theorem is flawed in any way. It does imply that you must sample fast enough to have perfect reconstruction, at least from a mathematical point of view. In practice we all know that there's no ADC with a perfect Dirac delta.

I'm still waiting to see your MATLAB code proving everyone wrong.

Another one tries to wriggle out from under the full case that has been set on a plate for you and seasoned liberally.
Don't say I'm talking about detecting 'spikes' just after I have described the uncertainty of detecting the real-time location of any 'conditions', such as spike peaks, level values or waveform gradients: any condition that could be located at an instant of time indicated by a PCM record.

I've never gone near MATLAB because I don't need it. I code in low-level Java/C syntax, and what I code ends up working, sir.

Trying to drag it back to who was right or wrong like that is pathetic.
no conscience > no custom

What is "time resolution"?

Reply #54
Fine, forget about the code and try to provide mathematical proof of your statements instead of blabbering. I'm sure a person who calls himself smartass will be able to provide such a proof.

FYI, Java is NOT a low-level language; it is actually FAR from being one. C is closer, but it isn't considered low-level either: the usual term for C is middle-level.

What is "time resolution"?

Reply #55
If I understand you right, you are saying that the time when the signal reaches a certain level in the recorded PCM waveform may be different from the time in reality.

This is equivalent to saying that the recorded waveform is different from the real one. Which is true if you are sampling with less than twice the highest frequency which will occur in the source.

The signal will be reconstructable to great accuracy if it is bandlimited to half the sampling frequency. It won't if it isn't. Why make it so complicated?

What is "time resolution"?

Reply #56
Quote
The example of the 'tekkie' locating the spike with a record too precisely was a straightforward one.
The rebuttal, that the spike could indicate any position and therefore must securely indicate the true position, was invalid because, to ensure the spike's peak is precisely positioned at the true peak, all the other samples would have to be employed to refine that single detail, and they cannot normally be employed just to do that, as they have to convey their own detail as well.


This is the heart of your misunderstanding.

A bandwidth limit (we agree there is such a thing in PCM) implies that what you believe to be some kind of contradiction is in fact the simple reality of the situation. Let me show you why with something less abstract...


For it to work properly, PCM requires two filters - one anti-alias at (before) the A>D, the other anti-image at (after) the D>A.


Forgetting PCM for a second, if those filters themselves cause an audible problem, then we have a problem. I don't think we do. However, you have expressed a wish to tackle this issue separately, so let us leave it to one side for now.


So, we have two filters. If _both_ filters block everything above fs/2, then the sampling stage itself will be transparent - lossless, if you like. In other words, these two systems would be identical...

1. input, filter1, filter2, output
2. input, filter1, sampling (no quantisation), filter2, output

Indeed, if you have two black boxes containing systems 1 and 2, there would be no way to tell these boxes apart (though number 2 may introduce a time delay in practice).

If you believe this to be false, you must bring something to disprove it. (This would disprove Nyquist, so good luck!).
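
For what it's worth, the claim is easy to check numerically. A minimal sketch (Python/SciPy; the 4:1 rate ratio and the 1001-tap FIR are arbitrary choices of mine, and "sampling" is modelled by zeroing the skipped samples):

Code
import numpy as np
from scipy.signal import firwin

rng = np.random.default_rng(0)
x = rng.standard_normal(1 << 15)          # arbitrary wideband input

h = firwin(1001, 0.2)                     # lowpass below fs/8: filter1 = filter2
lp = lambda s: np.convolve(s, h, mode='same')

y1 = lp(lp(x))                            # system 1: filter1, filter2

d = lp(x).copy()                          # system 2: filter1, sampling, filter2
d[np.arange(d.size) % 4 != 0] = 0.0       # "sample" at fs/4 (no quantisation)
y2 = 4 * lp(d)                            # anti-image filter; gain 4 restores level

print(np.max(np.abs(y1 - y2)))            # tiny; set by the FIR's finite stopband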


Your example of non-adjacent samples having an impact on the apparent position of an inter-sample peak does not disprove it - this is just a consequence of the required filtering. Even a novice in filter design knows that output sample number N depends on the values of more than one input sample, unless the filter is a non-filter! This is all that is at work here. It's not magic. It's not a problem either.

Cheers,
David.

What is "time resolution"?

Reply #57
Here are some nice pictures...

[attachment=2591:attachment]

I worked at 16-bits throughout.

I started at 441kHz (i.e. 10x the CD sample rate). I generated a single impulse. To prove the point, I also generated a second impulse one sample later on the other stereo channel. This is the left-hand pair of plots.

I resampled to 44.1kHz (i.e. the CD sample rate). The result is shown in the middle pair of plots. Interestingly, Cool Edit Pro's visual interpolation hints at what is represented by those samples - i.e. a 1/10th-of-a-sample time delay between the two channels.

I resampled back to 441kHz. The result is shown in the right-hand pair of plots. The peaks of the waveforms are clearly in the correct place relative to each other. Time resolution equivalent to 1/10th of a sample at 44.1kHz clearly survives conversion to that sample rate.


Of course the peaks are low amplitude (less energy) and longer (spread out in the time domain) - but this is just what happens when you low pass filter a click.


So, this is a simple, repeatable example proving the sub-sample accuracy of sampled systems, without a sine wave in sight!
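
For anyone who wants to reproduce this without Cool Edit Pro, here is a sketch of the same steps (Python/SciPy, my reconstruction of the experiment described above; I have skipped the 16-bit quantisation, which doesn't change the outcome):

Code
import numpy as np
from scipy.signal import resample_poly

n = 44100                                  # 0.1 s at 441 kHz (10x CD rate)
n0 = n // 2
left = np.zeros(n);  left[n0] = 1.0        # single impulse on the left channel
right = np.zeros(n); right[n0 + 1] = 1.0   # one 441 kHz sample later, i.e. a
                                           # 1/10-sample delay at 44.1 kHz

# Down to 44.1 kHz and back up (resample_poly applies the bandlimiting filters):
rt = lambda s: resample_poly(resample_poly(s, 1, 10), 10, 1)
left_rt, right_rt = rt(left), rt(right)

# The impulses come back lower and wider (lowpass filtered), but the peak
# positions, and hence the inter-channel delay, survive the round trip:
print(np.argmax(right_rt) - np.argmax(left_rt))   # expect 1 (at 441 kHz)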

Cheers,
David.


What is "time resolution"?

Reply #59
So I was mainly pissed off in my earlier post because it looked like this was about to turn into a rematch of ChiGung vs. the world. Which wound up happening, and it's not like I agree with him on much, but I already went through all of that on SH.tv.

The formula described by KikeG (and others) does help, but it doesn't really satisfy me. It seems to coincide well with what I've computed (with 1/20000 sample delays being feasible) - I suppose that the 1/(fs*2^n) number is a theoretical limit rather than an upper bound on period, and depending on the vagaries of the upsampling/downsampling implementations, the real testable interval may wind up being much higher. (In theory, I ought to be able to get a 1/65536 sample delay working?)

This gives a lot of wiggle room for audiophiles to claim that there could be large differences in performance based on how good the upsampling/downsampling filters are, resulting in numeric performance improvements to the minimum reproducible delay. However, one could pretty conclusively argue that even the implementation-tested periods are lower than the minimum audible delays by a wide margin. And it does give an exact definition to beat people over the head with, which is what I wanted.

What is "time resolution"?

Reply #60
Quote
Yeah you did forget that. There is no downsample involved there, just a shifting of a record.


Now you're simply being evasive.

Since the signal in question will not be affected by any decent lowpass filter (it has no out-of-band components), your assertions are shown to be wrong.
-----
J. D. (jj) Johnston

What is "time resolution"?

Reply #61
Quote
The formula described by KikeG (and others) does help, but it doesn't really satisfy me. It seems to coincide well with what I've computed (with 1/20000 sample delays being feasible) - I suppose that the 1/(fs*2^n) number is a theoretical limit rather than an upper bound on period, and depending on the vagaries of the upsampling/downsampling implementations, the real testable interval may wind up being much higher. (In theory, I ought to be able to get a 1/65536 sample delay working?)


Well, consider this...

Let us take a sine wave at nearly half the sampling frequency...

Its slope (using +-1 for amplitude) is 2*pi*f at maximum (at the zero crossing).

You want to figure out when you can distinguish one LSB over a timing offset dt. That's when
2*pi*f*dt > 2/(2^bits).

Give or take.  Since we have to dither, the numerator of the right side is bigger by some extent.

This is where the number comes from. It's a classic phase analysis problem.
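
Plugging numbers in (a quick check; 16 bits and a tone near 20 kHz are my example figures, not from the post):

Code
import math

bits, f = 16, 20000.0
lsb = 2 / 2**bits                # one LSB step on a +-1 scale
dt = lsb / (2 * math.pi * f)     # time for the steepest part of the sine to move one LSB
print(dt)                        # ~2.4e-10 s: sub-nanosecond, before any averaging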

Of course, when we dither, we can also average many cycles and get a better result.

Now, as strange as it seems, the single-cycle example is directly germane to the question at hand.

I leave it to ChiGung to explain to us how this completely controverts all of his assertions.
-----
J. D. (jj) Johnston

What is "time resolution"?

Reply #62
Hello all, I left this discussion in a tizz and have only just checked back to find these constructive replies.

A thought experiment occurred to me which would illustrate my point about PCM's 'time resolution', and which could be performed computationally to generate precise data... I'll set it out, hopefully:

I assume 'resolution' can refer to the ability to resolve discrete details of the source material in PCM records. And there must be a difference between the potential for detail resolution in 'the source' and that in 'the record'.
E.g. if the source was a CD, and the target record was 11kHz PCM, then their potential to have detail resolved in them would differ.
Fundamentally, the 11kHz PCM's potential to resolve detail 'should' seem to be 1/4 that of the CD's 44kHz record.

That would go,
44kHz pcm 'time resolution' = 4* 11kHz pcm 'time resolution'

Oddly, here and elsewhere this formula is eclipsed by not insubstantial excursions into test-pattern replications and counter-intuitive textbook quotations.

If I needed exact data on the capability of 'time resolution' in PCM records, here is how I would go about generating it:
Write a small program to read in PCM and locate the exact times of specifiable conditions in it, conditions such as level = 0, level = p(test), gradient = 0, or gradient = p.
To avoid porting in, or trying to write, my own bandlimited solution of the PCM record, I'd write the code for simple linear interpolation and feed it high-quality upsamples, in order to achieve near 'bandlimited accuracy' in discerning the 'time location' of 'conditions'.

So the program reads in a bandlimited upsample of the source PCM and generates a list of the times of all matching conditions found/resolvable within the upsample.

e.g. high-quality upsample of a CD track at 44kHz to ~192kHz
> list of the times of peaks and troughs (gradient = 0) found in the 192kHz PCM rendering.

Next, high-quality downsample the CD track to 11kHz (1/4 the sample rate).
Then upsample to 192kHz again and generate its list of (gradient = 0) times.
(The upsample to 192kHz is only to facilitate high-quality bandlimited interpolation.)

At this stage in the thought experiment I would note that, although both lists of timings look for the same condition, there may be considerably fewer occurrences of the condition (peaks or troughs) found in the pre-downsampled record, depending on the nature of the source material.

The two lists plotted on a graph should show an observable time correlation between conditions found in each record, as well as 'orphaned' conditions represented only in the higher-rate PCM.

Disregarding the orphaned conditions, the detail of 'time resolution' rests on how closely correlated the pairable condition times turn out to be.
A plot could be made of their distribution of correlation; perhaps it would tend to a bell curve? For pink noise only? What would the limits of correlation be?

Additional explorations: compare the accuracy of correlation of surviving details, CD to 11kHz, then CD to various other rates; white noise at some rates, then pink noise, etc. Also do some comparisons using different upsample rates, to discern the inherent inaccuracy of the program's simpler linear discernments.

If we can spot a condition occurring at a time in a PCM record, then with correlation data we could state the probability of that condition occurring within a given temporal distance in higher-rate records of the same kind of source material.
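
To make the locating step concrete, it might look something like this (a sketch in Python/SciPy rather than my Java, using a library resampler in place of the hand-rolled linear interpolation; peak_times is a hypothetical helper):

Code
import numpy as np
from scipy.signal import resample_poly, find_peaks

def peak_times(x, fs, up=4):
    """Bandlimited upsample by `up`, then return the times (in seconds) of
    the local maxima (gradient = 0 conditions) found in the upsampled record."""
    y = resample_poly(x, up, 1)
    idx, _ = find_peaks(y)
    return idx / (fs * up)

# e.g. for a 44.1 kHz record x:
#   t44 = peak_times(x, 44100)
#   t11 = peak_times(resample_poly(x, 1, 4), 11025)
# then pair each entry of t11 with its nearest neighbour in t44 and plot
# the distribution of the differences, as described above.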

It would be interesting to look at.

If I ever spend my sparse powers of collected concentration to generate the info myself, I'll post it here, for all your troubles.

best'
cg
no conscience > no custom

What is "time resolution"?

Reply #63
Quote
44kHz pcm 'time resolution' = 4* 11kHz pcm 'time resolution'


For a fixed frame size,

44kHz pcm 'frequency resolution' = 4*11kHz pcm 'frequency resolution',

44kHz pcm 'time resolution' = 0.25*11kHz pcm 'time resolution'
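
In numbers, for an arbitrary fixed frame of 1024 samples (a quick sketch of mine):

Code
fs_hi, fs_lo, N = 44100, 11025, 1024
print(N / fs_hi, N / fs_lo)   # frame duration: ~23.2 ms vs ~92.9 ms ('time resolution')
print(fs_hi / N, fs_lo / N)   # bin width: ~43.1 Hz vs ~10.8 Hz ('frequency resolution')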

What is "time resolution"?

Reply #64
ChiGung,

Your experiment wouldn't work. By knocking the sample rate down to 11kHz (and implicitly limiting the bandwidth to 5.5kHz) you would change the waveform dramatically. For simple synthetic waveforms, we could say correctly whether none, some, or all peaks would stay in the same place depending on the content of the original signal. However, for complex waveforms, we can't say anything sensible about what would happen to individual waveform peaks.

For example, if you have a bass drum and a high hat playing at the same time, most of the waveform excursion will be due to the bass drum (which will survive the 5.5kHz low pass filter), but the exact peak location will also depend on the "wiggles" in the waveform due to the high hat itself. These high frequency "wiggles" will be butchered by a 5.5kHz low pass filter, so the peak will move!
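
A toy version of this is easy to run (my own numbers: a 60 Hz "bass" plus an 8 kHz "wiggle", with the ~5.5 kHz lowpass done as a 4:1 downsample and back; the exact indices depend on the resampler):

Code
import numpy as np
from scipy.signal import resample_poly

fs = 44100
t = np.arange(fs) / fs
x = np.sin(2*np.pi*60*t) + 0.3*np.sin(2*np.pi*8000*t)   # bass + "wiggles"

y = resample_poly(resample_poly(x, 1, 4), 4, 1)         # ~5.5 kHz lowpass

w = slice(0, fs // 60)                                  # first bass cycle
print(np.argmax(x[w]), np.argmax(y[w]))                 # the peak sample moves
                                                        # once the wiggles are gone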

The only way you can be absolutely sure that it's a fair experiment, and that the low pass filter isn't significantly moving the peak by removing part of the signal that forms the peak itself, is to ensure that the low pass filter doesn't remove anything - i.e. that the original doesn't contain any frequencies above 5.5kHz, or, to put it another way, that the downsampled version still satisfies Nyquist with reference to the original content.

Nyquist is right yet again - what a surprise!



The basic problem is here:

Quote
Fundamentally, the 11kHz PCM's potential to resolve detail 'should' seem to be 1/4 that of the CD's 44kHz record.

That would go,
44kHz pcm 'time resolution' = 4* 11kHz pcm 'time resolution'


You are implying that these two things are directly proportional in a real and limiting sense, whereas, until you get to the absolute limit (many orders of magnitude better than the limits of human hearing, and many orders of magnitude better than anything we expect the system to achieve), the two things are completely independent.


Rather than talking about sample rate and "time resolution", let's talk about the number of hours I'm awake in a day, and the number of bananas I eat that day.

Fundamentally, the number of bananas I eat if I'm awake for 8 hours "should" seem to be 1/2 of that if I'm awake for 16 hours.

What's wrong with this statement? On the face of it, it seems like intuition. However, the reality of the situation is that the number of bananas I eat in a day has nothing to do with how long I'm awake. I might not have any bananas in the house, and I might not go shopping. I might go to the market and buy a big bunch of bananas at a bargain price and eat several of them. I'll probably just have one a day for my lunch (I'm so boring and predictable) no matter how long I'm awake. The simple truth is that, whatever blind intuition may try to tell you, the number of bananas I eat in a day is completely independent of how many hours I'm awake.


Similarly, the time resolution of a PCM system is independent of the sample rate!



Quote
Oddly, here and elsewhere this formula is eclipsed by not insubstantial excursions into test-pattern replications and counter-intuitive textbook quotations.


"oddly"?! What's odd about reality not conforming to blinkered misguided uninformed intuition?!


As for "excursions into test pattern replications" - if you look carefully at my previous post, and the waveforms, I've performed your latest thought experiment already - but with the only type of waveform where it will work - a carefully controlled one!


Finally...
Quote
A plot could be made of their distribution of correlation; perhaps it would tend to a bell curve? For pink noise only? What would the limits of correlation be?


In a suitably controlled version of your experiment (like mine) it wouldn't be a bell curve, it would be a single point! That would certainly be the case with the parameters you propose (44.1>192kHz).

If you try it with real music, it might be a bell curve, or it might be some other distribution (with a cut off corresponding to the point you decide the peaks don't match) - but that's got nothing to do with the temporal resolution of PCM, and everything to do with an experiment where you change something intentionally at random, and then measure how much you've changed it!

Cheers,
David.

What is "time resolution"?

Reply #65
Quote
......... For example, if you have a bass drum and a high hat playing at the same time, most of the waveform excursion will be due to the bass drum (which will survive the 5.5kHz low pass filter), but the exact peak location will also depend on the "wiggles" in the waveform due to the high hat itself. These high frequency "wiggles" will be butchered by a 5.5kHz low pass filter, so the peak will move!

The experiment can't not work: it is just designed to generate the surviving-correlation distribution data, so that we can refer to data about different sample rates' relative and absolute abilities to accurately record the timings of discrete conditions in a source. You restate the practical 'damage' done to the 'time resolution' of isolatable conditions in natural sources (saying the exact peak location will also depend on "wiggles" butchered by a 5.5kHz lowpass). Documenting the average degree of that 'butchery' is the purpose of the experiment, no more, no less.

Quote
The only way you can be absolutely sure that it's a fair experiment, and that the low pass filter isn't significantly moving the peak by removing part of the signal that forms the peak itself, is to ensure that the low pass filter doesn't remove anything - i.e. that the original doesn't contain any frequencies above 5.5kHz, or, to put it another way, that the downsampled version still satisfies Nyquist with reference to the original content.

That is plainly not fair. You are assuming preconditions under which only informationally lossless downsamples can be considered. I think you are unwilling to broaden your examination of 'reality' to a degree which would qualify the objections I have made about reported subsample 'time resolution' capabilities. I have been talking about reality. When we want to know what the timing resolution of a PCM record is, we would fundamentally compare the capabilities of the record to the full potential of an ideal source.
The experiment would compare the capabilities of lower rates with higher rates. Presupposing that all records at the higher rates must be additionally bandlimited like the lower rates is not sensible.

Quote
Quote
Fundamentally, the 11kHz PCM's potential to resolve detail 'should' seem to be 1/4 that of the CD's 44kHz record.

That would go,
44kHz pcm 'time resolution' = 4* 11kHz pcm 'time resolution'


You are implying that these two things are directly proportional in a real and limiting sense, whereas, until you get to the absolute limit (many orders of magnitude better than the limits of human hearing, and many orders of magnitude better than anything we expect the system to achieve), the two things are completely independent.

Now you are talking of psychoacoustics. That is an entirely different matter: "what differences could we hear". I described a process to generate correlation data for the timing of surviving conditions between fully utilised (not extraneously bandpassed) PCM records at different sample rates. The data will "scale" according to simple principles. The time resolution of 1 Hz will be equal to 1/44100th of the time resolution of 44100 Hz; there is no doubt about that relationship.

Quote
Quote
A plot could be made of their distribution of correlation; perhaps it would tend to a bell curve? For pink noise only? What would the limits of correlation be?


In a suitably controlled version of your experiment (like mine) it wouldn't be a bell curve, it would be a single point! That would certainly be the case with the parameters you propose (44.1>192kHz).


That would be the pointless version of the experiment - or rather, one which just examines rounding error.

I have the code mostly written to perform it (a comparison of different sampling rates' time resolution of conditions, with various sources). I will hold off finishing it until it is acknowledged here that it presents a valid investigation (if done accurately enough and naturally, without flattering extraneous bandpassing).

l8r,
cg
no conscience > no custom

What is "time resolution"?

Reply #66
So, in short, you want to run an experiment to see what effect a low pass filter has?

What is "time resolution"?

Reply #67
Quote
So, in short, you want to run an experiment to see what effect a low pass filter has?

Yes.

As particular sample rates do have implicit, unavoidable lowpasses, the process of comparing the capabilities of different sample rates refactors as comparing the effects of different lowpasses. It is almost the same thing, although actually doing the full downsample (as well as its implied lowpass) investigates the attained quality of the full process, so it would be preferable for this charge of actual proof of subsample source/record ambiguity.
no conscience > no custom

What is "time resolution"?

Reply #68
Quote
It is almost the same thing, although actually doing the full downsample (as well as its implied lowpass) investigates the attained quality of the full process, so it would be preferable for this charge of actual proof of subsample source/record ambiguity.
There are two issues at stake here. The first is the question of the audibility of low pass filters. This has been dealt with here and elsewhere at great length and could easily be rigorously tested. Such a test has been done before, but a repeat including filters with non-flat phase responses might offer some new information.

The second is that you seem to doubt whether lowpass->sample->reconstruct can be shown to have the same effect as just the lowpass. Without quantization, the theory says that the two processes are identical. If you wish to question this then a mathematical treatment will probably be necessary before your demonstration is accepted.

What is "time resolution"?

Reply #69
Quote
There are two issues at stake here. The first is the question of the audibility of low pass filters. This has been dealt with here and elsewhere at great length and could easily be rigorously tested. Such a test has been done before, but a repeat including filters with non-flat phase responses might offer some new information.

Audibility of the timing capabilities has never been my interest here, so I have avoided referring to it and have isolated it as extraneous to the empirical measurement or estimation of time resolution whenever it has been brought up.

Quote
The second is that you seem to doubt whether lowpass->sample->reconstruct can be shown to have the same effect as just the lowpass.

That is not my contention; I have recently acknowledged that these processes are potentially identical. Their equivalence does nothing to invalidate the 'coupling correlation between sample rates' test described. Their equivalence only provides an accelerated method of generating the data.

Quote
Without quantization, the theory says that the two processes are identical. If you wish to question this then a mathematical treatment will probably be necessary before your demonstration is accepted.

I haven't questioned the equivalence of:
a high-quality downsample followed by a high-quality upsample
= a high-quality lowpass at the downsample's Nyquist frequency.

I have suggested that the locations of any isolatable conditions in a normally utilised source record (i.e. with energy potentially up to its own Nyquist frequency) can be correlated with the best-fitting locations of the same conditions in a downsampled (or equivalently lowpassed) record, to provide data (with a bias towards best fits) on how accurately the times of conditions can be resolved in an implicitly bandlimited PCM record, against their actual potential placement in source material/records.
no conscience > no custom

What is "time resolution"?

Reply #70
I wish you understood the theory, CG, because without it I can't begin to explain the complete and utter pointlessness of what you're suggesting.

It's a fair enough experiment to ask an undergrad to do in order to practice computer programming and audio processing, but in terms of what it actually tells you about anything, all I can do is just sit here slowly shaking my head!


FWIW, given a random selection of audio signals (real or synthetic), the lower the low pass filter, the further the peaks will move (and, to say almost the same thing differently, the more peaks will completely disappear). The major stumbling block to doing the experiment exactly as you propose will be determining when a peak has moved vs when a peak has vanished - or, to put it another way, tracking the "same" peak between different versions. Various possible attempts to do this "correctly" (and it will be near-impossible) mean your results might be unexpected!


The major problem is that every reasonable definition of time-resolution leads to a proof that PCM audio has no issues with time resolution - so now you've invented a new definition in order to prove the opposite. Your success here will not be down to your experiment (which will certainly show some change), but down to your strange definition of time resolution.

Cheers,
David.

What is "time resolution"?

Reply #71
I also don't see the point in checking the positions of zero crossings or peaks after lowpassing. This won't prove anything except that if you further limit the bandwidth of a signal these points may move, vanish, or appear at places where there previously haven't been any.

Assuming the lowpass filter's impulse response is symmetric, the following is true: if your signal shows a certain symmetry within an interval of the same size as the lowpass filter's impulse response, a certain class of points within that interval will stay at the exact same position.

Example signal:
first two harmonics of a square wave: you'll get 4 peaks within a cycle
after lowpassing (only the fundamental left): 2 peaks within a cycle (it's a sine)
The zero crossings are the same (two within a cycle) because there's a "point symmetry" at those points (rotate the curve around the point by 180° and it'll be the same).
Tell me what we have learned by that, ChiGung.
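
For the record, the example is easy to render numerically (Python; the 100 Hz fundamental at 48 kHz is an arbitrary choice of mine):

Code
import numpy as np

fs, f0 = 48000, 100
n = fs // f0                                  # one cycle
t = (np.arange(n) + 0.5) / fs                 # half-sample offset so no sample
                                              # lands exactly on a zero
x = np.sin(2*np.pi*f0*t) + np.sin(2*np.pi*3*f0*t) / 3   # first two harmonics
y = np.sin(2*np.pi*f0*t)                                # fundamental only

crossings = lambda s: np.where(np.diff(np.sign(s)) != 0)[0]
print(crossings(x), crossings(y))             # identical positions, as claimed,
                                              # while the peaks differ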

In the context of transform coding, time resolution usually refers to the partition of the time/frequency plane that's done by a critically-sampled filterbank, AFAIK. Without any noise-shaping filter tricks this effectively limits how well we can control the quantization-noise distribution in specific time/frequency regions only by choosing scalefactors. However, noise-shaping filters can be, and usually are, used to improve this. (With "ANS" enabled, Musepack can do better at controlling the noise's distribution in the frequency domain than what the filterbank suggests: one subband is 670 Hz wide, yet ANS manages to shape the noise within a subband. With "TNS" enabled, AAC can do better at controlling the noise's distribution in the time domain than what the filterbank suggests.)

What is "time resolution"?

Reply #72
Quote
It's a fair enough experiment to ask an undergrad to do in order to practice computer programming and audio processing, but in terms of what it actually tells you about anything, all I can do is just sit here slowly shaking my head!

Yet you can't explain what is pointless about generating the data described without indistinct reference to some 'theory' which you believe I don't understand.

Or is there an attempt here...
Quote
FWIW, given a random selection of audio signals (real or synthetic), the lower the low pass filter, the further the peaks will move (and, to say almost the same thing differently, the more peaks will completely disappear). The major stumbling block to doing the experiment exactly as you propose will be determining when a peak has moved vs when a peak has vanished - or, to put it another way, tracking the "same" peak between different versions. Various possible attempts to do this "correctly" (and it will be near-impossible) mean your results might be unexpected!

I don't need to be informed of possible surprises. I understand very well what you have written there; it is the very situation that I have described repeatedly in this thread re: the 'time resolution' of PCM. I understand what your preferred theoretical statements about 'time resolution' are, and because I have understood what they are not, I have brought the practical situation to your attention: that the 'spike', the 'radar blip', the 'cymbal peak' etc. cannot be confidently located much beyond the sampling interval, precisely because of the unknown butchery of higher-frequency information in the normally utilised source. The true situation is as you, and as I, have described.

Only you consider the true situation completely and utterly pointless to investigate.
And it seems many have felt patronised by my attempts to explain that it is not utterly pointless to try to correlate actual conditions within record types; that it is in fact how you securely measure such accuracy of correlation: measuring what is practically achievable in real, normal (not extraneously bandpassed) records.

Quote
so now you've invented a new definition in order to prove the opposite. Your success here will not be down to your experiment (which will certainly show some change), but down to your strange definition of time resolution.

My invented definition of time resolution in PCM? The ability to discern the times of conditions in a PCM record, in contrast to the original (natural-resolution) material of which the PCM is merely a record.

I think you guys have been mostly presenting the potential 'time resolution' of related, partly analogous algebraic systems.
no conscience > no custom

What is "time resolution"?

Reply #73
Quote
I also don't see the point in checking the positions of zero crossings or peaks after lowpassing. This won't prove anything except that if you further limit the bandwidth of a signal these points may move, vanish, or appear at places where there previously haven't been any.

It would just document our ability to compare time-pinpointable conditions in a waveform and provide a method of correlating conditions within different PCM records that fully utilise their sample rates.
It would document the confidence with which we can resolve such indicatable conditions in a waveform, in comparison with what would be possible with a natural record of near-infinite 'time resolution'.
It would investigate actual data, rather than the isolated formulae presented here, which indicate 'algebraic resolutions' or the 'time resolution' of presumed lossless conversions.

Quote
Example signal:
first two harmonics of a square wave: you'll get 4 peaks within a cycle
after lowpassing (only the fundamental left): 2 peaks within a cycle (it's a sine)
Tell me what we have learned by that, ChiGung.

Someone may learn that downsampling can damage not only time resolution but also the topology of the waveform.

Quote
In the context of transform coding, time resolution usually refers to the partition of the time/frequency plane that's done by a critically-sampled filterbank, AFAIK. Without any noise-shaping filter tricks this effectively limits how well we can control the quantization-noise distribution in specific time/frequency regions only by choosing scalefactors. However, noise-shaping filters can be, and usually are, used to improve this. (With "ANS" enabled, Musepack can do better at controlling the noise's distribution in the frequency domain than what the filterbank suggests. With "TNS" enabled, AAC can do better at controlling the noise's distribution in the time domain than what the filterbank suggests.)

I don't argue; you have a valid context there, but I have been explicitly writing at length about the context of practical recovery of the source material from provided PCM records. There seems to be difficulty in getting the context of practical source reproduction examined.
no conscience > no custom

What is "time resolution"?

Reply #74
I happened to code a subpixel detector for "x-corners" (checkerboard corners) for the purpose of calibrating cameras. Luckily, it can be shown that the areas around these x-corners show the mentioned symmetries, which enables me to accurately measure the subpixel positions of those x-corners (= saddle points) by analysing an optically-lowpassed-and-sampled image of a checkerboard. Simulations showed that the real bottleneck is actually the sensor noise. Without (simulated) sensor noise I got an accuracy of 1/300 pixel, possibly restricted by a little aliasing left in the image-generation / subpixel-detector code.

The interesting part is: if you capture an image at high resolution with some sensor noise and use a high-quality resampler to reduce the image resolution, you'll get pretty much the same locations for those x-corners, meaning that the subpixel accuracy increased by the same factor by which I downsampled the image. In fact, lowpassing is an integral part of the detector, to minimise the effect the noise has on the estimated x-corner positions. So it's not surprising that the subpixel detector's performance (measured in pixels) was better on the smaller image. By your definition of time resolution (spatial resolution for images), the two images would have the same spatial resolution. But of course the second one is a downsampled one which doesn't look as sharp. So what good is your definition?
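
For readers curious what sub-sample (or sub-pixel) localisation looks like in practice, the usual refinement step is a local fit around the integer maximum. A generic sketch (mine, not the poster's actual detector):

Code
import numpy as np

def refine_peak(y, i):
    """Refine an integer peak index i to sub-sample precision by fitting a
    parabola through the three samples y[i-1], y[i], y[i+1]."""
    a, b, c = y[i - 1], y[i], y[i + 1]
    return i + 0.5 * (a - c) / (a - 2 * b + c)

# e.g. a bandlimited pulse peaking between samples:
t = np.arange(64)
y = np.sinc((t - 31.3) / 4.0)
print(refine_peak(y, int(np.argmax(y))))   # ~31.3: a fraction of a sample recovered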