What is "time resolution"?
Reply #93 – 2006-11-16 17:51:32
The findings could apply to, e.g., [...] or (perhaps fancifully) estimating the edges of a region of different luminosity and/or spectroscopy in an image of space.

See? You usually have a model for the kind of thing you search for. For example: an edge is a curve that joins dark and bright areas in an image. Usually the signal you recorded is distorted in some way (noise, nonlinear transfer, ...), so you have to account for that via preprocessing (oftentimes a lowpass with a Gaussian-like response is used for exactly that, actually).

I understand you are talking about knowledge of the artifacts/conditions searched for, and I recognise that if the artifact is known to span several samples and has a consistent, reliable form through a period or periods of time, its form can be traced across multiple samples, allowing increased accuracy of placement. This is the case I see with calculating phase delays or checkerboard positioning, not so much with uncertain appearances in space, where we can't be sure of, e.g., a nebula's form or an object's edge consistency.

What you can determine through simulation is that detector algorithm A locates feature type B (e.g. an edge) with accuracy C (e.g. +/-0.2 pixels) when given a signal with distortion D (e.g. an SNR of 20 dB). In the cases where the location of features is interesting (like the position of an x-corner for camera calibration, or the peak of a cross-correlation used to detect movement), zero-phase lowpassing (to some extent) hardly affects the accuracy that can be achieved. Most of the change you encounter after lowpassing is due to the noise that has been filtered out, so it can't disturb the estimated location anymore. The parameter directly related to the achievable accuracy is usually a combination of noise power and sampling rate (something like the ratio of the sampling rate to the square root of the noise power).
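The cross-correlation-peak case above can be sketched numerically. The toy below is my own minimal illustration, not from this thread: the Gaussian pulse shape, the noise level, and the three-point parabolic refinement of the peak are all assumptions. It locates a pulse peak to a fraction of a sample, and shows that a symmetric (hence zero-phase) smoothing kernel barely moves the estimate:

```python
import math
import random

def parabolic_peak(y):
    """Refine the integer argmax of y with a three-point parabolic fit,
    returning a fractional (sub-sample) peak position."""
    k = max(range(len(y)), key=lambda i: y[i])
    if k == 0 or k == len(y) - 1:
        return float(k)  # no neighbours to fit against
    a, b, c = y[k - 1], y[k], y[k + 1]
    return k + 0.5 * (a - c) / (a - 2.0 * b + c)

random.seed(0)
true_pos = 100.3  # ground truth deliberately between sample points
pulse = [math.exp(-0.5 * ((n - true_pos) / 2.0) ** 2) for n in range(256)]
noisy = [v + random.gauss(0.0, 0.005) for v in pulse]

# A symmetric FIR kernel has zero phase shift, so smoothing removes
# noise without displacing the feature it is applied to.
kernel = [w / 9.0 for w in (1.0, 2.0, 3.0, 2.0, 1.0)]
half = len(kernel) // 2
smoothed = [
    sum(kernel[j] * noisy[i + j - half]
        for j in range(len(kernel))
        if 0 <= i + j - half < len(noisy))
    for i in range(len(noisy))
]

est_raw = parabolic_peak(noisy)
est_smooth = parabolic_peak(smoothed)
# Both estimates land within a fraction of a sample of true_pos,
# and the zero-phase lowpass shifts the estimate only slightly.
```

The point of the sketch is the one made above: because the pulse's form spans several samples, its position can be recovered with sub-sample accuracy, and a zero-phase lowpass mostly changes the estimate by removing noise rather than by moving the feature.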
But increasing the sampling rate doesn't necessarily imply that the accuracy you get with "detector A" will improve, because you also collect more high-frequency noise that isn't filtered out anymore ...

This is an informative account of a practical situation, but it should not detract from the case presented: in many applications, most relevantly the selection of sample rates for human audio, the practical rates selected (44, 32, 24 kHz, etc.) do differ in bandwidth from potential sources, and that reduction in bandwidth does affect accurate recording of timeable events/conditions/features. Within audio these are not usually suitable for the extra interpolation needed to locate knowable formations (some instruments might be predictable enough to try experimentally, but in practice this isn't done).

Anyhow ... I guess I can say that most of us don't agree with you that a definition of "time resolution" based on how peaks will move around, vanish or appear due to band-limitation makes much sense or is of any practical use.

I believe if that is so, you are all kidding yourselves, because the situation that "peaks will move around, vanish or appear due to band-limitation" is unavoidably present at the audio ranges considered, and when you accept 'subsample accurate' reproduction at those ranges, you are simply deciding to dismiss the highlighted matter: that the PCM record cannot indicate any instantaneous condition's presence in its potential sources with 'subsample accuracy'. Why people should wish to ignore such practical uncertainty of source timing (in R&D discussion), I do not know.

regards

edit: typos & phrasing