Topic: missing samples

missing samples

Hi,
I have a question about missing samples, or non-uniform sampling. I found out that audio samples can be reconstructed if they are oversampled. My question is: can a 20-second oversampled audio file be reconstructed from the first or last 10 s of the same recording, or can missing audio be recovered from whatever you have? I know this has something to do with the Fourier transform, so please, can someone explain to me how it's done, or whether it can be done? I suspect it is possible with simple uniform frequencies, but not with real-life audio recordings.
Thanks

missing samples

Reply #1
What do you mean by oversampled? Do you mean that the highest frequency is much less than fs/2? If you think that a 44.1 kHz CD is oversampled then you are mistaken.

missing samples

Reply #2
If you have to use pre-existing knowledge of a signal to "aid" the reconstruction... you might as well regenerate the signal from scratch!

Apart from that, if you know the signal consists of several pure tones, you can analyze an uncorrupted portion of the signal to extract the various tones, and use them to regenerate the entire signal to some degree of accuracy. I'm not well-versed enough in the material to say exactly how successful you'd be.
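Something like this toy sketch is what I have in mind (Python/NumPy; the two tones, their amplitudes, and the sample rate are all invented for the demo, and the frequencies are chosen to land exactly on FFT bins so there is no leakage to worry about):

```python
import numpy as np

fs = 8000                                   # sample rate (Hz), arbitrary for the demo
t = np.arange(2 * fs) / fs                  # 2 s of "audio"
clean = 0.8 * np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 1000 * t)

good = clean[:fs]                           # pretend only the first second survived
spectrum = np.fft.rfft(good)
freqs = np.fft.rfftfreq(len(good), 1 / fs)

# Keep the two strongest bins -- we "know" the signal is two pure tones.
peaks = np.argsort(np.abs(spectrum))[-2:]
regen = np.zeros_like(t)
for k in peaks:
    amp = 2 * np.abs(spectrum[k]) / len(good)   # bin magnitude -> tone amplitude
    phase = np.angle(spectrum[k])
    regen += amp * np.cos(2 * np.pi * freqs[k] * t + phase)

print(np.max(np.abs(regen - clean)))        # tiny, but only because the tones never change
```

The catch, of course, is that this only "predicts" the missing second because the signal is perfectly stationary.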

missing samples

Reply #3
http://en.wikipedia.org/wiki/Oversampling
Yes, I meant less than fs/2, and no, I don't think CDs are oversampled; I just wanted an answer in the more general sense of signal recovery. I searched the net some more and found that missing-sample recovery is mostly discussed in signal processing, but it can be applied to audio samples too.
I also found a paper about recovering missing samples from oversampled band-limited signals: http://www.ieeta.pt/~pjf/PDF/Ferreira92a.pdf . Besides equations that I can't understand, there is a statement in it which says that a band-limited oversampled signal is completely determined even if an arbitrary finite number of samples is lost. Does this mean that, if we had an oversampled signal with, for example, a sampling rate four times the highest frequency in it, the signal can be restored completely and without loss when a fixed number of samples is missing?
I guess I wanted to know this: if we send an audio file through an unreliable network and receive only half the samples we had at the beginning (a damaged/partial file), can we use the remaining half to restore the whole audio file? It would be impossible if the file is not oversampled: when one half is lost, there is no way to know what that half looked like. But the question of how oversampling can help restore missing samples/audio/data still remains.
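For anyone curious, the trick behind results like the one in that paper can be sketched in a few lines: alternately force the signal to be band-limited and force the known samples back to their true values (the Papoulis-Gerchberg iteration). This is only a rough sketch in Python/NumPy, with a toy signal, bandwidth, and loss pattern I made up, and it only works when the lost samples are scattered rather than one solid block:

```python
import numpy as np

rng = np.random.default_rng(0)
N, band = 1024, 64                          # only bins 1..63 occupied, so heavily oversampled

# Random band-limited test signal.
spec = np.zeros(N // 2 + 1, dtype=complex)
spec[1:band] = rng.standard_normal(band - 1) + 1j * rng.standard_normal(band - 1)
x = np.fft.irfft(spec, N)

known = np.ones(N, dtype=bool)
known[rng.choice(N, size=100, replace=False)] = False   # lose 100 scattered samples

y = np.where(known, x, 0.0)                 # start with the lost samples zeroed
for _ in range(1000):
    Y = np.fft.rfft(y)
    Y[band:] = 0                            # project onto the band-limited signals
    y = np.fft.irfft(Y, N)
    y[known] = x[known]                     # re-impose the samples we do have

print(np.max(np.abs(y - x)) / np.max(np.abs(x)))   # relative error heads toward zero
```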


missing samples

Reply #4
If you are proposing a scheme for error recovery in lossy transmission of audio data, there are much more effective methods than oversampling by n (which increases the amount of data n-fold). For example, look at how error recovery on a CD works.

Edit: Also, you refer to samples having been lost, but isn't it much more likely that some samples will be corrupted instead? And if you don't know which ones were corrupted, then you don't know which ones to use for regeneration and which ones to replace.

missing samples

Reply #5
If some audio signal is 4x oversampled then it is possible to reconstruct it using only every 4th sample. IOW, you can keep only samples #1, 5, 9, 13, ... and throw away samples #2, 3, 4, 6, 7, 8, 10, ...
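A quick numerical check of this (Python with numpy/scipy; the test tones are my own arbitrary choice):

```python
import numpy as np
from scipy.signal import resample

fs, N = 4096, 4096                          # 1 second of "audio"
t = np.arange(N) / fs
# Everything lies below fs/8 = 512 Hz, so keeping 1 sample in 4 still
# satisfies Nyquist (effective rate 1024 Hz > 2 * 512 Hz).
x = np.sin(2 * np.pi * 100 * t) + 0.5 * np.sin(2 * np.pi * 333 * t)

kept = x[::4]                               # samples #1, 5, 9, 13, ... only
rebuilt = resample(kept, N)                 # Fourier (sinc-style) interpolation

print(np.max(np.abs(rebuilt - x)))          # ~1e-13 for this periodic test signal
```

Throw away more than 3 out of every 4, though, and the signal is no longer uniquely recoverable.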

missing samples

Reply #6
But, and if I'm reading the original question correctly this is the key point, the missing samples need to be between samples you have.  If you are in possession of the first 10 s(econds?) that does not help you determine what the last 10 s(econds) are, oversampled or not.
Creature of habit.

missing samples

Reply #7
In other words, if the signal was 4x oversampled, unless you have at least every 4th sample, it is now undersampled and cannot be regenerated unambiguously.
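A tiny illustration of that ambiguity (Python/NumPy; the frequencies are arbitrary): at a 1 kHz sample rate, a 300 Hz cosine and a 700 Hz cosine produce exactly the same sample values, so no algorithm working from the samples alone can tell them apart.

```python
import numpy as np

fs = 1000
n = np.arange(32)
tone_300 = np.cos(2 * np.pi * 300 * n / fs)
tone_700 = np.cos(2 * np.pi * 700 * n / fs)   # 700 Hz aliases onto 300 Hz

print(np.allclose(tone_300, tone_700))        # True
```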

missing samples

Reply #8
Thank you all, those are great answers. So, when you are missing bulk samples, i.e. the last tenth or the last half, it doesn't matter: it is impossible to recover them. But, as @lvqcl pointed out, if you have every second, or fourth, or eighth sample (the power of two rule applies here, I assume), the recovery of all samples is possible without loss. It seems that, basically, it all comes down to sample resolution. I didn't see the undersampled option, thanks for pointing it out, @pdq. And, of course, corrupted samples are another matter altogether.

missing samples

Reply #9
the power of two rule applies here, I assume

No, it is only because lvqcl arbitrarily used an oversampling factor that was a power of two.

If you oversampled by a ratio that isn't so conveniently neat and tidy, like 96/44.1, then you can't discard all but every nth sample (whether n is a power of two or any other integer) and retain the information.

missing samples

Reply #10
If you oversampled by a ratio that isn't so conveniently neat and tidy, like 96/44.1, then you can't discard all but every nth sample (whether n is a power of two or any other integer) and retain the information.

Are you talking about a prime number (divisible only by itself and 1) of samples here? Because in that case you would be right: for every integer divisor there would be a remainder, which would represent a bulk loss of samples and information. I assumed the power of two because of the Fast Fourier Transform algorithm, which requires that condition. I believed that every sample-reconstruction algorithm uses the FFT, but I guess I'm wrong.

I'm also curious about the limit of missing samples. How many samples can you throw away before the recovery of those samples becomes unreliable or incomplete?

missing samples

Reply #11
Simple oversampling puts identical samples between the pre-existing samples. 4x oversampling duplicates sample 1 three times, then comes sample 2, duplicated three times, then sample 3, etc. There is no new information in the oversampled product and no loss of information from missing samples.

Used by itself, that is, not as input to some recovery program, the oversampled signal with samples missing here and there would probably not be especially intelligible, let alone normal-sounding. It just has the potential to be correctly reconstructed by a program that knows what to expect.

missing samples

Reply #12
As was mentioned before there are other (and far better) ways to include redundancy for error correction. The point of simple oversampling is to reduce the complexity of the reconstruction filter.

My example of 44.1 to 96 is upsampling and perhaps not technically oversampling, but either way, we are dealing with adding extra bandwidth beyond what is needed to preserve the frequency content of the original signal. This will not necessarily be helpful (and won't be, except for the most trivial signals) in restoring information lost in the time domain.
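For the concrete 44.1 kHz to 96 kHz case, a polyphase resampler is the usual tool. A minimal sketch (Python/scipy; the input here is just stand-in noise):

```python
import numpy as np
from scipy.signal import resample_poly

x = np.random.randn(44100)                  # 1 s of stand-in audio at 44.1 kHz
y = resample_poly(x, up=320, down=147)      # 96000/44100 reduces to 320/147

print(len(x), len(y))                       # 44100 -> 96000 samples
```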

missing samples

Reply #13
Many of you have ignored the main thrust of his question and have therefore succeeded only in making a confused person even more confused.

Yes, if you have a 4x oversampled signal and only receive every fourth sample you can reconstruct a signal that will sound the same as your original one. So in such a case it might appear oversampling makes all the difference.

(Note: the reconstructed signal won't be precisely the same. The Whittaker-Shannon interpolation formula involves an infinite-length signal, infinite-time sinc functions, and infinite precision. Having finite precision means there's a noise floor, so your original signal won't be precisely band-limited, just band-limited to within the noise floor. But the differences will be inaudible.)
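For reference, the formula in question reconstructs the continuous signal from its samples $x[n]$, taken every $T$ seconds, as

$$x(t) = \sum_{n=-\infty}^{\infty} x[n]\,\operatorname{sinc}\!\left(\frac{t - nT}{T}\right),$$

and that sum over all $n$ is exactly where the infinite length and the infinite-time sinc functions come in.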

But knowing that every fourth sample is correct and other samples weren't received/were garbage is a very extreme situation, nothing like normal audio interpolation/restoration tasks. Normally you have blocks of missing/corrupt audio surrounded by blocks of known-good audio and you're trying to use context and interpolate to fill/fix the gaps. For this kind of task, resampling-type algorithms are no help.

Instead, you have to turn to things like least-squares autoregression or Gabor atom fitting. For normal audio (speech, music, etc.) these algorithms can do an excellent job of interpolating gaps of one ten-thousandth of a second (0.1 ms), a good job of interpolating 1 ms, a somewhat-helpful-but-not-great job of interpolating 10 ms, and have no hope of accomplishing much if you're trying to interpolate a gap of a tenth of a second (100 ms). There's just way too much that can change in the course of a tenth of a second of normal audio. Information can't be created from nothing.
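To make the autoregression idea concrete, here is a bare-bones least-squares sketch (Python/NumPy; the model order, gap length, and toy one-tone input are arbitrary choices of mine, and real restoration tools are considerably more sophisticated):

```python
import numpy as np

def ar_extrapolate(history, order, n_missing):
    """Least-squares AR fit on `history`, then predict `n_missing` samples."""
    # Regression: each sample is a linear combination of the `order` before it.
    rows = np.array([history[i:i + order] for i in range(len(history) - order)])
    targets = history[order:]
    coeffs, *_ = np.linalg.lstsq(rows, targets, rcond=None)

    out = list(history[-order:])
    for _ in range(n_missing):
        out.append(np.dot(coeffs, out[-order:]))   # run the model forward
    return np.array(out[order:])

fs = 8000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 220 * t)             # toy "audio": one stationary tone
gap = slice(4000, 4008)                     # 8 lost samples = 1 ms at 8 kHz

guess = ar_extrapolate(x[:4000], order=32, n_missing=8)
print(np.max(np.abs(guess - x[gap])))       # small, because the input is stationary
```

Make the gap 100 ms (800 samples at this rate) on real audio and the extrapolation drifts hopelessly, which is exactly the time-quality tradeoff above.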

Note that, as I've stated it, the preceding time-quality tradeoff doesn't depend on sampling rate. From the papers I've seen, as long as the rate was high enough to capture the frequencies you want to reconstruct, any oversampling has only a minimal effect on your results.

If you had an algorithm which could reconstruct the last ten seconds of a twenty-second recording based on the first ten, you could use it to predict the future. With such an algorithm you could figure out what number would be called at the roulette wheel in time to place an absolutely sure bet.

missing samples

Reply #14
1. Establish a parametric model for the waveform generation
2. Have enough samples to setup the parameters
3. Use this model to estimate any unknown samples (a toy sketch follows below)
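A toy instance of those three steps (Python/scipy; the single-sine "model" is deliberately simplistic, and every number below is invented for the demo):

```python
import numpy as np
from scipy.optimize import curve_fit

def model(t, amp, freq, phase):             # step 1: a (very naive) parametric model
    return amp * np.sin(2 * np.pi * freq * t + phase)

fs = 8000
t = np.arange(fs) / fs
x = model(t, 0.7, 310.0, 1.2)               # "true" signal; parameters unknown to the fitter

have = t < 0.5                              # step 2: the half we actually received
f0 = np.fft.rfftfreq(have.sum(), 1 / fs)[np.argmax(np.abs(np.fft.rfft(x[have])))]
params, _ = curve_fit(model, t[have], x[have], p0=[1.0, f0, 0.0])

estimate = model(t[~have], *params)         # step 3: fill in the lost half
print(np.max(np.abs(estimate - x[~have])))  # tiny, but only because the model is exact
```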

Problem is, one usually wants to use such an algorithm on a wide range of waveforms (the model must be general). If the loss is e.g. 100 ms, there will usually be enough innovation inside the loss that no model/training can fill the gap convincingly.

Now, if your sound file consists of low-frequency sine waves that are stationary over a timespan of tens of seconds, you may be able to drop hundreds of milliseconds and fill in the missing data convincingly. Most of us don't listen to things like that.

Some pop music essentially consists of a "MIDI file" containing note-ons, velocities, etc., and a fixed sound bank of samples that are looped/modified. Throw in some vocals and sound effects with a few tunable parameters. Most compositions follow "rules", which means that missing notes can (to some degree) be predicted from the context and surrounding notes. If the snare drum hits on "2" and "4" throughout most of the song, chances are that it did the same in some randomly selected lost beat. Even in music that tries to break with patterns, such as twelve-tone compositions by Schoenberg, the consistent attempt to avoid tonal emphasis means that the uniform distribution of pitches introduces a pattern of its own.

What I am saying is that a lot of music is very far from random, unpredictable white noise; it is highly structured. The problem is that finding this structure automatically and robustly is usually beyond our capabilities. When the Pandora project wanted a database of high-level contextual descriptions of music, they used a bunch of people to do the work.

-k

missing samples

Reply #15
Plus I think the question was more general, referring to any given signal and the possible processing thereof, rather than to concepts specific to music and its interpretation; as you have said, that is a lot more complex and even subjective, not as simple as interpolation and other types of filters that work on the samples only without any care for how they do/will sound.

If you had an algorithm which could reconstruct the last ten seconds of a twenty-second recording based on the first ten, you could use it to predict the future. With such an algorithm you could figure out what number would be called at the roulette wheel in time to place an absolutely sure bet.
Yeah, I just realised that was (or seems to have been) dave1’s question; before, I skim-read and presumed (generously?) that he meant something else.