
Topic: reduction of quantization noise by taking average of multiple encoded (Read 4751 times)

reduction of quantization noise by taking average of multiple encoded

YouTube hosts a lot of music transcoded with faac, and the quality isn't great. Since popular tracks are usually uploaded by multiple users, one might reduce encoding noise by averaging the waveforms of duplicates (this could be done either in a client app or on an intermediate server). The noise standard deviation should decrease as 1/sqrt(N). Do you guys think this would work?
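A quick numerical sketch of the 1/sqrt(N) claim, under the (big) assumption that each copy's encoding error is independent, zero-mean Gaussian noise; real lossy-codec error need not behave this way:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16                         # number of duplicate uploads
n_samples = 100_000

t = np.arange(n_samples)
signal = np.sin(2 * np.pi * 440 * t / 44100)   # stand-in for the true waveform

# Each "upload" = signal + independent quantization-like noise
noise_std = 0.01
copies = signal + rng.normal(0.0, noise_std, size=(N, n_samples))

avg = copies.mean(axis=0)

single_err = np.std(copies[0] - signal)   # residual noise in one copy
avg_err = np.std(avg - signal)            # residual noise after averaging
print(single_err / avg_err)               # ~ sqrt(N) = 4 for N = 16
```

With independent noise the ratio comes out close to sqrt(16) = 4, which is exactly the 1/sqrt(N) reduction; whether YouTube duplicates actually have independent errors is the open question.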

Reply #1
Quote
YouTube hosts a lot of music transcoded with faac, and the quality isn't great. Since popular tracks are usually uploaded by multiple users, one might reduce encoding noise by averaging the waveforms of duplicates (this could be done either in a client app or on an intermediate server). The noise standard deviation should decrease as 1/sqrt(N). Do you guys think this would work?

If the sources are all identical rips from CD and the encoder/settings are all YouTube's own, the uploads may all be identical. In that case, averaging gains you nothing.

If different sources and/or different encoders are used, you might see some improvement, but any phase error would cause misalignment. In fact, a lossy encoder exploits many perceptual shortcuts to reduce the number of bits (without large audible errors), and those shortcuts could cause problems when averaging like you suggest.

-k

Reply #2
1/sqrt(N) only applies to uncorrelated noise. I suspect that much of the noise added by lossy encoding is not entirely uncorrelated.
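A sketch of why the correlation matters: suppose each copy's error has a component shared across all copies (say, artifacts every encoder produces the same way, a made-up decomposition for illustration) plus an independent per-copy component. Only the independent part shrinks like 1/sqrt(N):

```python
import numpy as np

rng = np.random.default_rng(1)
N, n = 64, 50_000

shared = rng.normal(0.0, 0.01, n)              # error correlated across all copies
independent = rng.normal(0.0, 0.01, (N, n))    # error uncorrelated between copies

errors = shared + independent                  # per-copy total error (broadcasts)
avg_error_std = np.std(errors.mean(axis=0))    # error left after averaging N copies

print(np.std(errors[0]))    # single copy: ~ sqrt(2) * 0.01
print(avg_error_std)        # average: ~ 0.01 -- the floor set by the shared part
```

Even with 64 copies, averaging never gets below the correlated component's level; the 1/sqrt(N) story only eats the independent part.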

I hope this question is of a purely academic nature, because of course the right answer is: if you want the music, buy your own copy.

 

Reply #3
I strongly suspect knutinh is right...  Phase/timing differences between the different lossy encodes will probably cause weird comb-filtering effects.  (This will definitely happen if you mix two different analog recordings of the exact same performance.)
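A toy illustration of the comb-filtering worry, assuming a fixed 50-sample misalignment between two copies of a pure tone: frequencies that land half a cycle out of phase over that delay cancel, while frequencies that land a whole cycle out are untouched.

```python
import numpy as np

fs = 44100
delay = 50                  # hypothetical misalignment between two "uploads"
t = np.arange(10_000)

def avg_with_delayed_copy(freq):
    """Peak level after averaging a tone with a copy of itself delayed by `delay` samples."""
    x = np.sin(2 * np.pi * freq * t / fs)
    mixed = 0.5 * (x[delay:] + x[:-delay])   # average of original and delayed copy
    return np.max(np.abs(mixed))

print(avg_with_delayed_copy(441.0))   # 441 Hz = fs/(2*delay): near-total cancellation
print(avg_with_delayed_copy(882.0))   # 882 Hz: back in phase after the delay, full level
```

That's the comb: deep notches at odd multiples of fs/(2*delay) and full level in between, which is exactly the "weird filtering" you'd hear if the duplicates aren't sample-aligned.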

It's EASY to try!  Just MIX the files and then normalize to bring the volume down.  Do you have an audio editor?  If not, Audacity is FREE!

Mixing is done by addition, and volume adjustment by multiplication (up) or division (down), so mixing followed by scaling down is an averaging process.

Mixing two full-volume files will usually push the level above 0 dB (the digital maximum).  Most audio editors can handle levels above 0 dB internally, but you should bring the level down before saving to prevent clipping/distortion in the saved file.  And of course, re-saving in a lossy format adds another lossy-compression step.

Quote
I hope this question is of a purely academic nature, because of course the right answer is if you want the music then buy your own copy.
Good point...  If you can find more than one copy of a recording, it's probably a copyrighted commercial release.