Topic: The slow and painful death of an opus file

  • Deathcrow
The slow and painful death of an opus file
Heya!

I wanted to find out how Opus behaves over many generations of transcodes/re-encodes of the same 30s sample. So I used this simple bash command to create 9999 generations of encodes:

Code: [Select]
for i in {0001..9999}; do opusdec --force-wav gens/$(printf "%04d" $((10#$i - 1))).opus - | opusenc --vbr --bitrate 128 - gens/$i.opus; done
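One bash footnote on the zero-padded counter: arithmetic expansion treats a number with a leading 0 as octal, so a bare `$i - 1` fails once the loop reaches 0008 ("value too great for base"); prefixing `10#` forces decimal. A self-contained check (the `prev_gen` helper is just for illustration):

```shell
# decrement a zero-padded generation counter safely:
# 10#$1 forces base-10, otherwise 0008 would be parsed as (invalid) octal
prev_gen() { printf "%04d\n" "$((10#$1 - 1))"; }

prev_gen 0010   # → 0009
prev_gen 0008   # → 0007 (would fail as octal without 10#)
```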

To get the obvious out of the way: Generation 9999 sounds like crap! ... but I was also kind of impressed that the song was still recognizable, and it seems to me that the degradation plateaus more and more (fewer and fewer differences between generations). To illustrate, here are the audio differences between generations 0001 and 5000:



... and the differences between 5000 and 9999:



Maybe someone else with some time on their hands wants to continue this for (much) longer, to see how long it takes until it becomes entirely garbled.
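For anyone who wants to reproduce difference signals like the ones above, a rough sketch: decode both generations to WAV, then mix one against the phase-inverted other with sox. This assumes opus-tools and sox are installed, and the `gen` helper is just a hypothetical filename formatter for the gens/ directory:

```shell
# hypothetical helper: path of generation N in the gens/ directory
gen() { printf "gens/%04d.opus" "$((10#$1))"; }

# decode two generations and render their sample-wise difference
# (assumes opus-tools and sox are installed; untested sketch)
# opusdec --force-wav "$(gen 1)"    a.wav
# opusdec --force-wav "$(gen 5000)" b.wav
# sox -m -v 1 a.wav -v -1 b.wav diff_0001_5000.wav
```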

Just a quick technical comparison of generation 0001.opus and 9999.opus via opusinfo:

0001.opus
Code: [Select]
Opus stream 1:
Pre-skip: 312
Playback gain: 0 dB
Channels: 2
Original sample rate: 48000Hz
Packet duration:   20.0ms (max),   20.0ms (avg),   20.0ms (min)
Page duration:   1000.0ms (max),  971.0ms (avg),  100.0ms (min)
Total data length: 563445 bytes (overhead: 1.18%)
Playback length: 0m:30.082s
Average bitrate: 149.8 kb/s, w/o overhead: 148.1 kb/s
Logical stream 1 ended

9999.opus:
Code: [Select]
Opus stream 1:
Pre-skip: 312
Playback gain: 0 dB
Channels: 2
Original sample rate: 48000Hz
Packet duration:   20.0ms (max),   20.0ms (avg),   20.0ms (min)
Page duration:   1000.0ms (max),  971.0ms (avg),  100.0ms (min)
Total data length: 515641 bytes (overhead: 0.921%)
Playback length: 0m:30.082s
Average bitrate: 137.1 kb/s, w/o overhead: 135.9 kb/s
Logical stream 1 ended

The average bitrate has decreased significantly at this point, but is still higher than I would've expected.
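To see whether the bitrate decay really plateaus, one could chart the average bitrate across the whole archive. A sketch, assuming opusinfo from opus-tools and the extracted gens/ directory; the `bitrate_of` parser is my own, matched against the opusinfo output quoted above:

```shell
# pull the kb/s figure out of an opusinfo "Average bitrate" line
bitrate_of() { sed -n 's/^[[:space:]]*Average bitrate: \([0-9.]*\) kb\/s.*/\1/p'; }

# e.g. over the whole archive (commented out; needs opusinfo and the files):
# for f in gens/*.opus; do printf '%s %s\n' "$f" "$(opusinfo "$f" | bitrate_of)"; done
```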

As an aside: when listening to 'older' generations, what's interesting to me is that the biggest 'annoyance' isn't degradation of the original audio but the introduction of additional artifacts on top of it, which seem to get louder and louder (successively stacking on top of each other?) over time.

The spectrals also look - superficially - much better than I would have expected, but there are some very clear 'bald spots' visible in the higher frequency range:



Don't let spectrograms and waveforms fool you, it sounds more terrible than it looks (there's a lesson in here somewhere about not relying on waveforms to judge sound). I'll try to attach generations 0001 and 9999 to this post so you can have a listen for yourself.

For anyone who wants to look at the whole progression, here's the whole archive: gens.tar.xz (watch out, this extracts to ~5GB of files)

In conclusion: Don't transcode lossy files, but if you absolutely have to, Opus seems to do okay in this scenario, especially during the very first few generations (it's hard to tell whether there's any audible difference at all, but I suck at ABXing).

Not sure if this is of interest to anyone else, but I just decided to share...
  • Last Edit: 08 July, 2017, 04:10:09 PM by Deathcrow

  • quadH
Re: The slow and painful death of an opus file
Reply #1
0100 sounds a lot better than I thought it would.

  • Fairy
Re: The slow and painful death of an opus file
Reply #2
Curious how a 256 kbit/s sample would stand the test.

  • jmvalin
  • Developer
Re: The slow and painful death of an opus file
Reply #3
Keep in mind that exact alignment of the audio samples has a huge impact on cascading quality here. For example, if you were to just add a (different) random time offset to every generation, you would see the quality go down much more drastically. This is called "asynchronous tandeming" (vs synchronous tandeming). In your test, Opus frames for one generation are perfectly aligned with the frames for previous generations and that helps a lot. This is true for every codec I know of (except that Vorbis has no way to *maintain* that alignment because of short frames).
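To make the asynchronous-tandeming case concrete, one way to de-align the generations would be to trim a random sub-frame number of samples off the start of each decoded WAV before re-encoding. This is my own sketch, not the OP's procedure: the 0–959 range assumes 20 ms frames at 48 kHz (960 samples), and sox's `trim` effect accepts an `s` suffix for sample counts.

```shell
# random offset of 0-959 samples: less than one 20 ms Opus frame at 48 kHz,
# so frame boundaries land differently in (almost) every generation
rand_offset() { echo "$((RANDOM % 960))"; }

# per generation, between decode and re-encode (assumes sox; untested sketch):
# sox gen.wav shifted.wav trim "$(rand_offset)s"
```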

  • Deathcrow
Re: The slow and painful death of an opus file
Reply #4
Ah, that makes sense, thanks for the insight jmvalin. Since I was mostly interested in the effects of dumb repeated re-encoding of the same file I won't try to make it harder (artificially).

I've actually kept going a little bit... currently at generation 303685... and it's still not completely dead. There are loud, dominant artifacts everywhere now, though.

Maybe my expectations were way off, but I expected this to be a much faster process. I'll have to try something similar with MP3 after this - as some kind of control.

Re: The slow and painful death of an opus file
Reply #5
I just listened to generation 303685. I can't believe how good it sounds, seriously. It still resembles music.

  • bennetng
Re: The slow and painful death of an opus file
Reply #6
I'd like to know the result of transcoding between different formats. For example opus > aac > opus > aac... 10000 times.
vs
opus * 10000 times and aac * 10000 times.
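A sketch of the alternating version with ffmpeg (my own construction, not tested at scale): `ext_for` just picks the container by generation parity, and the `aac`/`libopus` encoder names assume a stock ffmpeg build.

```shell
# alternate formats by generation parity: even -> AAC in .m4a, odd -> Opus
ext_for() { if [ "$((10#$1 % 2))" -eq 0 ]; then echo m4a; else echo opus; fi; }

# one cross-format round trip (assumes ffmpeg with aac and libopus; sketch):
# ffmpeg -i gens/0001.opus -c:a aac     -b:a 128k gens/0002.m4a
# ffmpeg -i gens/0002.m4a  -c:a libopus -b:a 128k gens/0003.opus
```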

  • Deathcrow
Re: The slow and painful death of an opus file
Reply #7
I've actually kept it going for a little while, and after almost 1M generations there are still parts of the song that can be heard.

Continuing onwards from this point seems untenable without buying some cloud-service CPU time (or a new computer). I'm curious whether it's a coincidence that the biggest (in length and loudness) artifact sits right in the middle of the file. I like that the file size has (slightly) started to go up again; it seems Opus started to introduce some new and unique sound effects at some point ;)
  • Last Edit: 15 July, 2017, 10:59:29 AM by Deathcrow

Re: The slow and painful death of an opus file
Reply #8
Can we draw any conclusions from this?

  • saratoga
Re: The slow and painful death of an opus file
Reply #9
Can we draw any conclusions from this?

If you time align samples when repeatedly transcoding them, each pass will probably be transformed and quantized similarly, resulting in only the very slow accumulation of quantization and rounding error.
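The fixed-point behaviour saratoga describes can be mimicked with plain scalar quantization: once a value has been rounded onto the quantizer grid, re-quantizing with the same step changes nothing, so an aligned cascade only loses information on roughly the first pass. A toy model only, not actual Opus quantization:

```shell
# round x to the nearest multiple of step q (toy stand-in for codec quantization)
quant() { awk -v x="$1" -v q="$2" 'BEGIN { printf "%.4f\n", int(x / q + (x >= 0 ? 0.5 : -0.5)) * q }'; }

g1=$(quant 0.3337 0.01)   # first pass loses information: 0.3300
g2=$(quant "$g1" 0.01)    # second pass is a no-op: still 0.3300
echo "$g1 $g2"
```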