
WavPack decoding complexity vs FLAC

Hello, everyone.

I'm a WavPack user and I love its hybrid feature. Thanks for creating such a useful piece of software. I just have a question out of curiosity.

I tried comparing the compression efficiency of WavPack versus FLAC, and while my tests were of course not scientific, I was able to get slightly more compression on WavPack (-x6 -hh) compared to FLAC (-8), although the advantage is almost insignificant on normal CD-quality sources. A significant WavPack advantage was found on a 96kHz 24-bit mono source (some emulated chiptune music) but those are of course uncommon audio files.

What made me curious was that WavPack at its absolute highest compression settings (-x6 -hh) took extreme effort to encode, while FLAC (-8) seemed to compress the files in just seconds (I have an Intel i9-9980HK). Personally, I don't mind the encoding time, since I find WavPack's hybrid lossy encoding very useful for syncing songs to my phone, but I was curious about how difficult WavPack (-hh) files are to decode compared to FLAC (where, as I understand it, the compression level does not affect decoding speed). Can someone please tell me, or explain how I can check for myself? (I use Linux, if that matters.)
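One rough way to check for yourself on Linux is to just time the reference decoders with the output thrown away. A minimal sketch (the file names are placeholders, it assumes `flac` and `wvunpack` are on your PATH, and the exact decoder flags may differ between versions, so double-check against `--help`):

```python
# Rough decode-speed check: time a decode command, discarding its output.
import subprocess
import time

def time_decode(cmd):
    """Run a command with stdout/stderr discarded; return wall-clock seconds."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True,
                   stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return time.perf_counter() - start

# Hypothetical files -- substitute your own test tracks, e.g.:
#   time_decode(["flac", "-d", "-f", "-o", "/dev/null", "song.flac"])
#   time_decode(["wvunpack", "-y", "song.wv"])   # writes song.wav next to it
```

Run each command a few times and take the best figure, since the first run pays for disk caching.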

(This one is unrelated and I don't mind if you ignore this part--I have a general idea of how WavPack compresses samples, but how do the -h and -hh flags change the compression method? I know I can just read the source code, but I tried and I don't know which file or line to check... Can anyone please explain it simply, or point me to a file and line number?)

Thank you very much in advance!

Re: WavPack decoding complexity vs FLAC

Reply #1
If you feel adventurous: -hh corresponds to CONFIG_VERY_HIGH_FLAG, and -h to CONFIG_HIGH_FLAG. Grepping for those shows you where they're used, which you can then investigate to see exactly what is enabled/disabled (grep is a good first step to picking apart unfamiliar code):
Code: [Select]
grep -r CONFIG_VERY_HIGH ./
./src/common_utils.c:        if (wpc->config.flags & (CONFIG_HIGH_FLAG | CONFIG_VERY_HIGH_FLAG)) {
./src/common_utils.c:            if ((wpc->config.flags & CONFIG_VERY_HIGH_FLAG) ||
./src/pack.c:    if (wpc->config.flags & CONFIG_VERY_HIGH_FLAG) {
./src/extra2.c:    if (wpc->config.flags & (CONFIG_HIGH_FLAG | CONFIG_VERY_HIGH_FLAG))
./src/pack_utils.c:    if (config->flags & CONFIG_VERY_HIGH_FLAG)
./src/extra1.c:    if (wpc->config.flags & (CONFIG_HIGH_FLAG | CONFIG_VERY_HIGH_FLAG))
./include/wavpack.h:#define CONFIG_VERY_HIGH_FLAG   0x1000  // very high
./cli/wvtest.c:        res = run_test_extra_modes (wpconfig_flags | CONFIG_VERY_HIGH_FLAG, test_flags, bits, num_chans, num_seconds);
./cli/wvtest.c:    else if (wpconfig_flags & CONFIG_VERY_HIGH_FLAG)
./cli/wavpack.c:                        config.flags &= ~(CONFIG_HIGH_FLAG | CONFIG_VERY_HIGH_FLAG);
./cli/wavpack.c:                        config.flags &= ~(CONFIG_FAST_FLAG | CONFIG_HIGH_FLAG | CONFIG_VERY_HIGH_FLAG);
./cli/wavpack.c:                            config.flags |= CONFIG_VERY_HIGH_FLAG;
./cli/wavpack.c:    else if (config->flags & CONFIG_VERY_HIGH_FLAG)
./audition/cool_wv4.c:        config.flags |= (CONFIG_VERY_HIGH_FLAG | CONFIG_HIGH_FLAG);
Code: [Select]
grep -r CONFIG_HIGH ./
./src/common_utils.c:        if (wpc->config.flags & (CONFIG_HIGH_FLAG | CONFIG_VERY_HIGH_FLAG)) {
./src/pack.c:    else if (wpc->config.flags & CONFIG_HIGH_FLAG) {
./src/extra2.c:    if (wpc->config.flags & (CONFIG_HIGH_FLAG | CONFIG_VERY_HIGH_FLAG))
./src/pack_dsd.c:    if (wpc->config.flags & CONFIG_HIGH_FLAG) {
./src/pack_utils.c:// o CONFIG_HIGH_FLAG           "high" compression mode
./src/pack_utils.c:        config->flags &= (CONFIG_HIGH_FLAG | CONFIG_MD5_CHECKSUM | CONFIG_PAIR_UNDEF_CHANS);
./src/pack_utils.c:        wpc->config.flags |= CONFIG_HIGH_FLAG;
./src/pack_utils.c:        if (wpc->config.flags & CONFIG_HIGH_FLAG)
./src/pack_utils.c:        int divisor = (wpc->config.flags & CONFIG_HIGH_FLAG) ? 2 : 4;
./src/unpack3_open.c:        wpc->config.flags |= CONFIG_HIGH_FLAG;
./src/extra1.c:    if (wpc->config.flags & (CONFIG_HIGH_FLAG | CONFIG_VERY_HIGH_FLAG))
./include/wavpack.h:#define CONFIG_HIGH_FLAG        0x800   // high quality mode
./cli/wvtest.c:        res = run_test_extra_modes (wpconfig_flags | CONFIG_HIGH_FLAG, test_flags, bits, num_chans, num_seconds);
./cli/wvtest.c:    else if (wpconfig_flags & CONFIG_HIGH_FLAG)
./cli/wavpack.c:                        config.flags &= ~(CONFIG_HIGH_FLAG | CONFIG_VERY_HIGH_FLAG);
./cli/wavpack.c:                        config.flags &= ~(CONFIG_FAST_FLAG | CONFIG_HIGH_FLAG | CONFIG_VERY_HIGH_FLAG);
./cli/wavpack.c:                        if (config.flags & CONFIG_HIGH_FLAG)
./cli/wavpack.c:                            config.flags |= CONFIG_HIGH_FLAG;
./cli/wavpack.c:    else if (config->flags & CONFIG_HIGH_FLAG)
./audition/cool_wv4.c:        config.flags |= (CONFIG_VERY_HIGH_FLAG | CONFIG_HIGH_FLAG);
./audition/cool_wv4.c:        config.flags |= CONFIG_HIGH_FLAG;

flac -8 is far from the slowest encode setting you can use for FLAC. -8p seems to be the slowest in common use with non-negligible efficiency gains, but there are many settings an expert can tinker with, and you can make the encoder basically as slow as you like.

Here are some encode/decode stats, but it looks like they only go up to x4hh; just plonk x6hh somewhere to the right of that for an estimate: https://hydrogenaud.io/index.php/topic,122508.0.html

Re: WavPack decoding complexity vs FLAC

Reply #2
Dig down here: http://audiograaf.nl/losslesstest/ .

There is hardly anything that decodes as fast as FLAC - no lossless audio compressor, at least - and that is part of what FLAC was designed for. Of course, twenty more years of computing power has made both megabytes and CPU cycles much cheaper.

Things to take note of:

* FLAC's "-8" is by no means the slowest setting. It is called "--best" because, well, "best among what is useful". Indeed, reference FLAC gives you access to "statistical filters" that can improve compression if you are willing to pay for it. Chiptune? Try e.g.
flac --lax -8per15 -A "flattop;gauss(7e-2);tukey(7e-1);subdivide_tukey(7)" -l 32
and watch the paint dry. It will be slow, and on chiptune a hunch would be that it is the "r" number that improves things this much. If you think this is not slow enough, I can offer more "-A" functions to keep your CPU hot.

* The above command will not be FLAC "subset". If you are unlucky, in-car units and other hardware players could very well choke on the files you generate. They will also be heavier to decode (the "prediction" for each sample will require a history of 32 samples rather than just 12). Without -l 32 it would be much lighter. Without -l 32 and with "r8" in place of "r15" it would be within subset - and would still be slow to encode; the "-pe" combines two brute-force elements and is typically not worth it. But with "-r6" and no "-l" switch, decoding will still be about like -7, and that is closer to -0 than to any other lossless codec.

* WavPack's -x settings "do not increase decoding complexity" (well, actually, for nitty-gritty reasons they can even improve decoding speeds). They also employ a filter bank for improved compression. -x1 to -x3 are pre-defined filters, coded in based on knowledge of how music signals typically behave (for CDDA!). -x4 to -x6 will "learn the filtering" from scratch, without a prior notion of what the music is like, and that is why -x4 can behave so differently from -x3, especially on high resolution. Here are some high-resolution numbers: https://hydrogenaud.io/index.php/topic,120454.msg1004848.html#msg1004848 . You see that -x is fairly cheap, and -x4 is much better value for money than -x3; then -x5 and -x6 are expensive.

Both reference encoders can re-encode in place, so if you are just ripping CDs at first you can use a fast setting, and then run re-encodes overnight / over the weekend if that is what you want.
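To put a number on that "history of 32 rather than just 12" remark: an order-k predictor costs roughly k multiply-adds per decoded sample, which is why -l 32 files are heavier to decode. Here is a toy integer-LPC reconstruction sketch (made-up coefficients and scaling, not FLAC's actual code):

```python
# Toy LPC decode: each output sample needs `order` multiply-adds, so the
# per-sample decode cost grows with the prediction order (flac's -l switch).
def lpc_decode(residual, coefs, shift):
    order = len(coefs)
    out = list(residual[:order])              # warm-up samples stored verbatim
    for n in range(order, len(residual)):
        # predict from the last `order` already-decoded samples
        pred = sum(c * out[n - 1 - k] for k, c in enumerate(coefs)) >> shift
        out.append(residual[n] + pred)        # add the residual back
    return out

# With a single coefficient of 1.0 (scaled by 2**4) this degenerates to
# plain delta decoding: lpc_decode([5, 1, 1], [16], 4) -> [5, 6, 7]
```

The loop body is the whole per-sample cost; doubling `order` roughly doubles it, whereas most of flac's other encode-side knobs leave this loop untouched.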

Re: WavPack decoding complexity vs FLAC

Reply #3
Personally, I don't mind the encoding time, since I find WavPack's hybrid lossy encoding very useful for syncing songs to my phone, but I was curious about how difficult WavPack (-hh) files are to decode compared to FLAC (where, as I understand it, the compression level does not affect decoding speed).

Looks like someone benchmarked some of the higher FLAC and WavPack settings on a Sandisk MP3 player:

https://www.rockbox.org/wiki/Main/CodecPerformanceComparison#NXP_i.MX233_w_47_454MHz_PCLK_40ARM926EJ_45S_41

FLAC is about 10-13 MHz on a slow ARM CPU for real-time decoding while wavpack highx6 was up to 50 MHz.  FLAC is several times faster, but both are stupidly fast given that modern devices have multiple CPUs running at GHz frequencies. 

Re: WavPack decoding complexity vs FLAC

Reply #4
Here is an example where flac beats wavpack hhx6 on a 16-bit 96kHz dual mono chiptune without using --lax. With --lax it can be even smaller. Of course, it may not be the case with other chiptune files, 24-bit or not.

Re: WavPack decoding complexity vs FLAC

Reply #5
Here is an example where flac beats wavpack hhx6 on a 16-bit 96kHz dual mono chiptune without using --lax. With --lax it can be even smaller. Of course, it may not be the case with other chiptune files, 24-bit or not.

And that's with ~8KB of padding.  The smallest I can make your example while still subset (if you consider variable blocksize to be subset) is this:


Re: WavPack decoding complexity vs FLAC

Reply #7
To answer the OP’s specific question, the difference between the WavPack modes is the number of decorrelation passes made. The WavPack decorrelator works by making successive passes over the audio samples using different filters. The number of passes varies from 2 in “fast” mode to 16 in “very high” mode (the default and “high” mode use 5 and 10, respectively). Obviously this extra work costs CPU cycles for both encode and decode (the passes are done in reverse order during decode to “undo” the decorrelation).

As was said, the “extra” modes just change the filters used, not the number of passes, so that cost is only incurred during encode (and can actually save time during decode because some passes might be skipped).

Another thing is that these levels are somewhat arbitrary. It would be possible to create a mode with 3 passes or 7 passes that would produce intermediate results, or even employ a different number of passes for every frame. This is because each frame contains the exact filter configuration (number and type of filters and their initial state) used in that frame.
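bryant's description can be sketched in a few lines. This is a toy using plain delta filters at different lags, not WavPack's actual decorrelators, but it shows the pass structure: more passes means more work per sample, and decoding must undo the passes in reverse order:

```python
# Toy "decorrelation passes": each pass subtracts a prediction at some lag.
# Not WavPack's real filters -- just the same forward/reverse pass structure.

def encode_pass(samples, lag):
    # residual[n] = samples[n] - samples[n - lag]; first `lag` samples verbatim
    return [s if n < lag else s - samples[n - lag] for n, s in enumerate(samples)]

def decode_pass(residual, lag):
    out = list(residual)
    for n in range(lag, len(out)):
        out[n] += out[n - lag]        # rebuild from already-decoded history
    return out

def encode(samples, lags):            # e.g. 2 lags ~ "fast", more ~ "very high"
    for lag in lags:
        samples = encode_pass(samples, lag)
    return samples

def decode(residual, lags):
    for lag in reversed(lags):        # undo the passes in reverse order
        residual = decode_pass(residual, lag)
    return residual
```

Round-tripping any integer sequence through `decode(encode(x, lags), lags)` returns it exactly, whatever the number of passes, which is also why a mode with 3 or 7 passes would pose no compatibility problem.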

Re: WavPack decoding complexity vs FLAC

Reply #8
Looks like someone benchmarked some of the higher FLAC and WavPack settings on a Sandisk MP3 player:

https://www.rockbox.org/wiki/Main/CodecPerformanceComparison#NXP_i.MX233_w_47_454MHz_PCLK_40ARM926EJ_45S_41
One thing kind of strange about those results is that 4-bit ADPCM is actually slower than FLAC, and twice as slow as "lossyFLAC".

That's a little crazy because 4-bit ADPCM essentially requires no work to decode, so there must be some other inefficiency there, like too small a read buffer or something.

Re: WavPack decoding complexity vs FLAC

Reply #9
As was said, the “extra” modes just change the filters used, not the number of passes, so that cost is only incurred during encode (and can actually save time during decode because some passes might be skipped).
I noticed this too when running benchmarks, but it is not explicitly mentioned in the manual; thanks for the explanation. I may consider using x4 in future encodes. As for flac, -pe is just too slow for my taste; variable block size support in future Xiph releases would be great.

Re: WavPack decoding complexity vs FLAC

Reply #10
* FLAC's "-8" is by no means the slowest setting. It is called "--best" because, well, "best among what is useful".
Wow, thanks, I did not know that! (Admittedly I could have read the FLAC manual but for some reason, I have only ever read the WavPack manual that well).
That configuration is indeed extremely boring to encode--my laptop took about what, 2 hours to finish encoding that? The funny thing is that, even after waiting that long, the FLAC encode was only 600 kilobytes smaller than the -hhx6 WavPack encode, which took only several seconds. -hx6 is a bit faster than -hhx6 and the compression rate is virtually the same. Now I have more respect for WavPack's encode times.
* The above command will not be FLAC "subset".
I tried to make a more "subset-conformant" FLAC encode (-8per8) but that lost to WavPack again. This is really interesting stuff.
* WavPack's -x settings "do not increase decoding complexity"
Yes, I was referring to the -h and -hh settings, which affect decode performance according to the manual. But I did not know the details, so thanks for explaining them!
But reading saratoga's reply--FLAC is really light on decode resources, WavPack only slightly heavier, and I didn't realize that AAC is that hard to decode--I think WavPack's -h and -hh, along with other codecs such as Vorbis and Opus, are fine as they are.
Both reference encoders can re-encode in place, so if you are just ripping CDs at first you can use a fast setting, and then run re-encodes overnight / over the weekend if that is what you want.
I am using -x6b24 -cc for my music collection, but I'm considering reencoding to -hx6b24...

Thank you very much for the reply!

Re: WavPack decoding complexity vs FLAC

Reply #11
FLAC is about 10-13 MHz on a slow ARM CPU for real-time decoding while wavpack highx6 was up to 50 MHz.  FLAC is several times faster, but both are stupidly fast given that modern devices have multiple CPUs running at GHz frequencies. 
Especially since I didn't know that AAC requires ~200MHz just to decode, this data puts things into perspective. Thank you for sharing this!

Re: WavPack decoding complexity vs FLAC

Reply #12
To answer the OP’s specific question, the difference between the WavPack modes is the number of decorrelation passes made. The WavPack decorrelator works by making successive passes over the audio samples using different filters. The number of passes varies from 2 in “fast” mode to 16 in “very high” mode (the default and “high” mode use 5 and 10, respectively). Obviously this extra work costs CPU cycles for both encode and decode (the passes are done in reverse order during decode to “undo” the decorrelation).
Thank you very much for the explanation! I will have to read the source code for the technical details. I like learning about audio compression technologies and this is really interesting.

Re: WavPack decoding complexity vs FLAC

Reply #13
Especially since I didn't know that AAC requires ~200MHz just to decode, this data puts things into perspective. Thank you for sharing this!
Strange again. Here is what I got with this two-disc album encoded into a single file.
https://www.discogs.com/release/2452858-Andrew-Lloyd-Webber-The-Phantom-Of-The-Opera

Code: [Select]
System:
  CPU: 12th Gen Intel(R) Core(TM) i3-12100, features: MMX SSE SSE2 SSE3 SSE4.1 SSE4.2
  App: foobar2000 v1.6.16
Settings:
  High priority: no
  Buffer entire file into memory: yes
  Warm-up: no
  Passes: 1
  Threads: 1
  Postprocessing: none

WavPack fast x4 (713kbps):
Code: [Select]
Opening time: 0:00.000
Decoding time: 0:11.177
539.596x realtime

AAC-LC (195kbps):
Code: [Select]
Opening time: 0:00.001
Decoding time: 0:02.763
2181.658x realtime

Opus (195kbps)
Code: [Select]
Opening time: 0:00.000
Decoding time: 0:12.665
476.231x realtime

flac -8 (684kbps)
Code: [Select]
Opening time: 0:00.000
Decoding time: 0:04.554
1324.298x realtime

Not just a processor architecture thing I suppose?

Re: WavPack decoding complexity vs FLAC

Reply #14
my laptop took about what, 2 hours to finish encoding that?
I stacked up a ton of slowdowns; it was not at all intended to be useful. You can probably come within some kilobytes of it in a fraction of the time.
Here is roughly how FLAC does these things: except for the -l switch for the maximum prediction order, the variations in "complexity" (i.e. decoding time) are peanuts. FLAC will instead try several "simple" encodes and pick the best.
Unlike WavPack's decorrelation passes as explained by bryant, where every pass squeezes out more by decorrelating (provided there are more patterns to be found - white noise won't have any, of course!), FLAC will try a different filter instead and choose it if it improves matters.
(Reference) FLAC does that along several dimensions - or can do so:

* Stereo decorrelation. Took me years to realize what FLAC (the format) actually does, but it is simple and smart for a brute-force: several codecs (including WavPack I think?) will run dual mono, and mid+side joint stereo and pick the best. But if you have to encode dual mono, you have to encode left and right separately; and every now and then, one of the channels is more compressible than the mid. FLAC can then store side + smallest {left, mid, right}. FLAC can also force dual-mono, and there is also a "smart faster" -M switch that ... uhm, that nobody uses I guess.
For multi-channel, the codecs do quite different things. FLAC MUST use dual mono for anything beyond stereo (a big surprise to me, given that FLAC isn't bad at multi-channel; I thought it would still decorrelate the main channels, because why not). WavPack can group channels together as pairs. TAK uses some (apparently smart!) heuristics to get a good correlation matrix, which explains why TAK absolutely slays the competition at multichannel (though it does not support more than 6 channels). It costs time - but not decoding time!
* Windowing function. The weighting of the signal. You saw those "four" functions I gave? There are more; "subdivide_tukey(7)" removes fractions of the signal (to get rid of statistical outliers), and it makes several functions (I think seven that each remove 1/7th, seven that each include 1/7th, and then one for the whole thing, but I could be wrong). But it recycles calculations so much that it is not at all the same time consumption as calculating a ton of them.
FLAC doesn't brute-force this all the way down to the encode. It actually has a guesstimation procedure that picks the "hopefully best" without encoding the residuals (I think ...?) and then does that one properly.
* Rice partitioning. Once the max has been set, FLAC will calculate using the finest partition and then see if it saves space to merge.
* "-p" and "-e" tell the encoder to brute-force the model along two different dimensions. -p is the most interesting for CDDA: it is supposed to trade off predictor precision (number of bits used) against the goodness of the prediction. In principle, savings should be very small. In practice, the successive roundoffs may by chance happen to yield a better-fitting predictor (in addition to saving space). This is because the least-squares optimization used to calculate the predictor vector is not "optimal" in terms of size; the least-squares minimum is strongly associated with the "size minimum", but not completely so. More here: https://hydrogenaud.io/index.php/topic,120158.0.html
-p does not increase complexity. It just changes the predictor vector to something with fewer bits.

ktf will probably have to correct several of my misunderstandings (again!) but this is at least some of the essence of how FLAC, instead of layering up complexity, tries different shots of "equal complexity" and picks the best. Prediction order (actual order, not max order!) might affect complexity. And FLAC can also give up on a noisy signal and store it uncompressed.
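The stereo-decorrelation search from the first bullet can be illustrated with a toy cost model. Here "cost" is just the magnitude of a first-order residual standing in for real bit counting, so the numbers are illustrative only; the four mode names follow FLAC's actual choices:

```python
# Toy FLAC-style stereo-mode search: derive mid/side, then pick the cheapest
# of {L+R, L+S, R+S, M+S} under a crude size proxy (not real bit counting).

def cost(ch):
    # proxy for encoded size: magnitude of a first-order residual
    return sum(abs(a - b) for a, b in zip(ch[1:], ch)) + abs(ch[0])

def pick_stereo_mode(left, right):
    mid  = [(l + r) >> 1 for l, r in zip(left, right)]   # FLAC stores (l+r)>>1
    side = [l - r for l, r in zip(left, right)]          # side carries the lost bit
    modes = {
        "independent": cost(left) + cost(right),
        "left/side":   cost(left) + cost(side),
        "right/side":  cost(right) + cost(side),
        "mid/side":    cost(mid) + cost(side),
    }
    return min(modes, key=modes.get), modes
```

On correlated channels the side signal is tiny, so any side-based mode crushes dual mono; and occasionally left or right alone is cheaper than the mid, which is exactly the "side + smallest {left, mid, right}" trick described above.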


the -hhx6 WavPack encode, which took only several seconds. -hx6 is a bit faster than -hhx6 and the compression rate is virtually the same. Now I have more respect for WavPack's encode times.
Well to nuance that:

* I picked some nonsensically slow FLAC option set. Don't judge a codec by its most stupid setting. (That is actually one reason for FLAC to tout its presets - you often see people asking "I want the absolute maximum compression, what is it?" and the answer is "No, you don't want that!")
That is also a reason why TAK dropped the -p5 that was around in the test version. Knowing that TAK would be judged for the performance of the most extreme setting, Thomas Becker chose to only include the most extreme "reasonably fast" setting.
Also WavPack has included a quite extreme one in -x6. -hx4 or -hhx4 offer much better value for money.

ffmpeg's WavPack encoder (don't use it, it only covers file format version 4 and has some quirks to it) has -compression_level 0 to 8. Try -compression_level 8 and compare to wavpack -hhx6. It will be slightly smaller (for one thing, it lacks the WavPack 5 block checksums), but the time taken is not worth it.

WavPack is more complex, and it is only to be expected that this pays off in terms of size. Note that WavPack is an older algorithm than the others (though the file format was revised later). For a nineties construction it is damn good, and the "-x" switches showed that it could be improved upon without breaking compatibility.


I am using -x6b24 -cc for my music collection, but I'm considering reencoding to -hx6b24...
As a FLAC user I would always want the "-m". Even if you use WavPack 5 with the block-level checksums (which can be verified without decoding using wvunpack -vv), I would use the MD5 even if only to identify the files.
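The "MD5 even if only to identify the files" trick boils down to hashing the decoded PCM rather than the file itself, so the same audio matches regardless of container, compression settings, or tags. A minimal sketch (the decode-to-raw step is assumed to happen elsewhere, e.g. via the reference decoders):

```python
# Identify audio by a hash of its raw decoded PCM, not of the compressed file:
# re-encodes and tag edits change the file bytes but not this fingerprint.
import hashlib

def audio_md5(pcm_bytes):
    """MD5 hex digest of raw PCM data (what wavpack -m stores for the stream)."""
    return hashlib.md5(pcm_bytes).hexdigest()

# Two rips are the same audio if their decoded PCM hashes agree, e.g.:
#   audio_md5(decode("rip1.wv")) == audio_md5(decode("rip2.flac"))
# where decode() is a hypothetical helper that returns the raw sample bytes.
```

Note the caveat above about lead-in/lead-out: two rips of the same disc only hash identically if they decode to bit-identical PCM over the same range.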

Re: WavPack decoding complexity vs FLAC

Reply #15
Not just a processor architecture thing I suppose?
Partly. You're comparing a processor that has vector floating-point units (SSE, AVX), which can do 8 floating-point operations per instruction, with a CPU that doesn't even seem to have any floating-point hardware. It might very well be that a fixed-point implementation of an AAC decoder is much slower, while FLAC (and I presume WavPack too) is fixed point to begin with.

Re: WavPack decoding complexity vs FLAC

Reply #16
Here is an example where flac beats wavpack hhx6 on a 16-bit 96kHz dual mono chiptune without using --lax. With --lax it can be even smaller. Of course, it may not be the case with other chiptune files, 24-bit or not.

I want to play that game too  :))
1 868 458 bytes within subset (yours: 2 001 985)
Both figures after applying metaflac --dont-use-padding --remove-all to get everything equal.

(1 812 709 outside subset)


Edit: Chiptune signals are very peculiar indeed, and apparently weren't high on codec developers' radars:
1 900 633 for OptimFROG at --preset max --md5
2 030 216 for WavPack -hhmx6
2 187 250 for TAK -p4m -md5
2 499 712 for Monkey's "Extra High" (beating the 2 615 760 "Insane" which is also bigger than "High")

Re: WavPack decoding complexity vs FLAC

Reply #17
Especially since I didn't know that AAC requires ~200MHz just to decode
Strange again. Here is what I got with this two-disc album encoded into a single file.
AAC-LC
Not just a processor architecture thing I suppose?
Partly. You're comparing a processor that has vector floating-point units (SSE, AVX), which can do 8 floating-point operations per instruction, with a CPU that doesn't even seem to have any floating-point hardware. It might very well be that a fixed-point implementation of an AAC decoder is much slower, while FLAC (and I presume WavPack too) is fixed point to begin with.
And it is ~200 MHz in the aforementioned Rockbox test for AAC-HE, not LC. For LC it is ~70 MHz.


Re: WavPack decoding complexity vs FLAC

Reply #18
So here is HE-AAC SBR (61kbps), same test conditions as previous post:
Code: [Select]
Opening time: 0:00.001
Decoding time: 0:04.640
1299.489x realtime

Re: WavPack decoding complexity vs FLAC

Reply #19
Here is roughly how FLAC does these things: except for the -l switch for the maximum prediction order, the variations in "complexity" (i.e. decoding time) are peanuts. FLAC will instead try several "simple" encodes and pick the best.
Unlike WavPack's decorrelation passes as explained by bryant, where every pass squeezes out more by decorrelating (provided there are more patterns to be found - white noise won't have any, of course!), FLAC will try a different filter instead and choose it if it improves matters.
Your knowledge about this topic really amazes me. Thank you very much for taking the time to explain this--I feel guilty, since I shouldn't be spoon-fed information like this ^_^; That was really easy to understand, maybe partially because I have a broad but non-detailed understanding of the topic.
* I picked some nonsensically slow FLAC option set. Don't judge a codec by its most stupid setting.
Ah, yes, of course. I was just amazed that a 2-hour encode of a 3-minute song only managed to compress 600 kilobytes more than a 30-second encode. I was always intrigued by how FLAC at -8 encodes audio in a couple of seconds while WavPack -mx6b24cc (my current collection) always takes somewhere around half a minute per song. But knowing that spending 2 more hours does not improve things by a significant margin, I have come to the conclusion that WavPack's encode speed is acceptable as it is. Just like bryant said, we can do an arbitrary number of decorrelation passes, so nothing is stopping us from making 256 passes and waiting for a 2-hour encode, but... I think I might go with -hx4m (or -x6, since the difference isn't huge) when I reencode my collection.

I plan to stop using lossy/hybrid mode since it increases file sizes by a noticeable amount (around 1 megabyte per song), so I'll just use Vorbis when I transfer songs to my portable devices. (I actually want to use Opus, but I'm wary of the patent-pool stuff going on; unless things get clearer and Fraunhofer steps out of the way, I'll stay a bit cautious of it.) Besides, mobile players just ignore the ReplayGain tags I have on my WavPack files, so I guess I'll just apply the gain when converting to a lossy format.
ffmpeg's WavPack encoder (don't use it, it only covers file format version 4 and has some quirks to it) has -compression_level 0 to 8. Try -compression_level 8 and compare to wavpack -hhx6. It will be slightly smaller (for one thing, it lacks the WavPack 5 block checksums), but the time taken is not worth it.
I try to use official encoders when possible. This is one reason why I don't use any of the other codecs even though they offer better compression ratios: I avoid non-GPL programs. (I am not implying that ffmpeg is closed-source, just that I avoid unofficial and non-GPL implementations.) I want to be able to access/play my files even after the apocalypse. So no OptimFROG or TAK for me. I have always used the -m flag when encoding for my collection, but I never compared things with the ffmpeg implementation, so that will definitely be fun.
As a FLAC user I would always want the "-m".
Yes, checksums for the win! I also use Btrfs as my filesystem, so hard drive failure or corruption should be noticed if it happens, especially since I'm too poor to buy a backup hard disk.
Even if you use WavPack 5 with the block-level checksums (which can be verified without decoding using wvunpack -vv), I would use the MD5 even if only to identify the files.
That is actually a very clever way to use checksums. Thanks for giving me the idea!

Re: WavPack decoding complexity vs FLAC

Reply #20
Here is an example where flac beats wavpack hhx6 on a 16-bit 96kHz dual mono chiptune without using --lax. With --lax it can be even smaller. Of course, it may not be the case with other chiptune files, 24-bit or not.

I want to play that game too  :))
1 868 458 bytes within subset (yours: 2 001 985)
Both figures after applying metaflac --dont-use-padding --remove-all to get everything equal.

(1 812 709 outside subset)


Edit: Chiptune signals are very peculiar indeed, and apparently weren't high on codec developers' radars:
1 900 633 for OptimFROG at --preset max --md5
2 030 216 for WavPack -hhmx6
2 187 250 for TAK -p4m -md5
2 499 712 for Monkey's "Extra High" (beating the 2 615 760 "Insane" which is also bigger than "High")
You and your apod magic :P Because the signal is >48kHz, I thought I'd be able to beat you with a fixed-blocksize encode of -8epl32r8 within subset, as it looks like you've limited LPC to 12. But it was not to be; it took a very slow variable-blocksize encode with output settings of -8epl32r8 to beat it:

1 863 521 subset variable

Re: WavPack decoding complexity vs FLAC

Reply #21
You and your apod magic :P
Blame drug dealer @ktf ! 8)

I actually didn't pick up that it was 96kHz and I could go even higher and stay within subset. So I used -l12 -r8 and ... some secret sauce.

 

Re: WavPack decoding complexity vs FLAC

Reply #22
too poor to buy a backup hard disk.
Uh-uh-uh-oh.
The two big sources of data loss are a complete hardware disk crash, and the "WTF did I just do?!".

If you truly intend to live without a backup - keeping your physical CDs as the backup, to be re-ripped if needed (damn, I am not doing that job over again, but I can afford not to) - then you can save some of the work by backing up metadata. Fixing metadata takes a lot of time ...
If you have the space for an extra hybrid set, then with WavPack that is likely to be the easiest: copy over only the .wv files. They will have the metadata, and they will have the MD5 of the full audio. If you need to re-rip, you can match by MD5 (well assuming it behaves the same in lead-in/lead-out), and if you have already run it through CUETools you can match by AccurateRip ID or CTDB ID, or as a last resort by track number & number of samples.



Even if you use WavPack 5 with the block-level checksums (which can be verified without decoding using wvunpack -vv), I would use the MD5 even if only to identify the files.
That is actually a very clever way to use checksums. Thanks for giving me the idea!
As of now, OptimFROG, Monkey's and WavPack have that feature. So the slowest-decoding formats are the quickest to verify.
(LA, WMAL and MPEG-4 ALS do decode slower than WavPack, but nobody uses them ... nobody should use WMAL, at least. And yes, WavPack -hh is no slower to decode than ALAC's "fast" mode - which is fast only in encoding - but who cares.)

Slower-decoding formats implementing faster verification is just fine - they have more need for it. If you want to run a verification job, chances are you want to run it for a whole drive.