http://sjeng.org/ftp/vorbis/Garf_Bl33p!.flac
Any idea why the performance of most lossless codecs is so horrible on this very simple signal?
I would expect prediction to be almost perfect for it.
But:
FLAC bitrate: 580 kbps
APE bitrate: 750 kbps
WavPack: also >500 kbps
ZIP bitrate: 21 kbps
7z/RAR bitrate: 2.5 kbps
I just tried it with optimfrog using the 'normal' compression setting, and the output size was 51.9 KB.
bzip2 bitrate: 131 bps
I think the problem is the use of Rice coding. My understanding is that it was designed for Laplacian-distributed residuals, and it is decidedly sub-optimal where you have mostly zeros with an occasional +/-65535.
Good test sample!
--John
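John's hunch about Rice coding is easy to sanity-check numerically. The sketch below is illustrative only: it assumes a single 44.1 kHz channel whose residuals are zero except for one +65535 jump every 202 samples, which is a guess at the file's structure, not a measurement.

```python
def zigzag(v):
    """Map a signed residual to an unsigned value for Rice coding."""
    return 2 * v if v >= 0 else -2 * v - 1

def rice_bits(v, k):
    """Bits to Rice-code v with parameter k:
    unary quotient + one stop bit + k remainder bits."""
    return (zigzag(v) >> k) + 1 + k

# Assumed residual pattern: 201 zeros, then one +65535 jump.
period = [0] * 201 + [65535]

# Even the best Rice parameter costs ~11 bits per (almost always zero!)
# sample, because k must be large enough to tame the rare huge residual.
costs = {k: sum(rice_bits(v, k) for v in period) for k in range(16)}
best_k = min(costs, key=costs.get)
bits_per_sample = costs[best_k] / len(period)
kbps = bits_per_sample * 44.1  # one assumed 44.1 kHz channel
print(best_k, bits_per_sample, kbps)
```

Under these assumptions the best parameter lands around k = 9 at roughly 500 kbps for a single channel, which is in the same ballpark as the FLAC and WavPack figures quoted at the top of the thread.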
Those last few percent of compression that the best lossless compressors get is at the cost of an enormous amount of complexity in the prediction algorithms. They look at dozens of previous samples and have dozens of adjusting coefficients. So, a single transition will generate a whole train of non-zero residual values to encode. Another version that simply used the previous sample as the prediction would encode this much better, but would virtually never work better on any sample of real music (in fact, WavPack's "fast" mode compresses this sample to about half the size of the "high" mode for this reason).
Also, no lossless audio compressor is going to take advantage of exactly repeating sequences of numbers the way a general data compressor does, because these never occur in audio data and require completely different coding algorithms (i.e. dictionary-based, not Rice coding).
An "ideal" compressor could be made to try several different simple and complex algorithms to detect cases like this, but most people would not be willing to put up with the encoding time penalty unless it improved performance on any "real" samples.
BTW, your sample scared my cat!
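bryant's contrast between a long adaptive predictor and a plain "previous sample" predictor can be illustrated with a toy version of the signal. The plateau lengths (102 and 100 samples) are assumptions for the sketch:

```python
# A square wave like the test sample: plateaus assumed to be 102 samples
# of +32767 followed by 100 samples of -32768, repeated.
signal = ([32767] * 102 + [-32768] * 100) * 10

# Order-1 predictor: predict each sample as equal to the previous one.
residuals = [signal[0]] + [signal[i] - signal[i - 1]
                           for i in range(1, len(signal))]

# Count how much is actually left to encode.
nonzero = sum(1 for r in residuals if r != 0)
print(nonzero, len(residuals))
```

Only the edges (plus the very first sample) leave anything to encode; a long adaptive predictor, by contrast, keeps mispredicting for many samples after each transition while its coefficients re-settle.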
I think Bryant's right. Note that for regular signals like this, tuning the blocksize in FLAC gets you a lot, e.g. "flac -8 --lax --blocksize=384" is about 169kbps. But BWT compressors like bzip2 will really kick on this signal, they'll probably use just a few bits to encode a single cycle, and a little for the dictionary.
Josh
Garf, I haven't looked at the WAV yet, but that FLAC file is very, very strange! It looks like a killer sample for RAR!
Original (FLAC) = 4'394'931 bytes
RAR (Best) => 80'162 bytes
RAR (Good) => 81'523 bytes
RAR (Normal) => 27'680 bytes
RAR (Fast) => 55'240 bytes
RAR (Fastest) => 105'302 bytes
Just looking at the FLAC output with a hex editor shows that it's extremely redundant.
I win! B)
My packer project brings the .WAV down to 76 bytes.
Compression ratio: 1:139264
Couldn't this problem be avoided by adding a function to lossless codecs that checks whether the output file is redundant, as numlock noted, and switches to a different compression mode?
Just my 0,0000002 euros
Edit: well, the problem of the time penalty would still be there...
If we're into RAR killer samples, I remember seeing some files (it was some Unreal Tournament mod IIRC) where RAR miserably lost to ZIP, even to its own ZIP compressor. I can dig them up if anyone's interested.
Your file might make RAR's heuristics fail, and thus pick the wrong algorithm.
If you still have it I'd be interested, yes.
Sorry, Garf, but why did you upload FLAC instead of ZIP, then?
-Eugene
Had the same idea. Some ultrasuperduperhigh compression mode would run two compressors at once, the regular audio compressor and e.g. a ZIP compressor, and then store the blocks that are compressed the most.
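A sketch of that idea, with everything here hypothetical: zlib stands in for the general-purpose path, raw storage stands in for the losing case, and a one-byte tag records which representation won for each block.

```python
import hashlib
import zlib

def pack_block(block: bytes) -> bytes:
    """Try a general-purpose compressor; keep whichever form is smaller,
    prefixed with a one-byte method tag."""
    z = zlib.compress(block, 9)
    return b"\x01" + z if len(z) < len(block) else b"\x00" + block

def unpack_block(packed: bytes) -> bytes:
    method, payload = packed[0], packed[1:]
    return zlib.decompress(payload) if method == 1 else payload

# A block as redundant as the test sample vs. a noisy-looking block.
redundant = (b"\xff\x7f" * 102 + b"\x00\x80" * 100) * 50
noisy = b"".join(hashlib.sha256(bytes([i])).digest() for i in range(32))

for block in (redundant, noisy):
    assert unpack_block(pack_block(block)) == block  # lossless round trip

print(len(pack_block(redundant)), len(pack_block(noisy)))
```

The redundant block collapses to well under 1% of its size while the noisy one is stored as-is with one byte of overhead; a real codec would make the same per-block choice against its normal audio path, paying only the extra encoding time.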
At least that sample made me understand the true performance of my dial-up modem (56k).
Your modem uses compression; you could say it zips on the fly.
I didn't realize it zipped so well until after I uploaded it.
Zero-knowledge compressors have a big advantage on this "audio" sample: they don't believe it's audio.
After seeing a short header, followed by a few alternations of:
FF7F (i.e. 32767 in hex, in reversed byte order) repeated 102 times
and:
0080 (i.e. -32768 in hex, in reversed byte order) repeated 100 times
... the packer will assume this continues for a long time.
For example, a well-tuned arithmetic coder can encode a whole alternation (2*102 + 2*100 bytes) in a matter of bits.
With arithmetic coding, if you remove the WAV header, one could encode this whole file in less than 10 bytes, and still handle any input file (or unexpected data at the end of this file) correctly.
Even if you know the repeating sequence has a very high probability of appearing, you still must reserve that remaining 0.000001% of probability for all other cases. That's also why the file would take 10 bytes, and not zero bits.
Huffman encoding, on the other hand, can only assign whole bits, i.e. it can assign "0" to the most common sequence, and "1xxxxxxxxxxxxxxx..." to all possible others.
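The gap numlock describes is just the -log2(p) cost of an ideal entropy coder. A small sketch, where both the repeat probability and the alternation count are assumed round numbers rather than values measured from the file:

```python
import math

p_repeat = 1 - 1e-6                # assumed probability of "one more repeat"
arith_bits = -math.log2(p_repeat)  # fractional-bit cost per near-certain symbol
huffman_bits = 1.0                 # Huffman's floor: one whole bit per symbol

n = 26000  # assumed alternation count (~10.5 MB file / 404 bytes each)
print(arith_bits * n, huffman_bits * n)
```

With arithmetic coding, all 26,000 near-certain "it repeats again" symbols cost a tiny fraction of one bit in total, versus 26,000 bits (over 3 KB) under Huffman's one-bit floor, which is why the whole file can fit in a few bytes plus the overhead of the escape probability.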
Yes, and bzip2 is the winner here. It compressed the file to 983 bytes. WinRAR (with PPM forced) compressed it to 4755 bytes.
Such results are pretty obvious and very similar to those of Short_Block_Test_2, which is very sparse as well: http://eltoder.nm.ru/temp/Short_Block_Test_2.res
-Eugene
RAR 3.x can shrink the WAV further, to ~4756 bytes, when forcing Markov ("text") compression at maximum order (99).
Edit:
LOL eltoder, what did you do to RAR to gain that extra byte?
About bzip2: cool, but that's still far away from my 76 bytes.
The maximum order is in fact 63, not 99. And the best results are obtained at order 60.
And could you share your great program with us?
-Eugene
Oh, I should have tried all possible values then.
Well, why not... (thanks for the compliment btw) but I'm a bit ashamed: it's awfully slow, and optimized for data, not audio (except for Garf ® © audio of course). Also the names of the input and output files are hardcoded in the source...
Edit: Btw I absolutely
love your signature !
I absolutely need this program!
And I like my sig too, but now I think it's a bit long
-Eugene
Ok, I'll send it to you tonight.
Be advised though: to compress such files well I had to hack the probability modelling curve... so you (or we) would have to find the parameters again.
Also, it's a bit-based program (it compresses bits, not bytes). However, it uses heuristics and tries to still exploit byte, word, dword alignments, when it is possible.
The probability estimation part is a mess: it uses variable-length PPM, hashtables, dynamic decay curves, etc. It looks more like a nuclear physics simulator than a packer. I think a complete rewrite should be done ASAP.
IIRC, I have an improved arithmetic coding backend, and crazy ideas lying around, that I could use in the new project (when time allows).
Edit: I'll have a bit more time when my laptop's back from Shinjuku-Ku with a new hdd.
SBC Archiver 0950 beta:
sbc.exe c -m3 -b63 newarchive Garf_Bl33p!.wav
compresses it to 217 bytes...
Very impressive. It blasts away RKIVE and 777... even though the latter uses arithmetic coding.
Edit: SBC is one of the first archivers to combine block sorting and arithmetic coding.
All of them do. Imagine PPM or BWT without arithmetic coding?
-Eugene
Well, bzip2 does use BWT and plain Huffman... but that is for performance and patent reasons, of course.
Btw, have you made a compressor yourself, Eugene?
I thought bzip2 used some sort of range coding. Seems I was wrong.
At least, most such compressors use range coding after BWT.
Compressor? I've tried several times, but never got anything useful. LZW- and PPM-like things were rather fun, though.
-Eugene
I wrote an LZ77 or LZW program (can't remember which) once, but its performance was catastrophic: it inflated files most of the time.
Garf, could you please make this sample available again? I wanted to try it with WavPack 4's asymmetrical modes. (quoting David's readme: "Because the standard compression parameters are optimized for "normal" audio, this option works best with "non-standard" audio where it can often achieve enormous gains.")
Also, about those RAR results:
The reason is simple. RAR's audio compression routines automatically kick in at Best and Good modes. So at Best and Good it tries to compress it as audio, and at Normal, Fast and Fastest it compresses it as general data.
In "Auto" mode WinRAR will decide when to use the audio compression depending on source data and only if "Good" or "Best" compression method is selected.
Regards;
Roberto.
I still have that file and just ran it through a bunch of lossless audio compressors (plus the old and new WavPacks, of course):
Original File
-------------
10,584,044 garf.wav (original file)
Monkey's Audio 3.97
-------------------
5,642,404 garf-xh.ape (extra high mode)
5,494,040 garf-h.ape (high mode)
4,530,440 garf-n.ape (normal mode)
4,527,024 garf-f.ape (fast mode)
FLAC
----
4,425,222 garf-1.flac (mode 1)
4,397,071 garf-5.flac (mode 5)
4,392,479 garf-8.flac (mode 8)
OptimFROG 4.507
---------------
53,186 garf.ofr (default mode)
2,985 garf2.ofr ("newbest" mode)
RKAU 1.07
---------
164,261 garf-1.rka (fast mode)
590,353 garf-3.rka (high mode)
LA 0.4
------
3,343,564 garf-h.la (high mode)
2,364,142 garf.la (default mode)
WavPack
-------
2,134,878 garf-ff.wv (3.97, very fast mode)
1,476,068 garf-ff.wv (4.0a2, very fast mode)
766,124 garf-ffx4.wv (4.0a2, very fast mode, extra processing)
1,804,879 garf-f.wv (3.97, fast mode)
1,503,778 garf-f.wv (4.0a2, fast mode)
769,248 garf-fx4.wv (4.0a2, fast mode, extra processing)
4,034,834 garf-n.wv (3.97, default mode)
1,741,260 garf-n.wv (4.0a2, default mode)
770,792 garf-x4.wv (4.0a2, default mode, extra processing)
4,035,317 garf-h.wv (3.97, high mode)
2,675,828 garf-h.wv (4.0a2, high mode)
768,158 garf-hx3.wv (4.0a2, high mode, extra processing)
The first interesting point is that WavPack, RKAU, LA and Monkey's Audio all have the characteristic that the "higher" modes do worse than the "faster" modes.
Second, the "extra" processing mode does significantly help WavPack's performance with this sample.
Finally, OptimFROG is pretty amazing. Good job, Florin!