Topic: New FLAC compression improvement

Re: New FLAC compression improvement

Reply #100
when the potential gains are so small.
... and worth so little in terms of hard drive cost.

But humans still play chess. Sometimes on their smartphones, which have more computing power than Deep Blue, the computer that defeated Garry Kasparov 25 years ago.

And computers still play chess. Heck, in the major computer chess championship, they even play on after one side has already won the match.

Re: New FLAC compression improvement

Reply #101
Hahah, yeah, I don’t know if there will ever be any practical use to this, but I guess my idea of “fun” is pushing the impractical to its limits. Some people like souping up a Honda Civic to drive like a racecar; I like seeing what can be done within the confines of the FLAC format, without breaking the standard or changing formats entirely. Different strokes for different folks…
 
As of now I use -8 -e -p as a standard across everything I do. On my old system it was painfully slow: just compressing a 16/44.1 album could run slower than real time, and a 24/192 release could take days on end. Plain -8, by contrast, was already fast enough that I never had to think twice about it, no matter the source file.
 
On my new system -8 -e -p runs in about the same amount of time it took my old system to handle a regular -8. Pretty amazing. Now I want to grind things down to a halt again!
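
For reference, a sketch of that baseline invocation (file names are hypothetical):

Code:
flac -8 -e -p "input.wav" -o "output.flac"

-e adds an exhaustive model search and -p an exhaustive search of the coefficient quantization precision, which is what makes it so much slower than plain -8.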

Re: New FLAC compression improvement

Reply #102
On my new system -8 -e -p runs in about the same amount of time it took my old system to handle a regular -8. Pretty amazing. Now I want to grind things down to a halt again!

Have you tried -8ep --lax -l 32 yet?
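
For context, that would be invoked along these lines (file names hypothetical); note that --lax with -l 32 on CDDA makes the output non-subset:

Code:
flac -8ep --lax -l 32 "input.wav" -o "output.flac"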
Music: sounds arranged such that they construct feelings.

Re: New FLAC compression improvement

Reply #103
On my new system -8 -e -p runs in about the same amount of time it took my old system to handle a regular -8. Pretty amazing. Now I want to grind things down to a halt again!

Have you tried -8ep --lax -l 32 yet?

I have not, but I do appreciate the suggestion!
I should clarify, I don’t want to generate non-subset files or break the standard in any way that might cause unexpected behaviors/incompatibilities… I’m more interested in finding the best possible compression, just within the confines of an otherwise “normal” FLAC file. Would -8ep be as far as things can be pushed given those considerations?

Re: New FLAC compression improvement

Reply #104
I don’t want to generate non-subset files
If the sample rate is > 48 kHz, you can use -l 32 and still stay within subset. You can get the "Lindberg" recording I have used for testing in this thread for free from http://www.2l.no/hires/ .

-8pe -r 8 -l 32
would be quite slow. But then you still have not accessed flac's bank of apodization functions other than through -8. You will find some examples earlier in this thread, but with the git build you can just as well throw in another "-A subdivide_tukey(<unreasonable number>)" for the excitement of watching paint dry.
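
A sketch of how that could look, assuming a > 48 kHz source so that -l 32 stays within subset (the file name and the subdivide_tukey count are arbitrary):

Code:
flac -8pe -r 8 -l 32 -A "subdivide_tukey(5)" "lindberg-96k.wav" -o "out.flac"

Mind that an explicit -A replaces the apodization functions the preset would otherwise use.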



Re: New FLAC compression improvement

Reply #105
I don’t want to generate non-subset files
If the sample rate is > 48 kHz, you can use -l 32 and still stay within subset. You can get the "Lindberg" recording I have used for testing in this thread for free from http://www.2l.no/hires/ .

-8pe -r 8 -l 32
would be quite slow. But then you still have not accessed flac's bank of apodization functions other than through -8. You will find some examples earlier in this thread, but with the git build you can just as well throw in another "-A subdivide_tukey(<unreasonable number>)" for the excitement of watching paint dry.




Thank you, I will play around with that for a bit and see how things compare! :)

 

Re: New FLAC compression improvement

Reply #106
Now I want to grind things down to a halt again!
Can I then tempt you with the following output line that ffmpeg presented to me, after a day of hard work?

size=      72kB time=00:00:00.17 bitrate=3440.1kbits/s speed=4.08e-06x

If it goes on like this, it will get a full second encoded in less than three days. Is that "down to a halt" enough, even with data density as high as 96/24? ;)
(I used a long line with several slowing-down options, including the "-multi_dim_quant" option, which must be either a horrible piece of work or successful trolling. Even a plain -compression_level 8 -multi_dim_quant 1 takes 41 minutes to encode one second. And it compresses like flac -4 - to the extent you can conclude anything from a three-second corpus :-o )
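
For anyone who wants to reproduce the slowdown, the shorter of those runs would be along these lines (input name hypothetical; -compression_level and -multi_dim_quant are existing options of ffmpeg's FLAC encoder):

Code:
ffmpeg -i "input.wav" -c:a flac -compression_level 8 -multi_dim_quant 1 "out.flac"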

Re: New FLAC compression improvement

Reply #107
Amusing, comparing apples with oranges. Nothing new here, move along.

Re: New FLAC compression improvement

Reply #108
A web search turns up a seven-year-old ticket on this fault in ffmpeg's FLAC encoder, so this is not comparing apples to anything of nutritional value.

Anyway, it was an attempted proof of concept aimed at users who say they want the absolute best FLAC compression ratio - no they don't, as there is no practical limit to how slow encoding can get. The attempt was futile as far as the "best" goes (try for yourself with the attachment!), and there seems to be nothing ffmpeg can do to out-compress 1.4.0.

And I still wonder why you find it necessary to spew toxicity all over HA without ever offering the solutions you claim exist. At that stage you just disappear, only to return with the same attitude. Oh, and: do as I say, not as I do.

Re: New FLAC compression improvement

Reply #109
Just tried a random wav file with your beloved 1.4.0 flac encoder.
And ffmpeg beats it with ease at -compression_level 12. I even tried the -8e flag.

But be arrogant as always and live with it; one cannot expect anything more from such kind.

Re: New FLAC compression improvement

Reply #110
And that ticket is irrelevant, but teaching a pig new skills is impossible.

Re: New FLAC compression improvement

Reply #111
Thank you for the well-documented test using -compression_level 12 to compare CDDA (or perhaps something was different that you deliberately left out).

People are poking fun at how a certain codec developer posted a test corpus of three given CDs - in the early 2000s, when computing power was expensive. Fast-forward twenty years, and a test corpus of thirty-eight given CDs, specifically sticking to subset, is supposedly irrelevant against a claim about a "random wav file" that is unnamed, of undocumented resolution, and such that the combination of signal and setting cannot possibly stay within subset. If you then wanted a random file, I attached one. Did you test it?


And what is irrelevant about that ticket? It reports that a certain setting takes a uselessly long time to encode; are you saying that speed is not relevant to an encoder?

Re: New FLAC compression improvement

Reply #112
Just tried a random wav file with your beloved 1.4.0 flac encoder.

Just hacked into your computer to check that you really did do that, and it turns out that you completely made it up. You didn't encode a single thing.

Also, you really don't need to download porn nowadays.

Re: New FLAC compression improvement

Reply #113
So, I randomly picked the "Bravo Hits 57" compilation and created wav image files with external CUE files. It's modern-mastered pop music with some acoustic tracks. The encoders are:

e:\WORK>flac -v
flac 1.4.0

e:\WORK>ffmpeg
ffmpeg version N-107264-g23fde3c7df-gc471cc7474+3 Copyright (c) 2000-2022 the FFmpeg developers
  built with gcc 12.1.0 (Rev2, Built by MSYS2 project)
Writing application                      : Lavf59.25.100
(ffmpeg was compiled on this machine a couple of months ago)

Parameters for encoding were these:

flac -8 and flac -8e
ffmpeg -compression_level 12
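
Spelled out in full, for CD1 these look roughly like this (output names chosen to match the results below):

Code:
flac -8 "Various Artists - Bravo Hits 57 CD1.wav" -o "Various Artists - Bravo Hits 57 CD1 (flac -8).flac"
flac -8e "Various Artists - Bravo Hits 57 CD1.wav" -o "Various Artists - Bravo Hits 57 CD1 (flac -8e).flac"
ffmpeg -i "Various Artists - Bravo Hits 57 CD1.wav" -compression_level 12 "Various Artists - Bravo Hits 57 CD1 (ffmpeg-l12).flac"

(With a .flac output name, ffmpeg picks its native FLAC encoder automatically.)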

Results are as follows:

Code:
Various Artists - Bravo Hits 57 CD1.wav                766,37 M
Various Artists - Bravo Hits 57 CD1 (ffmpeg-l12).flac  539,54 M
Various Artists - Bravo Hits 57 CD1 (flac -8).flac     538,24 M
Various Artists - Bravo Hits 57 CD1 (flac -8e).flac    538,12 M
   
Code:
Various Artists - Bravo Hits 57 CD2.wav                740,79 M
Various Artists - Bravo Hits 57 CD2 (ffmpeg-l12).flac  526,02 M
Various Artists - Bravo Hits 57 CD2 (flac -8).flac     525,00 M
Various Artists - Bravo Hits 57 CD2 (flac -8e).flac    524,91 M

Of note: I didn't measure time, but flac -8 was subjectively the fastest, and I think ffmpeg was a bit faster than flac -8e. ffmpeg reported an encoding speed of 40x; not bad.

To conclude: there may be cases where the ffmpeg flac encoder will "win", but it all depends on the type of music, the mastering, and the noise levels. Claiming that it always compresses better than the official flac is stupid.
Error 404; signature server not available.

Re: New FLAC compression improvement

Reply #114
The tiny compression differences seem to be due to padding; a test on a 43-minute 16-bit 48 kHz wav:
Code:
ls -lahS --block-size=K *.flac                                                                         
-rwxrwxrwx 1 b b 246494K Sep 20 15:14 flac14best.flac
-rwxrwxrwx 1 b b 246442K Sep 20 15:18 ffmpeg12.flac
-rwxrwxrwx 1 b b 246430K Sep 20 15:14 flac14bestNoPadding.flac
-rwxrwxrwx 1 b b 246321K Sep 20 15:14 flac14bestNoPadding8e.flac
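
If padding really is the culprit, it can be stripped after the fact with metaflac to make the sizes directly comparable - a sketch, run on one of the files above:

Code:
metaflac --remove --block-type=PADDING --dont-use-padding flac14best.flac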
PANIC: CPU 1: Cache Error (unrecoverable - dcache data) Eframe = 0x90000000208cf3b8
NOTICE - cpu 0 didn't dump TLB, may be hung

Re: New FLAC compression improvement

Reply #115
ktf & co. killed -e with release 1.4.0, at least for CDDA; in my test, -8p was faster than -8e, and its size improvement over -8 was twice as big. That is 2 * [small number], though.

Of course -e could be useful for slowing down flac if you want to give it a handicap in a competition against a weaker opponent ;D

Re: New FLAC compression improvement

Reply #116
On some files ffmpeg 12 can beat 1.4.0 -8. But on the same files 1.4.0 -8 -r 8 can beat ffmpeg 12, while still being significantly faster.
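
A sketch of that pairing (file names hypothetical):

Code:
flac -8 -r 8 "input.wav" -o "flac14-8r8.flac"
ffmpeg -i "input.wav" -compression_level 12 "ffmpeg12.flac"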

Re: New FLAC compression improvement

Reply #117
I once discovered that with ffmpeg's flac encoder, higher -lpc_passes doesn't necessarily translate to better compression (the phenomenon isn't very visible at a "sane" number of iterations) - so during the holidays I went back to the flac-irls-2021-09-21.exe build and ran a few days of encoding tests - this time on the CDDA corpus in my signature.
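
For reference, that option is set like so (file name hypothetical; -lpc_passes is an existing option of ffmpeg's FLAC encoder):

Code:
ffmpeg -i "input.wav" -c:a flac -compression_level 8 -lpc_passes 4 "out.flac"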

tl;dr:
A "superior" choice of windowing functions gives a size advantage that the irlspost runs cannot catch up with - indicating that both the least-squares and the least-absolute-value approaches are too vulnerable to the outliers that the windowing functions happen to discard.

What I did:
* picked a few settings as a "starting point", and for each added an irlspost(N) or irlspost-p(N) on top, for N = 1 to 9 (which I think is already beyond sanity), or up to 14 or even 19, and recorded the sizes per album (one such sweep is scripted in the sketch right after this list)
* did "the same" with ffmpeg, at compression levels 8, 8 with additional precision, and 12.
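
A sketch of how such a sweep (here the "bad setting" from the findings below) can be scripted from the Windows command line, assuming the flac-irls-2021-09-21.exe build mentioned above (loop variable and output names are mine):

Code:
for %N in (1 2 3 4 5 6 7 8 9) do flac-irls-2021-09-21.exe -l 5 -q 5 -b 4097 -A "irlspost(%N)" "album.wav" -o "irls-%N.flac"

(Inside a .bat file the loop variable would be written %%N instead.)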

Some findings:
* Even more: atop a "powerful" setting (example: -8p), increasing the pass count keeps improving file size until way past sanity. (There are exceptions, but for two thirds of the albums, the 19th and final pass made for the smallest files.)
* Atop a "bad" setting, on the other hand ... I chose -l5 -q5 -b4097 -A "irlspost(%l)" in order to fix the coefficient precision and keep the Rice partitioning equal across runs (the odd block size forces partition order 0, i.e. no partitioning at all). The number of passes that made for the smallest files was, per album: 6 4 4 5 5 6 5 4 5 4 4 5 4 6 9 4 4 3 3 6 4 4 4 5 3 2 9 8 4 3 9 3 4 9 4 4 5 3 (where 9 was the maximum run in this part of the test).
* ffmpeg's "powerful" settings aren't as robust as reference flac's. Starting out with -lpc_passes atop -compression_level 12, a mighty thirteen passes more often than not generate larger files than a mere two passes do. Not that the difference is much.
* Both with ffmpeg and with reference flac's "not so powerful" settings, the following holds with very few (and small in size difference) exceptions: size has a \_/ relationship with the irlspost count. Increasing the count first improves size monotonically to a minimum and then makes it monotonically worse.

I could post some size data, but ... nah, I don't think there is much quantitative insight to gain from them.

Learnings?
Above I indicated the following qualified guess:
The reason the irls passes "cease to improve" is that this method, too, needs a good choice of apodization function. (Here I have outright presumed that the IRLS iterations run on the windowed data - @ktf, is that correct?)
Assigning more windowing functions is a kinda-brute-force way to arbitrarily remove or downweight parts of the signal, and this suggests there might be some theoretical benefit from (1) an initial run, then (2) re-windowing based on what the actual outliers are. Emphasis on theoretical, because nothing says this is an efficient use of CPU time - the use of windowing functions might accomplish "nearly the same, much faster" for all I know.

(Hm, ktf: if you feel like compiling the irlspost routine into 1.4.x, I could redo the test with the current -8 windowing - if only to confirm the suspicion.)