Topic: Retune of FLAC compression levels

Re: Retune of FLAC compression levels

Reply #25
I don't think ~ 0.3% improvement on average (...) with no slowdown is in any way minimal? Also, -0 improves more than 1% with no slowdown. Lots of topics here discuss -p and -e, and those usually result in far smaller gains, at the cost of quite some slowdown.

I agree it might not be particularly necessary, but I don't think it is a waste of time.
Agreed, some of your new presets are quite a bit more efficient (hadn't noticed that earlier in its full extent, sorry about that), but why this zig-zag speed/compression ratio curve for the encoder?

When, as you wrote, all FLAC presets decode at very similar (and insanely fast) speeds, why not design a convex-hull preset curve? e.g., take your retuned presets 0, 1, 3, 6, 7, and 8 (or maybe a bit more compression efficiency for 8 ) and find new presets 2, 4, and 5 which lie along the curve, i.e. between the neighboring presets in both encoding speed and compression ratio?
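In rough Python, picking the presets that lie on such a curve could look like this (the (encoding time, size) pairs are made-up placeholders, not measurements):

Code:
# Hypothetical (encoding time, size as % of original) per preset.
presets = {
    0: (1.0, 60.0), 1: (1.2, 59.0), 2: (1.5, 59.2),
    3: (2.0, 57.5), 4: (2.6, 57.7), 5: (3.0, 56.9),
    6: (4.0, 56.6), 7: (5.5, 56.3), 8: (8.0, 56.1),
}

def lower_hull(points):
    """Lower half of Andrew's monotone-chain convex hull."""
    pts = sorted(points)
    hull = []
    for p in pts:
        # pop while the last hull point lies on or above the new segment
        while len(hull) >= 2:
            (ox, oy), (ax, ay) = hull[-2], hull[-1]
            if (ax - ox) * (p[1] - oy) - (ay - oy) * (p[0] - ox) > 0:
                break
            hull.pop()
        hull.append(p)
    return hull

hull = lower_hull(presets.values())
for n, pt in sorted(presets.items()):
    print(f"-{n}:", "on the curve" if pt in hull else "above the curve")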

By doing that change, you wouldn't even have to ask for comments here, I think everyone would accept such a change. Especially since you already managed to achieve both speed and efficiency improvements on some presets with your retuning.

Examples for (mostly) convex-hull preset curves in video coding:

https://spin-digital.com/wp-content/uploads/2023/02/compression_efficiency_bd-rate_encoders_4K-768x406.

Chris
If I don't reply to your reply, it means I agree with you.

Re: Retune of FLAC compression levels

Reply #26
why this zig-zag speed/compression ratio curve for the encoder?
Yes, it resembles TAK -p#e and -p#m settings, doesn't it?

I suppose I might be the first to mention that FLAC 1.4.2 -0 compresses better than -1 on a lot of material... and faster.
"Something bothering you, Mister Spock?"

Re: Retune of FLAC compression levels

Reply #27
Also, in the IETF draft, ktf & co will recommend that whoever wants maximum compatibility with decoders that suck should stick to 1152 or 4096 samples in a block.  So while 2304 seems to improve things, there is a case against it - at least for -0.
The IETF document actually states that 2304 is a common blocksize.
https://www.ietf.org/archive/id/draft-ietf-cellar-flac-05.html#name-blocksize-bits
Are there any real examples (name and version of hardware or software decoders) that only work with 1152 and 4096 but not 2304?

Here are some previous tests regarding the use of different blocksizes on the lower presets:
https://hydrogenaud.io/index.php/topic,123025.msg1018543.html#msg1018543

Re: Retune of FLAC compression levels

Reply #28
@bennetng : you are right, I cannot read. (My tests in #263 above your link, but those would have to be redone now.)

@C.R.Helmrich on convex graphs: decoding complexity is also a point. So 0, 1, 2 are restricted to fixed-only predictors, and -2 is not "good" in any other sense. And 6, 7, 8 go to the highest prediction order. So the "relevant" test for convexity is within each triplet: 1/4/7 below the 0&2 midpoint / the 3&5 midpoint / the 6&8 midpoint, respectively.
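In rough Python, that midpoint test per triplet could look like this (the numbers would be measured (time, size%) pairs; the ones below are placeholders):

Code:
def below_midpoint(inner, left, right):
    # the middle preset should sit at or below the midpoint of its
    # neighbours in both encoding time and compressed size
    return (inner[0] <= (left[0] + right[0]) / 2
            and inner[1] <= (left[1] + right[1]) / 2)

# e.g. preset 1 against the 0&2 midpoint:
print(below_midpoint((1.2, 59.0), (1.0, 60.0), (1.5, 59.2)))  # True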

@Destroid : Interesting? Are you using a spinning drive where the reduced filesize also reduces write time?

Re: Retune of FLAC compression levels

Reply #29
I am not confident that the -l 32 for high resolutions is "ready yet".  For testing I picked nine 96/24 signals, trying to get around a quarter of an hour per file by merging songs from the same source, and while most behave nicely and monotonically, the exceptions pulled the average quite a lot on this small corpus, where -8 -l <N> ended up smallest at N=19.
Here are the two bad guys with the total:

             Craters track   TTP: Tx20 EP   All 145 minutes
duration%    10.35%          12.24%         100.00%
-8e          52.43%          52.51%         53.75%
-8 -l 8      56.26%          57.80%         55.18%
-8 -l 9      54.74%          56.14%         54.71%
-8 -l 10     53.88%          55.26%         54.44%
-8 -l 11     53.42%          54.62%         54.26%
-8 -l 12     53.32%          54.53%         54.21%
-8 -l 13     53.23%          54.31%         54.15%
-8 -l 14     53.15%          54.05%         54.08%
-8 -l 15     53.13%          53.92%         54.08%
-8 -l 16     53.09%          53.78%         54.04%
-8 -l 17     53.09%          53.62%         54.04%
-8 -l 18     53.10%          53.54%         54.03%
-8 -l 19     53.09%          53.45%         54.02%
-8 -l 20     53.11%          53.40%         54.02%
-8 -l 21     53.11%          53.45%         54.03%
-8 -l 22     53.12%          53.40%         54.03%
-8 -l 23     53.14%          53.52%         54.05%
-8 -l 24     53.15%          53.57%         54.06%
-8 -l 25     53.16%          53.66%         54.07%
-8 -l 26     53.19%          53.74%         54.09%
-8 -l 27     53.22%          53.76%         54.10%
-8 -l 28     53.25%          53.85%         54.12%
-8 -l 29     53.27%          53.84%         54.12%
-8 -l 30     53.28%          53.90%         54.13%
-8 -l 31     53.29%          53.92%         54.13%
-8 -l 32     53.30%          53.95%         54.14%
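(For anyone wanting to reproduce this kind of sweep, it is scriptable along these lines - a sketch only, with a placeholder file name; the ratio is against the WAV size, header included:)

Code:
import os, subprocess

SRC = "input_96_24.wav"   # placeholder
raw = os.path.getsize(SRC)

for order in range(8, 33):
    out = f"test_l{order}.flac"
    subprocess.run(["flac", "-8", "-l", str(order), "-f", "-o", out, SRC],
                   check=True, capture_output=True)
    print(f"-8 -l {order}: {os.path.getsize(out) / raw:.2%}")
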
The two tracks:
* Admittedly, the Tx20 EP ( https://teaparty.com/tx20 ) was one signal I knew had sprung some surprises. But I did not remember precisely what surprises, except that they made a big impact. After testing, going to https://hydrogenaud.io/index.php/topic,120158.msg1014183.html#msg1014183 and checking: it did improve well going to -l 13 and 14 ...
* Craters: Batagaika. Instrumental and distorted, picked from https://doomedandstoned.bandcamp.com/album/doomed-stoned-the-instrumentalists-vol-i for a quite arbitrary reason: I was looking for something around 15 minutes, and this is 15:01. (Oh, in addition to being 96/24 and available for free in case anyone is interested.)

Apart from that:
* Merged to one track: 15 minutes jazz from the now-defunct 2L test bench
* Merged to one track: 19 minutes classical from the same
* Merged to one track: 16 minutes from Kimiko Ishizaka's three free Bach piano recordings, https://kimikoishizaka.bandcamp.com
* Kayo Dot: The Second Operation (Lunar Water), 13 minutes (selected for being longest on album, the album was included in the tests at the HA link above)
* Cult of Luna: Lights on the Hill, 15 minutes (also selected for being longest on album - this is however not the same Cult of Luna album as I used in the above HA link)
* Hooded Menace: Elysium of Dripping Death (selected for being longest 96/24 on this free compilation https://relapsesampler.bandcamp.com/album/relapse-sampler-2015 )
* The Stargazer's Assistant: Birth of Decay, 18 minutes (the only 96/24 on this free compilation: https://houseofmythology.bandcamp.com/album/watch-pray-five-years-of-studious-decrepitude . Also, it isn't metal, which the corpus has enough of - it is more dark ambient / electronic.)
* And a tenth signal, actually, but 88.2/24 and only three minutes in total, so I didn't bother to take it out of the total when I wrote "nine" 96/24 above. Anal Trump: That Makes Me Smart!. "You Suffer", anyone? Also weird results.


Small corpus of not-exactly-chartbusters, but maybe tells a story.

Also, observation: some signals "benefit from" -l <odd number>, some from -l <even number>. Hm.

Re: Retune of FLAC compression levels

Reply #30
Quote
FLAC will have its compression levels similarly grouped. Levels 0, 1 and 2 will be the fastest decoding, 3, 4 and 5 will be slightly slower decoding, and 6, 7 and 8 will decode the slowest.
One of the supposed idiosyncrasies of FLAC was that decoding time is approx. the same no matter the encoding preset.

Your proposal breaks that assumption on purpose, which has been the standard since FLAC's creation. I don't really see the point of changing that, which would mean having to change most websites referencing FLAC usage (even if it's just a 15% difference, unless I didn't understand it right).

If compression and decoding time may be improved, great.

But I agree with some of the comments here that making a 0.3% compression or small encoding time improvement makes no sense when it is 2023 and -official- FLAC remains single threaded (and let's not talk about GPU acceleration).  It has been mostly the same for years... and while it's always great to optimize current code or presets, it still has almost zero impact on real usage.

Well people can put their time on whatever they want, but it would clearly be better spent on things which bring real improvements to the table.

Re: Retune of FLAC compression levels

Reply #31
I am not confident that the -l 32 for high resolutions is "ready yet".
Seems to coincide with some of my previous tests about the diminished (if not negative) return of -l32.

Higher -l for hi-res may work better with subdivide_tukey(6) or subdivide_tukey(5) + three simple windows of your choice, and perhaps with -r8 and -b8192.

Re: Retune of FLAC compression levels

Reply #32
I would keep just -0 and -8, where everything between -1 and -8 is treated as -8, this way I wouldn't need to recompress every FLAC file I download from bandcamp.com =)

Re: Retune of FLAC compression levels

Reply #33
@Destroid : Interesting? Are you using a spinning drive where the reduced filesize also reduces write time?
I was reporting on the 32-bit compiles on a RAM drive. Sorry not to have specified that before. Anyway, I use a script to bench all modes (16-bit, stereo, 44.1 kHz material).

Porcus is the real benchmarker IMO :)

Edit: typo; also, this is Win32-specific (if you want GCC vs. Intel Win32 numbers [just for giggles], let me know and I can PM some).
"Something bothering you, Mister Spock?"

Re: Retune of FLAC compression levels

Reply #34
version       enc. speed    size
1.4.2 -8       1310x         11,792,799
1.4.2 -7       1677x         11,803,842
retune -8     1794x         11,805,269
Similar results here with CD material. I don't think -8 needs speed-ups in trade for compression.
For my corpus, compression did in fact improve instead of worsen. I'll reconsider.

Maybe we could keep the numbered levels as-is but also add new, non-numbered presets in addition to those levels so that anyone interested can use them, and people attached to their old presets for whatever reason won’t miss them.
Most people are asking for less presets, not more  :))

Agreed, some of your new presets are quite a bit more efficient (hadn't noticed that earlier in its full extent, sorry about that), but why this zig-zag speed/compression ratio curve for the encoder?
This is how TAK works. Each group of 3 presets (each 'zig', if you will) belongs to a certain decoder speed. Only, the differences in decoding speed are really small. That is the case with TAK as well, though.

I could consider really optimizing for encoding speed versus compression (convex hull), ignoring decoder speed. The differences there are only very small after all.

I suppose I might be the first to mention that FLAC 1.4.2 -0 compresses better than -1 on a lot of material... and faster.
That is something that should have been fixed here with 1.4.0. Are you sure that you didn't see that behaviour with older versions of FLAC? If you indeed have a lot of material where FLAC 1.4.2 has -1 produce larger files than -0, I'd like to know what kind of audio that is. I might be able to tune things a little better with that knowledge.

I am not confident that the -l 32 for high resolutions is "ready yet".
With which compile did you test? I did make a slight change in the order guessing algorithm for this proposal.

Quote
FLAC will have its compression levels similarly grouped. Levels 0, 1 and 2 will be the fastest decoding, 3, 4 and 5 will be slightly slower decoding, and 6, 7 and 8 will decode the slowest.
One of the supposed idiosyncrasies of FLAC was that decoding time is approx. the same no matter the encoding preset.

Your proposal breaks that assumption on purpose, which has been the standard since FLAC's creation.
So.... you have a problem with me making decoding of certain presets *faster*, because an old rule of thumb is now a little bit less true? This proposal speeds up decoding of presets 0, 1 and 2 by about 8% (assuming MD5 is checked, otherwise it is more) and slows down presets 6, 7 and 8 for audio with a sample rate > 48kHz by about 2%. Is that really that much of a dealbreaker? There was already a difference of about 8% between the fastest and slowest presets previously; this is now doubled, but more in the direction of going faster than going slower.

Quote
Well people can put their time on whatever they want, but it would clearly be better spent on things which bring real improvements to the table.
On average I spend 5 hours a week working on FLAC, unpaid. Are you saying I really need to put all that time into "real improvements", and I cannot once in a while do something fun? If everyone had only done things that were "necessary", we would still be stuck in the stone age.
Music: sounds arranged such that they construct feelings.

Re: Retune of FLAC compression levels

Reply #35
One of the supposed idiosyncrasies of FLAC was that decoding time is approx. the same no matter the encoding preset.

Your proposal breaks that assumption on purpose, which has been the standard since FLAC's creation.

No, this is wrong, even if the "approx." would now amount to a doubled difference due to -0 to -2 speeding up. Even if you don't notice the improvements - because they still decode approximately equally fast for practical purposes - don't knock them.

The thing is, -0 to -2 do decode slightly faster already and always have - because they are encoded using only the fixed predictors. And -0 and -3 decode faster because you don't have to convert joint stereo back to left/right. Up to -6, the prediction order of up to 8 (meaning a sample is calculated from up to the eight previous ones) is less computationally demanding than -7 and -8, where the order is at most 12. Even going up to 32 doesn't make that much of a difference.
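To illustrate what the order means at decode time, a toy Python sketch (the fixed-predictor coefficients are those of the FLAC format; the quantization shift of real LPC subframes is omitted):

Code:
# Fixed predictors (presets 0-2) use these hard-coded coefficients;
# LPC subframes instead carry quantized coefficients in the bitstream.
FIXED = {0: [], 1: [1], 2: [2, -1], 3: [3, -3, 1], 4: [4, -6, 4, -1]}

def restore(residual, warmup, coefs):
    # one multiply-add per coefficient per sample, so decode work
    # grows linearly with the predictor order
    out = list(warmup)
    for r in residual:
        pred = sum(c * out[-i - 1] for i, c in enumerate(coefs))
        out.append(pred + r)
    return out

# order-2 fixed prediction: pred = 2*x[n-1] - x[n-2]
print(restore([0, 1, -1], [10, 12], FIXED[2]))  # [10, 12, 14, 17, 19]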

The reasons you have "always" heard that decoding time does not depend on preset, are:
* When reading from a spinning drive, reading a larger -0 encoded file takes more time. The practical differences could go the other way back in the day.
* In any case, "any" FLAC decodes so fast you would hardly notice the difference - especially compared to the "symmetric" codecs.
* For the symmetric codecs (Monkey's, WavPack before -x, OptimFrog at the same time), compression time and decompression time would follow each other in a nearly 1:1 manner. Indeed, Monkey's still takes more time to decompress than to compress.
In comparison, FLAC depends so little upon encoding parameters that it was fair to say "the same".

Re: Retune of FLAC compression levels

Reply #36
@ktf : I used the build you attached in this thread.

Also, testing on a few files (the "*j*.wav" in my signature), new -8 seems to produce slightly larger files than new -8r6 -A subdivide_tukey(3), which isn't so strange - but the latter also gives larger files than 1.4.2 at -8. That must be due to the prediction order selection then? Or did you also tweak the selection of Rice partitioning and/or exponent?

Re: Retune of FLAC compression levels

Reply #37
That must be due to the prediction order selection then?
Yes. I guess I need to take a closer look at that.
Music: sounds arranged such that they construct feelings.

Re: Retune of FLAC compression levels

Reply #38
@ktf regarding -0 vs. -1 on Win32:

The -0 compression advantage over -1 is not huge, yet it is noticeable. This is with CD 44.1 kHz stereo material. All of the 1.4.2 git builds have identical results. :shrug:

I am at a loss for how to explain it.
"Something bothering you, Mister Spock?"

Re: Retune of FLAC compression levels

Reply #39
I am at a loss for how to explain it.
My tests show -1 compressing better than -0 on pretty much all audio material I have. So, on what kind of music do you see -0 having an advantage over -1? I'd like to investigate.
Music: sounds arranged such that they construct feelings.

Re: Retune of FLAC compression levels

Reply #40
CDDA results, likely confirming that the change in LPC order selection does matter.  Which is not to say it is a bad thing, given the speed-up.
Also -r8 seems to not be worth it.
Corpus: 38 CDs from my signature. Not reliably timed, for that I have to leave the computer untouched and run repeats.


Baseline is current -8.  Run with 1.4.2 (x64), it takes ten and a half minutes.  The first of these takes about the same time:

* -8r6 -A subdivide_tukey(4).  Not the first that I tested, but I put it here because it takes pretty much the same time as 1.4.2 at -8. 
And compresses 0.02% worse.

* retune -8r6  -A subdivide_tukey(3). That is, the same parameters as 1.4.2 -8, so the changes are in the LPC order selection algorithm.
Considerably faster: eight minutes.
Every file is bigger than 1.4.2 -8, but only slightly so: none hit the 0.1 percent difference mark. The classical music increases by 0.042 percent, the heavier rock by 0.026 percent, the "other" in between.

* old -7
Maybe half a minute faster than retune -8r6  -A subdivide_tukey(3).  Bigger files, except some classical music.  The classical section is 5 parts per million bigger.
That means it is about the speed of retune -8r5 -A subdivide_tukey(3), see the final comparison.


Then the impact of "-r":
Above I did the retune -8r6 -A subdivide_tukey(3) (= old -8 options).  Changing the "r" to 7 or 5 costs/saves half a minute (atop eight minutes).  Impact:
* The "r" to 7: ten parts per million.  One album as high as 0.011 percent. Eight CDs (six classical and two metal) ended up with exactly the same number of bytes.
* The "r" to 5:  62 parts per million.  One album (a different one!) up by 0.080 percent.
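(Aside, for readers wondering what "-r" actually controls: with partition order p, a subframe's residual is split into 2^p partitions, each getting its own Rice parameter. A toy Python cost model, assuming the block divides evenly and ignoring that real FLAC folds the warmup samples into the first partition:)

Code:
def rice_bits(vals, k):
    # unary quotient + stop bit + k remainder bits per value
    # (values assumed zigzag-mapped to non-negative integers)
    return sum((v >> k) + 1 + k for v in vals)

def partitioned_bits(residual, p):
    n = len(residual) >> p
    total = 0
    for i in range(1 << p):
        part = residual[i * n:(i + 1) * n]
        # 4-bit Rice parameter per partition, best k per partition
        total += 4 + min(rice_bits(part, k) for k in range(15))
    return total

Higher p lets quiet and loud stretches of a block get different parameters, at the cost of a few header bits per partition and more encoder search time.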

One final comparison:
Since retune -8r5 -A subdivide_tukey(3) is about the same time as 1.4.2 at -7, what is the difference?
retune -8r5 -A subdivide_tukey(3) produces slightly smaller files: 0.019 percent.
The difference is about zero for classical music; those files increase by 4 ppm (the impact of the -r is about 10 ppm, and the rest makes for -6).
Driving the difference in favor of the retune are Kraftwerk and Armand Van Helden; they are electronically driven and are the ones that benefit most from increasing the "-r". You'd expect then that they would lose from the -r5 etc. setting? No, they benefit even more from the subdivide_tukey(3) and whatever other changes you made.

Re: Retune of FLAC compression levels

Reply #41
Please disregard my prior comment on -0 vs. -1 as my own misinterpretation of the file ratio.

If there were any trends noticed, they might have been in files processed with LossyWAV (and the differences were very tiny, probably a byproduct of using -b 512).
"Something bothering you, Mister Spock?"


Re: Retune of FLAC compression levels

Reply #42
I loosely tested fixed predictors. Don't know if I can trust the timing differences, but fwiw: Block size 2304 looks good in this build too.

But I guess there must be some "executive decision on principles" under all this, and maybe there is little use in trying all sorts of timing tweaks until those are sorted out.

So @ktf , could you provide some input on the following - including "put off that test until next build is posted here" if applicable:

* -0/-1/-2: is it "1152 or 4096, nothing else"? (If so, I won't do more 2304s on more computers.) And is it clear that nobody needs -0 to be dual mono? (If so ... no need to test dual mono, it performs badly.)

(* -3: I kinda feel less worried over joint stereo here. If some device is crippled enough to need dual mono, then would it even use LPC frames?)

* -4 vs -5 vs -6: You are proposing to change -5. 1.4.2 at -6 isn't good IMHO, so proposing that as a new default does call for testing I think.
But someone in the system might have made a decision about what is more "sacred", if anything: the default setting, or the default being "-5"?
Arguably, if one wants to change the default setting, one might as well let the new default be "new -4" if that is more convenient. The only ones who should care are those who encode with an explicit "-5", and if they give explicit parameters they can read manuals.

* -678: 1.4.2 was a slowdown (the double precision fix), so to ask outright: does -8 have to become faster? Do you need to stop the complaints about 1.4 being slow? If that is a must, then it looks good - but if on the other hand you do not want complaints that "1.4.2 compressed better!!" (from people who don't calculate ratios and see that the change is not much to whine about), then more tweaking could be done.

* The -l32 on high resolution ... I don't think it is ready. Not as high as 32. Could do more testing, but if you are already reassessing the order selection algorithm, then I won't bother to do more testing yet. (An aside, your explanation at the bottom here with link to code seems to explain a jump between 16 and 17 - but the jump showed up between 15 and 16.)
Also the concern about "-8e", but that is just as much over "-8p", those seem to be quite comparable in time consumption.
A note on why prediction order matters: In the short 96/24 test above, -8pl17 and -8el16 took the same time, the latter improving 0.17% over the former, which in turn improved 0.06% over -8l17.
Long rambling but question was: are you already working on the order selection code?

* Is a "-9" off the table? Could even be undocumented in the short help. I was thinking:
"-9 (experimental): same as -8 for standard resolutions. For higher resolutions, employs settings likely to change in future versions."

Re: Retune of FLAC compression levels

Reply #43
The 20 albums, 18.6 GB of 24-bit 96/88.2 kHz material, that I once used for RG testing, encoded to -8p multithreaded.

1.4.2
4.52 min, 19,713,856,336 bytes

flac-retune
3.51 min, 19,747,528,116 bytes

The average is pretty much the same, but one album differs very much: 728 MB against 772 MB. The retune one is the big one.
Edit: did read one wrong, sorry...
Is troll-adiposity coming from feederism?
With 24bit music you can listen to silence much louder!

Re: Retune of FLAC compression levels

Reply #44
I might have found something relevant (after confusing myself over the fact that the retune defaults to using subdivide_tukey(2)).
I ran both 1.4.2 and the retune with -mr6 -l12 -A tukey(5e-1) and <same but -A subdivide_tukey(2) and (3)>.
And then ran flac -a and grepped the analysis for order=12, order=11, etc.

Found: going up to subdivide_tukey(2) (or higher) makes the retune build "avoid" the highest LPC orders. Numbers are line counts for
tukey(5e-1) resp. subdivide_tukey(2) resp. (3):
Order 12:
852336 resp. 857902 resp. 734070 for 1.4.2.
829521 resp. 539591 resp. 330633 for the retune, quite a reduction.
For orders 10, 11, 12 combined:
1693524 resp. 1705403 resp. 1690185 for 1.4.2
1662904 resp. 1269961 resp. 1293984 for the retune - again, quite a reduction.

I could calculate averages, but I am anyway not sure what to make of this - are those top predictors really significant? (I mean, "some of them are", but are only the insignificant ones dropped?)
But there is something going on. Not unlikely it is a good thing for speed ... ?
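The bookkeeping, roughly, in Python - assuming flac -a has written its analysis to a .ana file with one order= field per subframe line (the file name is a placeholder):

Code:
import re
from collections import Counter

counts = Counter()
with open("track.ana") as f:   # placeholder path
    for line in f:
        m = re.search(r"order=(\d+)", line)
        if m:
            counts[int(m.group(1))] += 1

print("order 12:", counts[12])
print("orders 10-12:", counts[10] + counts[11] + counts[12])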

Re: Retune of FLAC compression levels

Reply #45
Just to make it clear: what I was suggesting, with only focusing on 3 presets and mapping the presets up to either 2, 5 or 8, was to make it easier for ktf and the other FLAC devs to refine the presets.

I've done a test comparing the tuned version against stock 1.4.2, measuring the average bitrate over 16919 tracks. At preset 8 the tuned version encoded about an hour quicker than stock; the difference became less noticeable at lower presets.

Preset   Tuned       Stock
0        1012 kbps   1037 kbps
1        1010 kbps   1013 kbps
2        1009 kbps   1011 kbps
3         960 kbps    986 kbps
4         957 kbps    959 kbps
5         955 kbps    957 kbps
6         957 kbps    955 kbps
7         955 kbps    953 kbps
8         953 kbps    952 kbps
I'm double-checking the results for tuned presets 4-7 to make sure I didn't make a mistake.
Who are you and how did you get in here ?
I'm a locksmith, I'm a locksmith.

Re: Retune of FLAC compression levels

Reply #46
The main gripe I have with the current state of the presets is that -1 and -4 use -M, which only works in a streaming context, and adaptive mid side doesn't seem very useful anyway. Is -M still relevant in some context? If not, I propose we rip it out of the codebase and make -M an alias for -m. Adaptive midside adds a chunk of complexity for questionable benefit; IMO the juice is not worth the squeeze.

If -M stays, maybe just get rid of it from the presets.

Re: Retune of FLAC compression levels

Reply #47
and adaptive mid side doesn't seem very useful anyway
How is it not very useful?

It gets 90% of the improvement of 'exhaustive' mid-side at 10% of the encoding time.
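To spell out the difference for others reading along (a rough sketch, not libFLAC's actual code or heuristic):

Code:
# 'Exhaustive' mid-side (-m): fully encode every stereo mode and keep
# the smallest. Adaptive (-M): pick one mode up front from a cheap
# estimate, so only one full encode is done per frame.
MODES = ("left/right", "mid/side", "left/side", "right/side")

def exhaustive(frame, encode):
    # encode(frame, mode) returns the encoded frame, e.g. as bytes
    return min((encode(frame, m) for m in MODES), key=len)

def adaptive(frame, encode, estimate_bits):
    # estimate_bits is some cheap per-mode cost proxy (hypothetical),
    # e.g. derived from residual magnitudes of the previous frame
    best = min(MODES, key=lambda m: estimate_bits(frame, m))
    return encode(frame, best)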

Adaptive midside adds a chunk of complexity for questionable benefit
I think it is less than 50 lines of extra code? On 32,000 lines of code, that isn't much?
Music: sounds arranged such that they construct feelings.

Re: Retune of FLAC compression levels

Reply #48
Confirming an observation from @A_Man_Eating_Duck :
New -6 compresses (slightly!) worse than new -5.
But it seems to take less time. Which again suggests there is something about the handling of subdivide_tukey(2) (possibly in relation to the new order selection?).

The impact of course depends on material. On the classical music (among the 38 in my signature), -6 is 72 parts per million better. Not much, but at least not worse.
For the heavier material, it is 0.06 percent worse, but that is entirely due to Laibach. Remove that, and it is break even.
The "others" section makes -6 worse by 0.35 percent, which is quite a lot. It would have been 0.23 without Kraftwerk, which compresses nearly two percent worse.

I also FOR-looped -mb4096 -r7 -l <number>, with tukey(5e-1) and with subdivide_tukey(2), to see if -l 12 was just a weirdo thing. It isn't. Orders looped: 4 to 16 (this is CDDA, so I --lax'ed it).
* tukey(5e-1): the retune produces smaller files. All of the orders. (Though for classical music, if I ran the orders all the way to 18 / 19 / 20, it would be reversed.)
* subdivide_tukey(2): the retune produces larger files. All of the orders.

It seems that the retune does not get much compression benefit out of subdivide_tukey(2). The following list is how 1.4.2 benefits from going tukey(5e-1) --> subdivide_tukey(2). Orders 4 to 16:
0.18456%
0.18561%
0.18683%
0.18684%
0.18484%
0.18460%
0.18391%
0.18385%
0.18334%
0.18293%
0.18234%
0.18215%
0.18206%

Same numbers except retune tukey vs retune subdivide_tukey:
0.00116%
0.00107%
0.00102%
0.00099%
0.00097%
0.00095%
0.00093%
0.00092%
0.00091%
0.00091%
0.00091%
0.00090%
0.00089%
Percentages quoted with so many decimals that you can read them as "89 parts per million".

To say it does not get much compression benefit is not to say that it is useless - it seems to save time.

Re: Retune of FLAC compression levels

Reply #49
and adaptive mid side doesn't seem very useful anyway
How is it not very useful?

It gets 90% of the improvement of 'exhaustive' mid-side at 10% of the encoding time.

Adaptive midside adds a chunk of complexity for questionable benefit
I think it is less than 50 lines of extra code? On 32,000 lines of code, that isn't much?

It's only about 50 lines depending on how you count them, but more importantly 3 variables in FLAC__StreamEncoderPrivate, 1 variable in struct CompressionLevels, 1 variable in FLAC__StreamEncoderProtected, and API functions which would have to remain regardless at this point for compatibility. I'd argue that's a lot of state to maintain given how rarely it's used. FLAC encodes so quickly that anything but the quickest preset should probably use -m, IMO. If it's as good as you say, then I guess it's worth it just to improve -0; cheaply improving the compression ratio of the minimum recommended setting is a good thing.