HydrogenAudio

Lossless Audio Compression => FLAC => Topic started by: ktf on 2023-03-28 14:30:58

Title: Retune of FLAC compression levels
Post by: ktf on 2023-03-28 14:30:58
Hi all,

I'm considering retuning FLAC's compression levels to work more like TAK's. Where TAK has p0, p0e, p0m, p1, p1e, p1m etc., FLAC will have its compression levels similarly grouped: levels 0, 1 and 2 will decode the fastest, 3, 4 and 5 will decode slightly slower, and 6, 7 and 8 will decode the slowest. Within each group, a higher level gives higher compression and slower encoding. Beyond the grouping, material with a samplerate above 48kHz will have presets 6, 7 and 8 go to LPC order 32, and presets 0, 1 and 2 change blocksize from 1152 to 4096, like the other presets.

A more thorough explanation of what exactly changes is here: https://github.com/xiph/flac/pull/576 - along with some links to comparison results.

The main reason for this change is that I think it leaves fewer 'useless' presets; the presets are better differentiated.
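
For reference, these are the preset synonyms as documented for flac 1.4.x (see documentation_tools_flac.html), which this proposal starts from; the PR describes what each of them becomes:

    -0 = -l 0  -b 1152 -r 3
    -1 = -l 0  -b 1152 -M -r 3
    -2 = -l 0  -b 1152 -m -r 3
    -3 = -l 6  -b 4096 -r 4
    -4 = -l 8  -b 4096 -M -r 4
    -5 = -l 8  -b 4096 -m -r 5
    -6 = -l 8  -b 4096 -m -r 6 -A subdivide_tukey(2)
    -7 = -l 12 -b 4096 -m -r 6 -A subdivide_tukey(2)
    -8 = -l 12 -b 4096 -m -r 6 -A subdivide_tukey(3)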

CDDA input, encoding: [graph]

CDDA input, decoding: [graph]

Hi-res material, encoding (colors are swapped, sorry for that): [graph]

Hi-res material, decoding: [graph]

I'd like to hear some opinions on this change.
Title: Re: Retune of FLAC compression levels
Post by: ktf on 2023-03-28 15:35:31
Here's an x64 Windows binary with the proposed settings.

Also, I hadn't noticed that the decoding graphs lack x-axis labels. The reason is that the differences are so small that the plotting software doesn't put any labels there (on a log axis it only places labels at multiples of 10, with sublabels at x2 and x5). The difference between the left and right side of the plot is only 15% for both 16-bit and 24-bit audio, meaning preset -0 decodes about 15% faster than preset -8.
Title: Re: Retune of FLAC compression levels
Post by: Ziengaze on 2023-03-28 19:37:47
A quick test on my CDDA material shows a 26.5% decrease in encoding time and a 0.07% increase in filesize for flac -8 with this patch (https://github.com/xiph/flac/pull/576) applied.
Title: Re: Retune of FLAC compression levels
Post by: A_Man_Eating_Duck on 2023-03-28 19:49:09
Sorry, a bit of a newb question, but does FLAC need to have 8 different preset levels?

Could it be broken down to a more simple scheme?

Preset Fast = FLAC preset 2
Preset Medium = FLAC preset 5
Preset Best = FLAC preset 8

or

FLAC presets 0 and 1 get mapped to FLAC preset 2
FLAC presets 3 and 4 get mapped to FLAC preset 5
FLAC presets 6 and 7 get mapped to FLAC preset 8
Title: Re: Retune of FLAC compression levels
Post by: Aleron Ives on 2023-03-28 20:41:19
Changing the number of presets would break old FLAC frontends, which ktf probably doesn't want to do. It's cleaner to rebalance the existing nine levels than to change the number of levels.
Title: Re: Retune of FLAC compression levels
Post by: ktf on 2023-03-28 21:00:40
Sorry, a bit of a newb question, but does FLAC need to have 8 different preset levels?
It probably doesn't need to, but it seems most people like some choice. TAK has 15 presets; WavPack has 4 presets and 6 different modes of extra processing (which can be combined in any way); Monkey's Audio has 5 presets, and OptimFROG has 12. So FLAC isn't alone in this regard.

A quick test on my CDDA material shows a 26.5% decrease in encoding time and a 0.07% increase in filesize for flac -8 with this patch (https://github.com/xiph/flac/pull/576) applied.
Did you compare against unpatched current git, the latest release (1.4.2) or something else?
Title: Re: Retune of FLAC compression levels
Post by: john33 on 2023-03-28 21:02:57
I have to say that I rather think that this is a solution looking for a problem. The majority of users will simply use the default preset and most won't venture beyond that. For those few enthusiasts who wish to delve into the depths of what is possible, no doubt they enjoy playing with all the various current options. I'm not saying it's a bad idea, I'm just suggesting that it's not necessarily relevant.
Title: Re: Retune of FLAC compression levels
Post by: Porcus on 2023-03-28 21:48:19
Can those who run tests please let us know their CPUs?

Asking because there are known differences between Intel and AMD processors. For example, in ktf's tests (using a 1.3 version) (http://audiograaf.nl/losslesstest/revision%205/Average%20of%20all%20CDDA%20sources.pdf), -3 seems to be the fastest decoding, and that seems to be an AMD-specific thing. More differences became known as HA users tested 1.4.x performance.

Short "just a couple of files" testing here, will do more when I am back to my test corpus at my computer (which BTW is now running a re-encode to 1.4.2 on a hard drive which is nearly full - that's when 50 cents of hard drive space is worth a bit more):

* Going full -l32 on high sampling rates:
My first reaction was that this will shock those who use "-8e": I checked one high-resolution file that now takes 4x the time of 1.4.2. That is even though the reduced number of apodization functions speeds up -8el12 to half the time of 1.4.2.

But the thing is, -e still has a mission at some high sampling rates - that is the problem. At least it had on 1.4.2; I'll test this one.
If there is a way to get a more clever semi-exhaustive search that tries the right thing, then maybe change the meaning of "-e"? (And introduce a "-E" for "YES I have the patience".)

* -0 with stereo decorrelation:
You probably thought over the following scenario, but ... what if somebody runs an on-the-fly encoding with a decoder that can only do dual mono? (I guess they won't use a new flac.exe anyway.) And 16 bit word length (cf. the limitations you put on 32-bit signals) - would anyone want dual mono as a point in itself? If so, they might specifically have chosen a preset that does dual mono.
If - and that's a big "if" - there is a point to have a dual mono preset, then what else than -0? 

* Going full 4096 is probably not a bad idea. Will test. Did test on 1.4.2 (https://hydrogenaud.io/index.php/topic,123025.msg1018512.html#msg1018512), with some slight surprises. One was a penalty for 4096 over 3172.

(* Old -6 as new -5? I've argued that -6 wasn't much use except for those who wanted the lightweight decoding of -l 8. Will test.)

* A_Man_Eating_Duck certainly has a point. I would ask how many use anything but -0, -5, -8 or -8-with-some-additional-tweaks, but actually Bandcamp uses -6... which I've said bad things about. But hey, as long as -1/-2/-3/-4/-6/-7 are around already, it doesn't hurt to give them some sensible meaning. (If that's a solution looking for a problem, then the problem was set up twenty years ago.)
Title: Re: Retune of FLAC compression levels
Post by: Ziengaze on 2023-03-28 21:51:49
Did you compare against unpatched current git, the latest release (1.4.2) or something else?
unpatched git master 9ee21a0
edit: on a Ryzen 1800
Title: Re: Retune of FLAC compression levels
Post by: A_Man_Eating_Duck on 2023-03-28 22:01:46
Changing the number of presets would break old FLAC frontends, which ktf probably doesn't want to do. It's cleaner to rebalance the existing nine levels than to change the number of levels.
Using the second method wouldn't break presets; it just means that if preset 1 is used, it will actually encode with preset 2, and if preset 6 is used, it will encode with preset 8. It also aligns with the graphs.

This would mean there are fewer presets that you need to adjust the tuning for.
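
Something like this trivial aliasing sketch, purely to illustrate the mapping (not actual flac code):

    /* Keep the familiar -0..-8 flags for compatibility, but
     * collapse them onto three actually-tuned levels. */
    static int alias_preset(int level)
    {
        if (level <= 2) return 2;   /* "fast"   */
        if (level <= 5) return 5;   /* "medium" */
        return 8;                   /* "best"   */
    }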
Title: Re: Retune of FLAC compression levels
Post by: NetRanger on 2023-03-28 22:59:08
I have to say that I rather think that this is a solution looking for a problem. The majority of users will simply use the default preset and most won't venture beyond that. For those few enthusiasts who wish to delve into the depths of what is possible, no doubt they enjoy playing with all the various current options. I'm not saying it's a bad idea, I'm just suggesting that it's not necessarily relevant.

Have to agree with john on this.

Personally... all this hunting for rather minimal gains of this and that... it's a waste of time, IMO. Things as they are work just fine.
Title: Re: Retune of FLAC compression levels
Post by: C.R.Helmrich on 2023-03-28 23:40:53
Me too. Though I think preset 2 should be retuned so that, if possible, its encoding speed and compression gain end up somewhere between those of presets 1 and 3.

Chris
Title: Re: Retune of FLAC compression levels
Post by: ktf on 2023-03-29 06:38:30
Personally... all this hunting for rather minimal gains of this and that... it's a waste of time, IMO. Things as they are work just fine.
I don't think ~ 0.3% improvement on average (-8 on hi-res material) with no slowdown is in any way minimal? Also, -0 improves more than 1% with no slowdown. Lots of topics here discuss -p and -e, and those usually result in far smaller gains, at the cost of quite some slowdown.

I agree it might not be particularly necessary, but I don't think it is a waste of time.
Title: Re: Retune of FLAC compression levels
Post by: bennetng on 2023-03-29 08:59:09
If the goal is to keep the existing 9 presets, I think I would do something like this:

0-2: same as the existing presets, but change -b to 2304.
6: same as the existing preset, but increase -l to 10.

Other presets remain unchanged, but for >48k material increase preset 7 to -l16 and preset 8 to -l20, and use -b8192 for >=176.4k material.

-l is in general quite effective in reducing file size, but higher values also reduce decoding speed, especially at higher sample rates like DXD.
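
The decoding cost of -l is easy to see from the shape of the decoder's inner loop: restoring each sample takes one multiply-accumulate per coefficient, so order 32 does roughly four times the work per sample of order 8. A simplified sketch of FLAC-style LPC restoration (not the actual libFLAC code):

    #include <stdint.h>

    /* Work per sample is proportional to the prediction order,
     * which is why higher -l values decode more slowly. */
    static void lpc_restore(int32_t *data, int n, const int32_t *coefs,
                            int order, int shift)
    {
        for (int i = order; i < n; i++) {
            int64_t sum = 0;
            for (int j = 0; j < order; j++)     /* one MAC per coefficient */
                sum += (int64_t)coefs[j] * data[i - 1 - j];
            data[i] += (int32_t)(sum >> shift); /* residual + prediction */
        }
    }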

As for APE, I never think "insane" is useful.
Title: Re: Retune of FLAC compression levels
Post by: forart.eu on 2023-03-29 10:03:07
Hi all.

Instead of retuning presets, it would be much more helpful to let the (command-line) encoder find the optimal "tune" for each machine, according to the user's choices (speed, compression or both).

Hope that inspires !
Title: Re: Retune of FLAC compression levels
Post by: Porcus on 2023-03-29 10:05:15
For standard resolution, these retunings are not for size - apparently! - so that part of the criticism should be revised... at least to target the setting one should think twice before altering, namely the default. The proposed change will make -5 slower (no big deal if you compare the new -5 on new hardware to fifteen years ago!), but I would scratch my head for a while over slowing down the default by some forty percent. It also becomes slower than the new -7.
It is also a stretch to re-assign the default to "new -4 = old -5", but I think it should be considered. That would also keep the "logic" of the triplets 012, 345, 678, where the highest of each is for those who are willing to wait. The default would be the "middle of the middle".


-0 to -2 serve their very special purposes, namely to enforce fixed predictors. Whether or not that is actually fastest, there are compatibility/interoperability reasons to keep them that way.
Also, in the IETF draft, ktf & co will recommend that whenever one wants maximum compatibility with decoders that suck, one should stick to 1152 or 4096 samples in a block. So while 2304 seems to improve things, there is a case against it - at least for -0.
Going 4096 but keeping the -r will increase the partition size from 144 to 512 (a partition is blocksize/2^r samples: 1152/2^3 = 144, 4096/2^3 = 512), whether or not there is any reason to change that... I see ktf proposes to keep -r3 for -0, but for -1 to go to -r5 (a partition becomes 4096/2^5 = 128 samples, which is close to 144).
So, given the purpose of -0 to -2:
-0 is for fast encoding in a failsafe way. (Which is why I question the stereo decorrelation.)
-2 is to make the "best" out of fixed predictors. From the charts it looks like the return on effort is very small, but it does serve that particular purpose.
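
(For reference, the fixed predictors that -0 to -2 are restricted to are just the five polynomial predictors of order 0 to 4 from the format spec; a sketch:)

    #include <stdint.h>

    /* The five fixed polynomial predictors (orders 0-4) defined by
     * the FLAC format. Presets -0 to -2 only ever choose among
     * these, which is what keeps their decoding cheap and their
     * frames maximally compatible. */
    static int32_t fixed_predict(const int32_t *d, int i, int order)
    {
        switch (order) {
        case 0: return 0;
        case 1: return d[i-1];
        case 2: return 2*d[i-1] - d[i-2];
        case 3: return 3*d[i-1] - 3*d[i-2] + d[i-3];
        case 4: return 4*d[i-1] - 6*d[i-2] + 4*d[i-3] - d[i-4];
        }
        return 0;
    }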
Title: Re: Retune of FLAC compression levels
Post by: NetRanger on 2023-03-29 10:45:03
Personally... all this hunting for rather minimal gains of this and that... it's a waste of time, IMO. Things as they are work just fine.
I don't think ~ 0.3% improvement on average (-8 on hi-res material) with no slowdown is in any way minimal? Also, -0 improves more than 1% with no slowdown. Lots of topics here discuss -p and -e, and those usually result in far smaller gains, at the cost of quite some slowdown.

I agree it might not be particularly necessary, but I don't think it is a waste of time.

I've personally never bothered with testing many switches for improving encoding speed and the size of the output. If the encoder/preset works as it should, then it's fine with me. I've only used preset -6 or -8 on my encodes for the last 4-5+ years.
Title: Re: Retune of FLAC compression levels
Post by: Porcus on 2023-03-29 10:54:29
If the encoder/preset works as it should, then it's fine with me. I've only used preset -6 or -8 on my encodes for the last 4-5+ years.
That is the reason the developers set reasonable presets: so you don't have to tweak settings further ;)

(Can I ask why -6? Because it was the default in version 0.x? It's not long since I saw someone still making that assumption from memory.)
Title: Re: Retune of FLAC compression levels
Post by: NetRanger on 2023-03-29 14:16:12
If the encoder/preset works as it should, then it's fine with me. I've only used preset -6 or -8 on my encodes for the last 4-5+ years.
That is the reason the developers set reasonable presets: so you don't have to tweak settings further ;)

(Can I ask why -6? Because it was the default in version 0.x? It's not long since I saw someone still making that assumption from memory.)

To be honest... I don't know really why I use -6. It could just be something I set at one time and then just kept. But I would say I use -8 on 95% of my encodes these days.
Title: Re: Retune of FLAC compression levels
Post by: Brand on 2023-03-29 14:47:52
A comparison between the 1.4.2 release from October and the retune version posted here.
~12GB of 44.1k/16bit files with an Intel 10600K on Windows (through Foobar).

version      enc. speed   size
1.4.2 -8     1310x        11,792,799
1.4.2 -7     1677x        11,803,842
retune -8    1794x        11,805,269

I ran every test a couple of times, although encoding speed was quite consistent.
Decoding speeds varied a bit more between runs, but I didn't see any significant difference there between the versions. And I just used Foobar's decoding speed test, with whatever flac version it uses for that.
Title: Re: Retune of FLAC compression levels
Post by: Wombat on 2023-03-29 15:26:47
version      enc. speed   size
1.4.2 -8     1310x        11,792,799
1.4.2 -7     1677x        11,803,842
retune -8    1794x        11,805,269
Similar results here with CD material. I don't think -8 needs speed-ups in trade for compression.
Title: Re: Retune of FLAC compression levels
Post by: Markuza97 on 2023-03-29 18:22:57
I would personally go even further.

Map all presets from 0-5 into a single preset. Use preset 5 as the base and optimize it as much as you can without sacrificing speed.
Map all presets from 6-8 into a single preset. Use preset 8 as the base and optimize it as much as you can for higher compression while still keeping a sane encoding speed.

Edit: Before you start screaming at me, have you ever seen anybody use presets other than 5 (default) and 8 (best)?
Title: Re: Retune of FLAC compression levels
Post by: MrRom92 on 2023-03-29 18:42:07
Maybe we could keep the numbered levels as-is but also add new, non-numbered presets in addition to those levels so that anyone interested can use them, and people attached to their old presets for whatever reason won’t miss them.


The deprecated -completelyimpracticalcompression (or whatever it was, something along those lines) comes to mind
Title: Re: Retune of FLAC compression levels
Post by: Porcus on 2023-03-29 20:06:24
Edit: Before you start screaming at me, have you ever seen anybody use presets other than 5 (default) and 8 (best)?
Bandcamp uses -6.
And I am pretty sure that -0 is in use, for those very particular reasons.
Title: Re: Retune of FLAC compression levels
Post by: Replica9000 on 2023-03-29 21:55:32
Edit: Before you start screaming at me, have you ever seen anybody use presets other than 5 (default) and 8 (best)?

The difference between presets -0 and -8 is probably about 3% more compression at the cost of 3-5x longer encoding time.  People with older systems might not think the increase in time is worth it.
Title: Re: Retune of FLAC compression levels
Post by: C.R.Helmrich on 2023-03-29 22:35:55
I don't think ~ 0.3% improvement on average (...) with no slowdown is in any way minimal? Also, -0 improves more than 1% with no slowdown. Lots of topics here discuss -p and -e, and those usually result in far smaller gains, at the cost of quite some slowdown.

I agree it might not be particularly necessary, but I don't think it is a waste of time.
Agreed, some of your new presets are quite a bit more efficient (I hadn't noticed that earlier to its full extent, sorry about that), but why this zig-zag speed/compression-ratio curve for the encoder?

When, as you wrote, all FLAC presets decode at very similar (and insanely fast) speeds, why not design a convex-hull preset curve? E.g., take your retuned presets 0, 1, 3, 6, 7, and 8 (or maybe a bit more compression efficiency for 8) and find new presets 2, 4, and 5 which lie along the curve, i.e. between the neighboring presets in both encoding speed and compression ratio.

By making that change, you wouldn't even have to ask for comments here; I think everyone would accept it. Especially since you already managed to achieve both speed and efficiency improvements on some presets with your retuning.

Examples for (mostly) convex-hull preset curves in video coding:

https://spin-digital.com/wp-content/uploads/2023/02/compression_efficiency_bd-rate_encoders_4K-768x406.
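
To make that concrete: a preset deserves its slot only if no other preset beats it on both axes at once. A minimal sketch of that check, with made-up illustrative numbers (not measurements from this thread):

    #include <stdio.h>

    /* Hypothetical (encoding time, compression ratio) per preset. */
    struct point { int preset; double enc_time; double ratio; };

    /* A preset is off the frontier if some other preset is at least
     * as fast and strictly smaller; those are the retune candidates. */
    int main(void)
    {
        struct point p[] = { /* illustrative numbers only */
            {0, 1.0, 0.590}, {1, 1.2, 0.585}, {2, 1.5, 0.586},
            {3, 2.0, 0.565}, {4, 2.6, 0.560}, {5, 3.4, 0.558},
            {6, 4.0, 0.559}, {7, 5.5, 0.555}, {8, 8.0, 0.554},
        };
        int n = sizeof p / sizeof p[0];
        for (int i = 0; i < n; i++) {
            int dominated = 0;
            for (int j = 0; j < n; j++)
                if (j != i && p[j].enc_time <= p[i].enc_time
                           && p[j].ratio  <  p[i].ratio)
                    dominated = 1;
            printf("preset %d: %s\n", p[i].preset,
                   dominated ? "dominated (retune candidate)"
                             : "on the frontier");
        }
        return 0;
    }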

Chris
Title: Re: Retune of FLAC compression levels
Post by: Destroid on 2023-03-29 23:46:27
why this zig-zag speed/compression ratio curve for the encoder?
Yes, it resembles TAK's -p#e and -p#m settings, doesn't it?

I suppose I might be first to mention that FLAC 1.4.2 -0 compresses better than -1 on a lot of material... and faster.
Title: Re: Retune of FLAC compression levels
Post by: bennetng on 2023-03-30 04:55:48
Also, in the IETF draft, ktf & co will recommend that whenever one wants maximum compatibility with decoders that suck, one should stick to 1152 or 4096 samples in a block. So while 2304 seems to improve things, there is a case against it - at least for -0.
The IETF document actually states that 2304 is a common blocksize.
https://www.ietf.org/archive/id/draft-ietf-cellar-flac-05.html#name-blocksize-bits
Are there any real examples (name and version of hardware or software decoders) that work with 1152 and 4096 but not with 2304?

Here are some previous tests regarding the use of different blocksizes with the lower presets:
https://hydrogenaud.io/index.php/topic,123025.msg1018543.html#msg1018543
Title: Re: Retune of FLAC compression levels
Post by: Porcus on 2023-03-30 08:19:49
@bennetng : you are right, I cannot read. (My tests are in #263 above your link, but those would have to be redone now.)

@C.R.Helmrich on convex graphs: decoding complexity is also a point. So 0, 1, 2 work under a restriction to fixed-only predictors, and -2 is not "good" in any other sense. And 6, 7, 8 go to the highest prediction order. So the "relevant" test for convexity is within each triplet: 1/4/7 below the 0&2 midpoint / 3&5 midpoint / 6&8 midpoint.

@Destroid : interesting - are you using a spinning drive, where the reduced file size also reduces write time?
Title: Re: Retune of FLAC compression levels
Post by: Porcus on 2023-03-30 09:15:38
I am not confident that the -l 32 for high resolutions is "ready yet".  For testing I picked nine 96/24 signals, trying to get around a quarter of an hour per file by merging songs from the same source, and while most behave nicely and monotonically, the exceptions pulled the average quite a lot on this small corpus, where -8 -l <N> ended up smallest at N=19.
Here are the two bad guys with the total:

             Craters track   TTP: Tx20 EP   All 145 minutes
duration%    10.35%          12.24%         100.00%
-8e          52.43%          52.51%         53.75%
-8 -l 8      56.26%          57.80%         55.18%
-8 -l 9      54.74%          56.14%         54.71%
-8 -l 10     53.88%          55.26%         54.44%
-8 -l 11     53.42%          54.62%         54.26%
-8 -l 12     53.32%          54.53%         54.21%
-8 -l 13     53.23%          54.31%         54.15%
-8 -l 14     53.15%          54.05%         54.08%
-8 -l 15     53.13%          53.92%         54.08%
-8 -l 16     53.09%          53.78%         54.04%
-8 -l 17     53.09%          53.62%         54.04%
-8 -l 18     53.10%          53.54%         54.03%
-8 -l 19     53.09%          53.45%         54.02%
-8 -l 20     53.11%          53.40%         54.02%
-8 -l 21     53.11%          53.45%         54.03%
-8 -l 22     53.12%          53.40%         54.03%
-8 -l 23     53.14%          53.52%         54.05%
-8 -l 24     53.15%          53.57%         54.06%
-8 -l 25     53.16%          53.66%         54.07%
-8 -l 26     53.19%          53.74%         54.09%
-8 -l 27     53.22%          53.76%         54.10%
-8 -l 28     53.25%          53.85%         54.12%
-8 -l 29     53.27%          53.84%         54.12%
-8 -l 30     53.28%          53.90%         54.13%
-8 -l 31     53.29%          53.92%         54.13%
-8 -l 32     53.30%          53.95%         54.14%
The two tracks:
* Admittedly, the Tx20 EP ( https://teaparty.com/tx20 ) was one signal I knew had had some surprises. But I did not remember precisely what surprises, except that they made a big impact. After testing, going to https://hydrogenaud.io/index.php/topic,120158.msg1014183.html#msg1014183 and checking: it did improve well going to -l 13 and 14...
* Craters: Batagaika. Instrumental and distorted, picked from https://doomedandstoned.bandcamp.com/album/doomed-stoned-the-instrumentalists-vol-i for a quite arbitrary reason: I was looking for something around 15 minutes, and this is 15:01. (Oh, in addition to being 96/24 and available for free in case anyone is interested.)

Apart from that:
* Merged to one track: 15 minutes jazz from the now-defunct 2L test bench
* Merged to one track: 19 minutes classical from the same
* Merged to one track: 16 minutes from Kimiko Ishizaka's three free Bach piano recordings, https://kimikoishizaka.bandcamp.com
* Kayo Dot: The Second Operation (Lunar Water), 13 minutes (selected for being longest on album, the album was included in the tests at the HA link above)
* Cult of Luna: Lights on the Hill, 15 minutes (also selected for being longest on album - this is however not the same Cult of Luna album as I used in the above HA link)
* Hooded Menace: Elysium of Dripping Death (selected for being longest 96/24 on this free compilation https://relapsesampler.bandcamp.com/album/relapse-sampler-2015 )
* The Stargazer's Assistant: Birth of Decay, 18 minutes (the only 96/24 on this free compilation: https://houseofmythology.bandcamp.com/album/watch-pray-five-years-of-studious-decrepitude . Also, it isn't metal, which the corpus has enough of - it is more dark ambient / electronic.)
* And a tenth signal, actually, but 88.2/24 and only three minutes in total, so I didn't bother to take it out of the total when I wrote "nine" 96/24 above. Anal Trump: That Makes Me Smart! (https://analtrump.bandcamp.com/album/that-makes-me-smart). "You Suffer", anyone? Also weird results.


Small corpus of not-exactly-chartbusters, but maybe tells a story.

Also, an observation: some signals "benefit from" -l <odd number>, some from -l <even number>. Hm.
Title: Re: Retune of FLAC compression levels
Post by: regor on 2023-03-30 10:40:56
Quote
FLAC will have its compression levels similarly grouped: levels 0, 1 and 2 will decode the fastest, 3, 4 and 5 will decode slightly slower, and 6, 7 and 8 will decode the slowest.
One of the supposed idiosyncrasies of FLAC was that decoding time is approximately the same no matter the encoding preset.

Your proposal breaks that assumption on purpose - an assumption that has been the standard since FLAC's creation. I don't really see the point of changing that, which would mean having to change most websites referencing FLAC usage (even if it's just a 15% difference, unless I didn't understand it right).

If compression and decoding time may be improved, great.

But I agree with some of the comments here that making a 0.3% compression or small encoding-time improvement makes no sense when it is 2023 and official FLAC remains single-threaded (and let's not talk about GPU acceleration). It has been mostly the same for years... and while it's always great to optimize current code or presets, it still has almost zero impact on real usage.

Well people can put their time on whatever they want, but it would clearly be better spent on things which bring real improvements to the table.
Title: Re: Retune of FLAC compression levels
Post by: bennetng on 2023-03-30 12:36:03
I am not confident that the -l 32 for high resolutions is "ready yet".
This seems to coincide with some of my previous tests about the diminishing (if not negative) returns of -l32.

Higher -l for hi-res may work better with subdivide_tukey(6) or subdivide_tukey(5) + three simple windows of your choice, and perhaps with -r8 and -b8192.
Title: Re: Retune of FLAC compression levels
Post by: VEG on 2023-03-30 13:24:21
I would keep just -0 and -8, where everything between -1 and -8 is treated as -8, this way I wouldn't need to recompress every FLAC file I download from bandcamp.com =)
Title: Re: Retune of FLAC compression levels
Post by: Destroid on 2023-03-30 13:53:38
@Destroid : Interesting? Are you using a spinning drive where the reduced filesize also reduces write time?
I was reporting on the 32-bit compiles, on a RAM drive. Sorry not to have specified that before. Anyway, I use a script to bench all modes (16-bit, stereo, 44kHz material).

Porcus is the real benchmarker IMO :)

Edit: typo, and this is Win32-specific. (Also, if you want GCC vs. Intel Win32 numbers [just for giggles], let me know and I'll PM them.)
Title: Re: Retune of FLAC compression levels
Post by: ktf on 2023-03-30 13:54:37
version      enc. speed   size
1.4.2 -8     1310x        11,792,799
1.4.2 -7     1677x        11,803,842
retune -8    1794x        11,805,269
Similar results here with CD material. I don't think -8 needs speed ups in trade for compression.
For my corpus, compression did in fact improve instead of worsen. I'll reconsider.

Maybe we could keep the numbered levels as-is but also add new, non-numbered presets in addition to those levels so that anyone interested can use them, and people attached to their old presets for whatever reason won’t miss them.
Most people are asking for fewer presets, not more  :))

Agreed, some of your new presets are quite a bit more efficient (hadn't noticed that earlier in its full extent, sorry about that), but why this zig-zag speed/compression ratio curve for the encoder?
This is how TAK works: each group of 3 presets (each 'zig', if you will) belongs to a certain decoder speed. Only, the differences in decoding speed are really small. That is the case with TAK as well, though.

I could consider really optimizing for encoding speed versus compression (convex hull), ignoring decoder speed. The differences there are only very small after all.

I suppose I might be first to mention that FLAC 1.4.2 -0 compresses better than -1 on a lot of material... and faster.
That is something that should have been fixed (https://github.com/xiph/flac/commit/f44d5967fd6779c079a60366c36dfa57b94d296f) with 1.4.0. Are you sure you didn't see that behaviour with older versions of FLAC? If you indeed have a lot of material where FLAC 1.4.2 -1 produces larger files than -0, I'd like to know what kind of audio that is. I might be able to tune things a little better with that knowledge.

I am not confident that the -l 32 for high resolutions is "ready yet".
With which compile did you test? I made a slight change to the order-guessing algorithm for this proposal.

Quote
FLAC will have its compression levels similarly grouped: levels 0, 1 and 2 will decode the fastest, 3, 4 and 5 will decode slightly slower, and 6, 7 and 8 will decode the slowest.
One of the supposed idiosyncrasies of FLAC was that decoding time is approximately the same no matter the encoding preset.

Your proposal breaks that assumption on purpose - an assumption that has been the standard since FLAC's creation.
So.... you have a problem with me making decoding of certain presets *faster*, because an old rule of thumb is now a little bit less true? This proposal speeds up decoding of presets 0, 1 and 2 by about 8% (assuming MD5 is checked; otherwise it is more) and slows down presets 6, 7 and 8 by about 2% for audio with a samplerate above 48kHz. Is that really that much of a dealbreaker? There was already a difference of about 8% between the fastest and slowest presets; this is now doubled, but more in the direction of going faster than going slower.

Quote
Well people can put their time on whatever they want, but it would clearly be better spent on things which bring real improvements to the table.
On average I spend 5 hours a week working on FLAC, unpaid. Are you saying I really need to put all that time into "real improvements", and cannot once in a while do something fun? If everyone had only done things that were "necessary", we would still be stuck in the stone age.
Title: Re: Retune of FLAC compression levels
Post by: Porcus on 2023-03-30 14:32:46
One of the supposed idiosyncrasies of FLAC was that decoding time is approximately the same no matter the encoding preset.

Your proposal breaks that assumption on purpose - an assumption that has been the standard since FLAC's creation.

No, this is wrong, even if the "approx." would now amount to a doubled difference due to -0 to -2 speeding up. Even if you don't notice the improvements - because for practical purposes the presets still decode approximately equally fast - don't knock them.

The thing is, -0 to -2 already decode slightly faster, and always have - because they are encoded using only the fixed predictors. And -0 and -3 decode faster because you don't have to convert joint stereo back to independent left/right. Up to -6, the prediction order of at most 8 (meaning a sample is calculated from up to the eight previous ones) is less computationally demanding than -7 and -8, where the order is at most 12. Even going up to order 32 doesn't make that much of a difference.

The reasons you have "always" heard that decoding time does not depend on the preset are:
* When reading from a spinning drive, reading a larger -0-encoded file takes more time. Back in the day, the practical differences could go the other way.
* In any case, "any" FLAC decodes so fast you would hardly notice the difference - especially compared to the "symmetric" codecs.
* For the symmetric codecs (Monkey's, WavPack before -x, OptimFROG at the same time), compression time and decompression time follow each other in a nearly 1:1 manner. Indeed, Monkey's still takes more time to decompress than to compress.
In comparison, FLAC depends so little upon the encoding parameters that it was fair to say "the same".
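
(For reference, the joint-stereo reconstruction that the non-decorrelated presets skip is only a couple of adds and shifts per sample; a sketch following the format's mid/side definition, not the actual libFLAC code:)

    #include <stdint.h>

    /* Mid/side -> left/right reconstruction as defined by the FLAC
     * format: side = L - R, mid = (L + R) >> 1. The bit lost by the
     * mid shift equals the side channel's LSB, so it can be restored. */
    static void restore_mid_side(int32_t *mid, int32_t *side, int n)
    {
        for (int i = 0; i < n; i++) {
            int32_t lr_sum = (mid[i] << 1) | (side[i] & 1); /* L + R */
            mid[i]  = (lr_sum + side[i]) >> 1;              /* left  */
            side[i] = (lr_sum - side[i]) >> 1;              /* right */
        }
    }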
Title: Re: Retune of FLAC compression levels
Post by: Porcus on 2023-03-30 15:21:38
@ktf : I used the build you attached in this thread.

Also, testing on a few files (the "*j*.wav" in my signature), the new -8 seems to produce slightly larger files than new -8r6 -A subdivide_tukey(3), which isn't so strange - but the latter also gives larger files than 1.4.2 at -8. That must be due to the prediction order selection then? Or did you also tweak the selection of Rice partitioning and/or exponent?
Title: Re: Retune of FLAC compression levels
Post by: ktf on 2023-03-30 15:51:11
That must be due to the prediction order selection then?
Yes. I guess I need to take a closer look at that.
Title: Re: Retune of FLAC compression levels
Post by: Destroid on 2023-03-30 16:26:57
@ktf regarding -0 vs. -1 on Win32:

The compression advantage of -0 over -1 is not huge, yet it is noticeable. This is with CD 44kHz stereo material. All of the 1.4.2 git builds have identical results. :shrug:

I am at a loss for what to explain.
Title: Re: Retune of FLAC compression levels
Post by: ktf on 2023-03-30 18:02:12
I am at a loss for what to explain.
My tests show -1 compressing better than -0 on pretty much all audio material I have. So, on what kind of music do you see -0 having an advantage over -1? I'd like to investigate.
Title: Re: Retune of FLAC compression levels
Post by: Porcus on 2023-03-30 23:08:26
CDDA results, likely confirming that the change in LPC order selection does matter.  Which is not to say it is a bad thing, given the speed-up.
Also, -r8 seems not to be worth it.
Corpus: 38 CDs from my signature. Not reliably timed; for that I would have to leave the computer untouched and run repeats.


Baseline is current -8.  Run with 1.4.2 (x64), it takes ten and a half minutes.  About the same time as the first of these:

* -8r6 -A subdivide_tukey(4).  Not the first that I tested, but I put it here because it takes pretty much the same time as 1.4.2 at -8.
And compresses 0.02% worse.

* retune -8r6 -A subdivide_tukey(3). That is, the same parameters as 1.4.2 -8, so the changes are in the LPC order selection algorithm.
Considerably faster: eight minutes.
Every file is bigger than with 1.4.2 -8, but only slightly so: none hits the 0.1 percent difference mark. The classical music increases by 0.042 percent, the heavier rock by 0.026 percent, the "other" in between.

* old -7
Maybe half a minute faster than retune -8r6 -A subdivide_tukey(3).  Bigger files, except for some classical music; the classical section is 5 parts per million bigger.
That means it is about the speed of retune -8r5 -A subdivide_tukey(3); see the final comparison.


Then the impact of "-r":
Above I did retune -8r6 -A subdivide_tukey(3) (= old -8 options).  Changing the "r" to 7 or 5 costs/saves half a minute (on top of eight minutes).  Impact:
* "r" to 7: ten parts per million.  One album as high as 0.011 percent. Eight CDs (six classical and two metal) ended up with exactly the same number of bytes.
* "r" to 5: 62 parts per million.  One album (a different one!) up by 0.080 percent.

One final comparison:
Since retune -8r5 -A subdivide_tukey(3) takes about the same time as 1.4.2 at -7, what is the difference?
retune -8r5 -A subdivide_tukey(3) produces slightly smaller files: 0.019 percent.
The difference is about zero for classical music; those files increase by 4 ppm (the impact of the -r is about 10 ppm, and the rest makes for -6).
Driving the difference in favor of the retune are Kraftwerk and Armand Van Helden; they are electronically driven and are the ones that benefit most from increasing the "-r". You'd expect them to lose from the -r5 setting, then? No, they benefit even more from subdivide_tukey(3) and whatever other changes you made.
Title: Re: Retune of FLAC compression levels
Post by: Destroid on 2023-03-31 08:54:31
Please disregard my prior comment on -0 vs. -1; it was my own misinterpretation of the file ratio.

If there was any trend, it might have been in files processed with LossyWAV (and the differences were very tiny and probably a byproduct of using -b 512).
Title: Re: Retune of FLAC compression levels
Post by: Porcus on 2023-03-31 16:39:17
I loosely tested fixed predictors. I don't know if I can trust the timing differences, but FWIW: blocksize 2304 looks good in this build too.

But I guess there must be some "executive decision on principles" under all this, and maybe there is little use in trying all sorts of timing tweaks until those are sorted out.

So @ktf , could you provide some input on the following - including "put off that test until next build is posted here" if applicable:

* -0/-1/-2: is it "1152 or 4096, nothing else"? (If so, I won't do more 2304s on more computers.) And is it clear that nobody needs -0 to be dual mono? (If so... no need to test dual mono; it performs badly.)

(* -3: I kinda feel less worried about joint stereo here. If some device is crippled enough to need dual mono, would it even use LPC frames?)

* -4 vs -5 vs -6: you are proposing to change -5. 1.4.2 at -6 isn't good IMHO, so proposing that as the new default does call for testing, I think.
But someone in the system might have decided what is more "sacred", if anything: the default setting, or the default being "-5"?
Arguably, if one wants to change the default setting, one might as well let the new default be "new -4" if that is more convenient. The only ones who should care are those who encode with an explicit "-5", and if they give explicit parameters they can read manuals.

* -678: 1.4.2 was a slowdown (the double-precision fix), so to ask outright: does -8 have to become faster? Do you need to stop the complaints about 1.4 being slow? If that is a must, then it looks good - but if, on the other hand, you do not want complaints that "1.4.2 compressed better!!" (from people who don't calculate ratios and see that the change is not much to whine about), then more tweaking could be done.

* The -l32 on high resolutions... I don't think it is ready. Not as high as 32. I could do more testing, but if you are already reassessing the order selection algorithm, then I won't bother yet. (An aside: your explanation at the bottom here (https://hydrogenaud.io/index.php/topic,123025.msg1024145.html#msg1024145), with a link to code, seems to explain a jump between 16 and 17 - but the jump showed up between 15 and 16.)
Also the concern about "-8e", but that applies just as much to "-8p"; those seem to be quite comparable in time consumption.
A note on why prediction order matters: in the short 96/24 test above, -8pl17 and -8el16 took the same time, the latter improving 0.17% over the former, which in turn improved 0.06% over -8l17.
Long rambling, but the question was: are you already working on the order selection code?

* Is a "-9" off the table? Could even be undocumented in the short help. I was thinking:
"-9 (experimental): same as -8 for standard resolutions. For higher resolutions, employs settings likely to change in future versions."
Title: Re: Retune of FLAC compression levels
Post by: Wombat on 2023-04-01 14:41:30
The 20 albums, 18.6 GB, 24-96/88.2, I once used for RG testing, encoded to -8p multithreaded.

1.4.2
4.52 min, 19,713,856,336 bytes

flac-retune
3.51 min, 19,747,528,116 bytes

The average is pretty much the same, but one album differs very much: 728MB against 772MB. The retune one is the big one.
Edit: I read one wrong, sorry...
Title: Re: Retune of FLAC compression levels
Post by: Porcus on 2023-04-02 00:06:01
I might have found something relevant (after confusing myself over the fact that the retune defaults to using subdivide_tukey(2)).
I ran both 1.4.2 and the retune with -mr6 -l12 -A tukey(5e-1), and the same but with -A subdivide_tukey(2) and (3).
Then I ran flac -a and grepped the analysis for order=12, order=11, etc.

Found: going up to subdivide_tukey(2) (or higher) makes the retune build "avoid" the highest LPC orders. Numbers are line counts for tukey(5e-1) resp. subdivide_tukey(2) resp. (3).
Order 12:
852336 resp. 857902 resp. 734070 for 1.4.2.
829521 resp. 539591 resp. 330633 for the retune - quite a reduction.
For orders 10, 11, 12 combined:
1693524 resp. 1705403 resp. 1690185 for 1.4.2.
1662904 resp. 1269961 resp. 1293984 for the retune - again, quite a reduction.

I could calculate averages, but I am anyway not sure what to make of this - are those top predictors really significant? (I mean, "some of them are", but are only the insignificant ones dropped?)
But there is something going on. Not unlikely that it is a good thing for speed...?
Title: Re: Retune of FLAC compression levels
Post by: A_Man_Eating_Duck on 2023-04-02 08:41:21
Just to make it clear: what I was suggesting with focusing on only 3 presets and mapping the presets up to 2, 5 or 8 was to make it easier for ktf and the other FLAC devs to refine the presets.

I've done a test comparing the tuned version against stock 1.4.2. This test is the average bitrate over 16919 tracks. At preset 8 the tuned version encoded about an hour quicker than stock; the difference got less noticeable as the presets lowered.

Preset   Tuned       Stock
0        1012 kbps   1037 kbps
1        1010 kbps   1013 kbps
2        1009 kbps   1011 kbps
3         960 kbps    986 kbps
4         957 kbps    959 kbps
5         955 kbps    957 kbps
6         957 kbps    955 kbps
7         955 kbps    953 kbps
8         953 kbps    952 kbps
I'm double-checking the results for tuned presets 4-7 to make sure I didn't make a mistake.
Title: Re: Retune of FLAC compression levels
Post by: cid42 on 2023-04-02 13:51:04
The main gripe I have with the current state of the presets is that -1 and -4 use -M, which only works in a streaming context, and adaptive mid-side doesn't seem very useful anyway. Is -M still relevant in some context? If not, I propose we rip it out of the codebase and make -M an alias for -m. Adaptive mid-side adds a chunk of complexity for questionable benefit; IMO the juice is not worth the squeeze.

If -M stays, maybe just remove it from the presets.
Title: Re: Retune of FLAC compression levels
Post by: ktf on 2023-04-02 13:57:43
and adaptive mid-side doesn't seem very useful anyway
How is it not very useful?

It gets 90% of the improvement of 'exhaustive' mid-side at 10% of the encoding time.

Adaptive mid-side adds a chunk of complexity for questionable benefit
I think it is less than 50 lines of extra code? On 32,000 lines of code, that isn't much?
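
(To spell out the difference: -m estimates the cost of all four channel assignments for every frame and keeps the cheapest, while -M carries an adaptive guess across frames. A sketch of the exhaustive selection, not the actual libFLAC code:)

    #include <stdint.h>

    /* The four channel assignments a FLAC frame can use. Exhaustive
     * mode (-m) estimates the bits for each and picks the minimum;
     * adaptive mode (-M) skips most of this work by adjusting its
     * choice from frame to frame. */
    enum stereo_mode { INDEPENDENT, LEFT_SIDE, RIGHT_SIDE, MID_SIDE, NUM_MODES };

    static enum stereo_mode pick_stereo_mode(const uint64_t bits[NUM_MODES])
    {
        enum stereo_mode best = INDEPENDENT;
        for (int m = 1; m < NUM_MODES; m++)
            if (bits[m] < bits[best])
                best = (enum stereo_mode)m;
        return best;
    }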
Title: Re: Retune of FLAC compression levels
Post by: Porcus on 2023-04-02 15:13:46
Confirming an observation from @A_Man_Eating_Duck :
New -6 compresses (slightly!) worse than new -5.
But it seems to take less time. Which again suggests there is something about the handling of subdivide_tukey(2) (possibly in relation to the new order selection?).

The impact of course depends on the material. On the classical music (among the 38 CDs in my signature), -6 is 72 parts per million better. Not much, but at least not worse.
For the heavier material, it is 0.06 percent worse, but that is entirely due to Laibach. Remove that, and it breaks even.
The "others" section makes -6 worse by 0.35 percent, which is quite a lot. It would have been 0.23 without Kraftwerk, which compresses nearly two percent worse.

I also FOR-looped -mb4096 -r7 -l <number> with tukey(5e-1) and with subdivide_tukey(2), to see if -l 12 was just a weirdo thing. It isn't. Orders looped: 4 to 16 (this is CDDA, so I --lax'ed it).
* tukey(5e-1): the retune produces smaller files at all of the orders. (Though for classical music, if I ran the orders all the way to 18/19/20, it would be reversed.)
* subdivide_tukey(2): the retune produces larger files at all of the orders.

It seems the retune does not get much compression benefit out of subdivide_tukey(2). The following list shows how much 1.4.2 benefits from going tukey(5e-1) --> subdivide_tukey(2), for orders 4 to 16:
-l 4:  0.18456%
-l 5:  0.18561%
-l 6:  0.18683%
-l 7:  0.18684%
-l 8:  0.18484%
-l 9:  0.18460%
-l 10: 0.18391%
-l 11: 0.18385%
-l 12: 0.18334%
-l 13: 0.18293%
-l 14: 0.18234%
-l 15: 0.18215%
-l 16: 0.18206%

The same numbers, but for retune tukey vs. retune subdivide_tukey:
-l 4:  0.00116%
-l 5:  0.00107%
-l 6:  0.00102%
-l 7:  0.00099%
-l 8:  0.00097%
-l 9:  0.00095%
-l 10: 0.00093%
-l 11: 0.00092%
-l 12: 0.00091%
-l 13: 0.00091%
-l 14: 0.00091%
-l 15: 0.00090%
-l 16: 0.00089%
Percentages are quoted with this many decimals so you can read the last one as "89 parts per million".

To say it does not get much compression benefit is not to say it is useless - it seems to save time.
Title: Re: Retune of FLAC compression levels
Post by: cid42 on 2023-04-02 15:27:49
and adaptive mid-side doesn't seem very useful anyway
How is it not very useful?

It gets 90% of the improvement of 'exhaustive' mid-side at 10% of the encoding time.

Adaptive mid-side adds a chunk of complexity for questionable benefit
I think it is less than 50 lines of extra code? On 32,000 lines of code, that isn't much?

It's only about 50 lines depending on how you count them, but more importantly 3 variables in FLAC__StreamEncoderPrivate, 1 variable in struct CompressionLevels, 1 variable in FLAC__StreamEncoderProtected, and API functions which would have to remain regardless at this point for compatibility. I'd argue that's a lot of state to maintain given how rarely it's used. FLAC encodes so quickly that anything but the quickest preset should probably use -m, IMO. If it's as good as you say, then I guess it's worth it just to improve -0; improving the compression ratio of the minimum recommended setting cheaply is a good thing.
Title: Re: Retune of FLAC compression levels
Post by: C.R.Helmrich on 2023-04-02 20:40:20
Are the preset details (i.e., individual synonymous options) listed on https://xiph.org/flac/documentation_tools_flac.html still the same in FLAC's current Git revision? If so, there seems to be quite some benefit in having that automatic adaptive mid-side selection. Especially with presets 1 vs. 2: the latter seems totally useless now, since preset 1 gives almost exactly the same compression ratio (A_Man_Eating_Duck confirmed this above), but much faster. Again, I suggest giving preset 2 a bit more compression efficiency, e.g. by increasing the prediction order to, say, "-l 4". I'm sure the decoding speed would then still scale with the presets.

And, cid42, do you honestly consider adding 50 lines of code and a handful of extra variables questionable in a project of this scale? Even if it's useful only for preset 1, I'd say it's worth it. Though I agree that, for all higher presets, "-m" is the one to use.

Chris

Title: Re: Retune of FLAC compression levels
Post by: cid42 on 2023-04-02 22:11:19
...
And, cid42, do you honestly consider adding 50 lines of code and a handful of extra variables questionable in a project of this scale? Even if it's useful only for preset 1, I'd say it's worth it. Though I agree that, for all higher presets, "-m" should be the one to use.

Chris
I do, assuming the functionality was pointless. Apparently it's not.
Title: Re: Retune of FLAC compression levels
Post by: A_Man_Eating_Duck on 2023-04-03 05:08:40
Just to make it clear: what I was suggesting with focusing on only 3 presets and mapping the presets up to 2, 5 or 8 was to make it easier for ktf and the other FLAC devs to refine the presets.

I've done a test comparing the tuned version against stock 1.4.2. This test is the average bitrate over 16919 tracks. At preset 8 the tuned version encoded about an hour quicker than stock; the difference got less noticeable as the presets lowered.

Preset   Tuned       Stock
0        1012 kbps   1037 kbps
1        1010 kbps   1013 kbps
2        1009 kbps   1011 kbps
3         960 kbps    986 kbps
4         957 kbps    959 kbps
5         955 kbps    957 kbps
6         957 kbps    955 kbps
7         955 kbps    953 kbps
8         953 kbps    952 kbps
Just confirming that these results are correct.
Title: Re: Retune of FLAC compression levels
Post by: Porcus on 2023-04-03 10:30:13
I ran encoding timings overnight. CDDA, the 38 CD images. Representative totals, which are a major WTF:
284 seconds for retune at -6. Yes!
360 seconds for retune at -5. The difference to -6 varies between genres from 17 to 23 percent; not much.
359 seconds for 1.4.2 at -5. (Edited a typo; it is 359.) Faster than the retune on classical (5%), slower on metal (4%).
424 seconds for 1.4.2 at -6. The time difference to -5 is ten percent on metal, twenty on classical and "other".

I should have included retune -4 ...

The above are measured as follows:
Run 1.4.2 -5 through the classical music 5 times. Ditch the first, for reasons explained below.
Same setting through the heavier music 5 times. Ditch the first.
Same setting through the "other" music 5 times. Ditch the first.
Next setting.
Repeat the above. And again. And again. So, sixteen runs.
For each exe/setting, take the median of the 16 classical timings, the median of the 16 heavier-music timings, etc.
Add median+median+median to get the times above.
Why I ditched the first? The following is "interesting" on this cooling-constrained computer:
* First run on heavier music - immediately after classical music - takes ~10 percent less time than the next four. Both builds. Like 90 or 93 seconds instead of 102/103.
* First run on the "others" section - immediately following the Sodom live album at bitrate > 1100 - takes ~10 percent more time, at the -5 settings. For 1.4.2 -6, the impact is halved. For retune -6 it is gone.
The impact of the first run on the classical music, after the "others", isn't much. Except for the retune -6 when it was immediately after a -5 run, which is lighter - then times are ten percent down.
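
(For anyone wanting to replicate the protocol, a minimal harness sketch; run_encoder() is a hypothetical stand-in for timing one full pass with the setting under test:)

    #include <stdlib.h>

    /* Hypothetical stand-in: runs one full encode pass and returns
     * its wall-clock time in seconds. */
    extern double run_encoder(void);

    static int cmp_double(const void *a, const void *b)
    {
        double d = *(const double *)a - *(const double *)b;
        return (d > 0) - (d < 0);
    }

    /* Discard a first (thermally contaminated) run, then take the
     * median of the remaining ones. */
    static double median_time(int runs)
    {
        double *t = malloc(runs * sizeof *t);
        run_encoder();              /* warm-up run, result ignored */
        for (int i = 0; i < runs; i++)
            t[i] = run_encoder();
        qsort(t, runs, sizeof *t, cmp_double);
        double med = runs % 2 ? t[runs / 2]
                              : 0.5 * (t[runs / 2 - 1] + t[runs / 2]);
        free(t);
        return med;
    }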
Title: Re: Retune of FLAC compression levels
Post by: ktf on 2023-04-03 14:06:47
I'll take all comments into consideration and will come up with a new proposal. I think I'll drop one preset (making -8 synonymous with -7) and try to tune to a curve as suggested here (https://hydrogenaud.io/index.php/topic,123889.msg1024291.html#msg1024291). My aim is to make sure every next preset compresses better (and slower) for pretty much all material.

I'll also take a good look at the order guessing code.
Title: Re: Retune of FLAC compression levels
Post by: Ziengaze on 2023-04-03 16:40:57
Would the retuned compression levels warrant a version bump?
Title: Re: Retune of FLAC compression levels
Post by: Porcus on 2023-04-04 09:48:51
Firing off even more loose thinking-alouds:

I think I'll drop one preset (making -8 synonymous with -7)
If you are to drop one up there, then alias together -6 and -7, I think. A user who actively selects -6 or -7 likely intends to encode faster than -8.

and try to tune to a curve as suggested here (https://hydrogenaud.io/index.php/topic,123889.msg1024291.html#msg1024291).
I'd argue that you merely want convexity on a graph where the first axis is (unlogged!) time. We are waiting seconds to save bytes, not log-seconds. (If size locally around some CPU time t is approximately C - (log t)^2, what then? Good or bad?)

My aim is to make sure every next preset will compress better (and slower) for pretty much all material.
Not saying that is wrong, but it defeats the idea in your first post. (Especially concerning -2?)
Like, as you said about TAK: it has a -p0 range, a -p1 range etc., and -p2 is faster than -p1m (but more complex and heavier to decode... in case anyone needs to care by now). Also WavPack, with its four complexity levels and -x0 to -x6 on top.

It has been argued that very few will use anything but -0/default/-8 (and above), but let's not think about the number of users; let's think about what they actually select a preset for - presuming those are good reasons. There could be two reasons to select -0 (speed and compatibility, and you shouldn't sacrifice the latter), but apart from that?
-2: is there any other reason to select -2 than to get the most compression out of the fixed predictors? -2 could very well be -1mer6/7/8 (there are a few selected signals where you would partition up significantly), even if that makes it slower than -3.
-3 and -4? Provided the default stays "-5" (whatever that is going to be an alias for) - is there any reason to select them other than for speed? (You could alias them together... and well, if -5 = default speeds up, then you could alias -4 into -5 for that matter.)
-6 and -7... encode faster than -8. I argued to keep -6 even if it "is bad" (https://hydrogenaud.io/index.php/topic,123025.msg1016398.html#msg1016398), but in the (012), (345), (678) pattern there isn't much use in having it at "-6", hence the suggestion to let the default be "new -4".
I said loose thinking-aloud, that means I don't have to stay consistent.
So therefore ...

Thinking aloud on a departure from the triplets:
-0 to -2: fixed predictors. In case that is needed; don't break anything.
-3: as light-decoding as possible with LPC, down to -l 5 or whatever. -4: same decoding complexity, more juice in the encoding.
-5: sane default. -6: decoding as -5, more juice in the encoding. (Bandcamp uses -6.)
-7 and -8: heavier.
Pro: closer to today, and you are already contemplating reducing the "heavier" presets to two.
Contra: -3 and -4 are quite useless then?
Contra-contra: are -3 and -4 doing any harm?

I'll also take a good look at the order guessing code.
It did smart things for "-6" purposes :-)
At the risk of opening a can of worms here: it is known that a lot of high-resolution material fools it big time.
Title: Re: Retune of FLAC compression levels
Post by: Wombat on 2023-04-04 15:07:21
-0 (speed and compatibility, and you shouldn't sacrifice the latter), but apart from that?
Over the years I've read of audiophiles using -0 for better sound, when space is not a concern. They like the better tagging compared to the even-better-sounding WAV, which has no tagging standard :)
Title: Re: Retune of FLAC compression levels
Post by: Porcus on 2023-04-04 22:20:07
Over the years I've read of audiophiles using -0 for better sound, when space is not a concern. They like the better tagging compared to the even-better-sounding WAV, which has no tagging standard :)
dBpoweramp has offered an "uncompressed" FLAC for that, *cough*, market segment. I didn't check how - or if I did, I have forgotten - but forcing verbatim subframes would probably do the job?
Except that Spoon had to explain to users why the .flac files were sometimes actually significantly smaller: decoded HDCD would be saved in 24-bit FLAC files with wasted bits. *points at your signature*
Title: Re: Retune of FLAC compression levels
Post by: Wombat on 2023-04-05 01:38:18
The syntax was like this for uncompressed: --disable-constant-subframes --disable-fixed-subframes --max-lpc-order=0 -b 4608
Not sure it got much attention. At that time I also read that real golden ears switched to AIFF.
Title: Re: Retune of FLAC compression levels
Post by: Porcus on 2023-04-07 09:32:25
A bit late to the party since @ktf already has a revision in the works, but:

1: preset speeds. I got to borrow the Ryzen-equipped laptop I also used in the 1.4.x tests. Not only is -6 faster than -5, but also faster than -4. And -7 is faster than -5. I hadn't even noticed it was that dramatic, so I went back to the Intel and ran it again, several runs for each setting. It turns out on both computers that the timing of the first run is "always" contaminated by what happened immediately before - including which musical genre was processed. Not by much, but noticeably.
Speeds relative to -5, Intel then Ryzen:
-3: 72 resp. 70
-4: 80 resp. 79
-5: 100. It should be added that this is like 15 percent faster than old -6, from which it takes its nominal settings (and it makes 0.01% bigger files).
-6: 77 resp. 74
-7: 88 resp. 85
... all CDDA.


2: -2 and variations: -2b<varying blocksizes> -r <varying>.  TL;DR: the proposal of -b4096 and -r6 looks good indeed.
(At this -r level, -b4096 is the best overall, and there is very little benefit in -r7.)

Write-up for CDDA. Note, these are variations on -2, not on -0...
* Blocksize 4096 produces the smallest files in total whenever the max partition order "-r" is 3 or higher. That is among all subset-compliant multiples of 512 and of 576.
For each of the three genre sections, it is at worst second-best at -r4 and above.
(Details: classical music improves marginally with -b4608, and 2304 does improve the "other" section. I don't know what makes 2304 beat its neighbours 2048 and 2560 at pretty much everything once we get to -r3 or above.)
* Increasing the -r of course improves things, but by how much?
It is the "other" genres section that benefits the most - and especially if you want to go for -b4096: the higher -r gets -b4096 within 0.012 percent of -b2304.
At 4096, moving the four steps up from -r3 to -r7 still improves the "others" section by 0.10, 0.05, 0.03 and finally 0.009 percent.  The totals: 0.05, 0.02, 0.01 and... 0.003.
There is quite some diversity in the "other" section, so what makes up the 2304 vs 4096 difference?
At -r6, the jazz benefits from 4096, the techno from 2304, and the pop/rock diverges.  The albums with the largest impacts are 0.18 percent either way (Miles Davis (mono) and then Kraftwerk).  Going to -r7 does virtually nothing to these impacts.  -r5 to -r6 helps a little.
* Timings, only for a few blocksizes: for -r5 and above, 4096 is clearly better than the smaller blocksizes. At -r6 the difference to 2048/2304 is above 5%.

High definition was tested afterwards - not that it is the most important thing really, but "for sanity". At -r3 or -r4 and up, 4096 and 4608 are the best. Checked 8192 afterwards; it makes bigger files, as partitions are twice as large - you could of course consider -b8192 -r7 to compensate for that, but I got only 0.04 percent out of it. Timings matter surprisingly little going from -r4 to -r6 (-r7 for -b8192), and vary little among the reasonable blocksizes.
Title: Re: Retune of FLAC compression levels
Post by: bennetng on 2023-04-07 12:03:26
The syntax was like this for uncompressed: --disable-constant-subframes --disable-fixed-subframes --max-lpc-order=0 -b 4608
Not sure it got much attention. At that time I also read that real golden ears switched to AIFF.
If the goal is to make the flac file as big as possible then a very small blocksize like 256 should be ideal.

I tried foo_benchmark in foobar2000 1.6.16 with a ~4GB AIFF vs. WAV test using a RAM drive, and got ~30000x decoding speed for WAV but only ~13100x for AIFF. That may have something to do with endianness, but I would like to see someone with a big-endian processor do such a test.
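
(The endianness suspicion is plausible: AIFF carries big-endian PCM, so a little-endian x86 player must byte-swap every sample before playback, along these lines:)

    #include <stdint.h>
    #include <stddef.h>

    /* AIFF stores big-endian PCM; on a little-endian host every
     * 16-bit sample must be swapped, which is pure extra work
     * compared with playing a WAV of the same data. */
    static void swap16_buffer(uint16_t *samples, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            samples[i] = (uint16_t)((samples[i] << 8) | (samples[i] >> 8));
    }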
Title: Re: Retune of FLAC compression levels
Post by: Porcus on 2023-04-07 14:26:14
The syntax was like this for uncompressed: --disable-constant-subframes --disable-fixed-subframes --max-lpc-order=0 -b 4608
Not sure it got much attention. At the time I also read that real golden ears switched to aif.
If the goal is to make the flac file as big as possible then a very small blocksize like 256 should be ideal.

The idea wasn't to maximize overhead, but to store "uncompressed" for those anti-vaxxer audiophools who think compression ruins sound. Also, they can distinguish FAT from NTFS or Seagate from WD by listening, and maybe, if they try carefully enough, A-law from LPCM, but only if one is WAVE and the other is AIF(C) :))
On a more serious note, this was introduced when WAVE tagging was largely unsupported. And yes, AIFF was a thing in that market segment. I think it was B&W who started a limited audio "store" with selected recordings, and offered AIFF for ... demand reasons, I guess.

Title: Re: Retune of FLAC compression levels
Post by: darkalex on 2023-04-07 15:49:16
The syntax was like this for uncompressed: --disable-constant-subframes --disable-fixed-subframes --max-lpc-order=0 -b 4608
Not sure it got much attention. At the time I also read that real golden ears switched to aif.
If the goal is to make the flac file as big as possible then a very small blocksize like 256 should be ideal.

The idea wasn't to maximize overhead, but to store "uncompressed" for those anti-vaxxer audiophools who think compression ruins sound. Also, they can distinguish FAT from NTFS or Seagate from WD by listening, and maybe, if they try carefully enough, A-law from LPCM, but only if one is WAVE and the other is AIF(C) :))
On a more serious note, this was introduced when WAVE tagging was largely unsupported. And yes, AIFF was a thing in that market segment. I think it was B&W who started a limited audio "store" with selected recordings, and offered AIFF for ... demand reasons, I guess.



Here's the thing: the harder the audio is compressed losslessly, the harder it becomes for the device to decode it.

Audiophiles usually do not use a computer but a media server or some other receiver which decodes their files, and these devices aren't a tenth as capable as our computers, due to which, sometimes, they introduce stupid artifacts into the signal as a result of overload - be it jitter or anything else - cuz the device has to have sufficient RAM to store the file and a good processor to decode it in real time.

That's the reason why audiophiles say that uncompressed lossless sounds better: it makes their setup act better due to the lighter load on resources... That is also why CDs with SHM sound better to them, because the transport reads them better in real time, rather than doing multiple scans to extract the info as with other, less transparent disc material.

To other people: don't start a debate here pls, I am not gonna pull needles out of haystacks to prove some pretty obvious things that happen due to physics. After decoding everything is the same, because you do it on a PC - as in, decode the audio completely to uncompressed and then play it - rather than using a real-time streaming device that has to manage decoding an extremely compressed signal in real time, especially with its ARM processor and 200 MB RAM or whatever, maybe less... so buffer length and stuff, everything factors into the output they're getting. That's why those media servers or receivers work better with uncompressed material and why audiophiles like them.

I used to be of the opinion as well that these guys are nuts cuz lossless is lossless... but when I factored the hardware and their setups into the equation, things started making sense... On a computer everything is decoded completely and then played, and even if you try real time, the processors and DDR5 RAM are extremely fast and way over the top to be bothered by such things... that's why it sounds the same to us... cuz it is.
Title: Re: Retune of FLAC compression levels
Post by: darkalex on 2023-04-07 15:56:37
That Seagate/WD difference as well... WD drives are faster at 7200rpm compared to Seagate at 5400 (their common versions, Blue and Barracuda), so the same real-time resources thing comes back again... the faster drive would obviously feed the decoder faster and more consistently, and if the data rates being output by the drive are in sync with the capabilities of their server/decoder, then yes, it would work better in that system.

For listening, it all boils down to real-time differences: how fast and how well the hardware can cope with the material being supplied. The smoother that chain is, the better the result. Of course that also requires the existence of a point-of-no-return concept, i.e. once you have sufficient bandwidth and processing power to handle whatever the material needs, optimising further is a complete waste of resources - and yet if you're still hearing differences, then yes, you just became an unfortunate victim of audiofoolery.
Title: Re: Retune of FLAC compression levels
Post by: Bogozo on 2023-04-07 18:20:41
Don't start a debate here pls
Don't write nonsense pls. Even 8 MHz is enough to decode FLAC in real time - https://hydrogenaud.io/index.php/topic,82125.0.html
As for audiophiles, they are just mentally ill and need medical attention.

WD drives are faster at 7200rpm compared to Seagate at 5400
Wow! Captain Obvious is right here!
Title: Re: Retune of FLAC compression levels
Post by: ktf on 2023-04-07 18:44:32
Rather than using a real-time streaming device that has to manage decoding an extremely compressed signal in real time, especially with its ARM processor and 200 MB RAM or whatever, maybe less... so buffer length and stuff, everything factors into the output they're getting. That's why those media servers or receivers work better with uncompressed material and why audiophiles like them.
FLAC can be decoded without much issue on the ESP8266, which has 112 kilobytes of RAM.

See for example this not specifically optimized port of libFLAC: https://github.com/earlephilhower/ESP8266Audio/blob/master/README.md It says:
Quote
On the order of 30KB heap and minimal stack required as-is.

Considering the resource load: having 'uncompressed' FLAC will indeed lower the (already very low) pressure on CPU, but not on memory and disk/network usage.
Title: Re: Retune of FLAC compression levels
Post by: Replica9000 on 2023-04-07 20:10:13
That Seagate/WD difference as well... WD drives are faster at 7200rpm compared to Seagate at 5400 (their common versions, Blue and Barracuda), so the same real-time resources thing comes back again... the faster drive would obviously feed the decoder faster and more consistently, and if the data rates being output by the drive are in sync with the capabilities of their server/decoder, then yes, it would work better in that system.

RPMs alone aren't really the best way to judge HDD performance.  A 5400rpm drive with higher density platters and more heads could be faster than a 7200rpm drive. 
Title: Re: Retune of FLAC compression levels
Post by: darkalex on 2023-04-07 20:21:45
That Seagate/WD difference as well... WD drives are faster at 7200rpm compared to Seagate at 5400 (their common versions, Blue and Barracuda), so the same real-time resources thing comes back again... the faster drive would obviously feed the decoder faster and more consistently, and if the data rates being output by the drive are in sync with the capabilities of their server/decoder, then yes, it would work better in that system.

RPMs alone aren't really the best way to judge HDD performance.  A 5400rpm drive with higher density platters and more heads could be faster than a 7200rpm drive. 

That was off the top of my head; there are many more factors, like buffer storage, the drivers, the interface in use, etc.

Not the point I was making, though.
Title: Re: Retune of FLAC compression levels
Post by: darkalex on 2023-04-07 20:28:59
Rather than using a real-time streaming device that has to manage decoding an extremely compressed signal in real time, especially with its ARM processor and 200 MB RAM or whatever, maybe less... so buffer length and stuff, everything factors into the output they're getting. That's why those media servers or receivers work better with uncompressed material and why audiophiles like them.
FLAC can be decoded without much issue on the ESP8266, which has 112 kilobytes of RAM.

See for example this not specifically optimized port of libFLAC: https://github.com/earlephilhower/ESP8266Audio/blob/master/README.md It says:
Quote
On the order of 30KB heap and minimal stack required as-is.

Considering the resource load: having 'uncompressed' FLAC will indeed lower the (already very low) pressure on CPU, but not on memory and disk/network usage.

The resource load is what I am talking about, not the storage or delivery method. The amount of processing needed to decode a file into a PCM signal would be significantly lower for an uncompressed flac than for a compressed one. I agree that the ESP can decode it - I am not saying it can't - I am talking about the real-time performance of low-power systems.

It's the same argument as with older CD transports: the more properly built they were, the more stably they ran, leading to less real-time jitter and so on. Ripping a CD isn't the same, cuz the laser goes over the same track multiple times for error correction and stuff to get an accurate rip; all of that is out of the picture in real time.

Same thing with flac: you can decode it completely on a PC, which is why you're gonna get a 1:1 lossless signal, but when you try to do it in real time, performance would matter, especially when we're trying to achieve 0 ms or something in latency.

This isn't a debate or something; it's just a logical perspective that actually makes sense about why these audiophiles, especially the senior ones with older equipment, hear differences.
Title: Re: Retune of FLAC compression levels
Post by: Apesbrain on 2023-04-07 21:21:01
This isn't a debate or something; it's just a logical perspective that actually makes sense about why these audiophiles, especially the senior ones with older equipment, hear differences.
It has never been demonstrated that these "differences" are real.  What people "hear" is affected by their biases.  I'd say that "senior audiophiles" are particularly susceptible to these biases, but that would be a cheap shot as everyone is affected.  How "old" their playback equipment is (or how expensive) does not matter; nor does the rotational speed of their hard drives.

"Listening is 90 percent mental. The other half is how much your gear costs."

If you claim that FLAC sounds different at "compression 0", then put forth experimental results that prove your hypothesis.  Using foobar2000's "ABX Comparator" component, a test of this sort is trivial to design. Otherwise, you are in violation of this site's Terms of Service:
https://hydrogenaud.io/index.php/topic,3974.html#post_tos8

Meanwhile, read this listening test result where a majority of senior "golden ears" concluded MP3 sounds better than FLAC:
http://archimago.blogspot.com/2013/02/high-bitrate-mp3-internet-blind-test_3422.html
Title: Re: Retune of FLAC compression levels
Post by: bennetng on 2023-04-08 12:12:58
On a more serious note, this was introduced when WAVE tagging was a largely unsupported. And yes AIFF was a thing in that market segment. I think it was B&W who started a limited audio "store" with selected recordings, and offered AIFF for ... demand reasons I guess.
Of course senior audiophiles use 80-bit aif too :D

Back to the topic: while I can't explain the -b2304 phenomenon, apart from genres I also pay some attention to the production practices behind the audio files. For example, this file:

https://hydrogenaud.io/index.php/topic,123655.msg1022192.html#msg1022192

After adding some reverb but keeping the 16/96 dual-mono format, the optimal blocksize is not too different from that of many other 96k contents. For example, many audiophile recordings are produced without extensive use of close miking. Of course there is also synthesized music, like movie soundtracks and new-age music, with a heavy emphasis on perceived ambience.

PS: I trimmed the file to 15 seconds otherwise I got the "maximum file size allowed is 0KB" error.
Title: Re: Retune of FLAC compression levels
Post by: Porcus on 2023-04-08 13:02:55
Of course senior audiophiles use 80-bit aif too :D
Is that a thing? https://en.wikipedia.org/wiki/Audio_Interchange_File_Format#Common_compression_types
(Sample rate is stored as extended precision, but is there any consumer playback chain that handles non-integer rates without resampling?)
Title: Re: Retune of FLAC compression levels
Post by: bennetng on 2023-04-08 13:23:00
Of course senior audiophiles use 80-bit aif too :D
Is that a thing? https://en.wikipedia.org/wiki/Audio_Interchange_File_Format#Common_compression_types
(Sample rate is stored as extended precision, but is there any consumer playback chain that handles non-integer rates without resampling?)
Of course it is a joke, hence the smiley. What I really think about aif is that many earlier DAWs were Mac-based and the accompanying hardware was not cheap. Not only the recording interfaces, but also the processing hardware in addition to the computer itself.
Title: Re: Retune of FLAC compression levels
Post by: Porcus on 2023-04-09 14:24:58
Concerning "-0" candidates: when will I/O constraints kick in, really?

I made three 4 GB FLAC files to recompress. Copying from PowerShell takes something like 40 seconds (SSD to the same SSD, NTFS, Win11). Using the retune build to recompress (again to the same SSD):
-0 --no-mid-side (which has an implicit -r3) takes around 170 seconds.
-0 -r7 takes around 170 seconds; maybe a percent of difference.
-1 takes 190 seconds, so brute-forcing the joint-stereo/dual-mono choice is enough to increase the time.
And at -2, -r7 is enough to make a difference: -2r7 takes 14% more time than -2r3.

I guess flac.exe must use more time writing the files in segments (of, I think, 10 MB?) than built-in copy does, so ... ?
Title: Re: Retune of FLAC compression levels
Post by: Kartoffelbrei on 2023-04-10 14:37:30
It is really irritating to see people discuss the possibility of audio decoding being a relevant computational bottleneck in any way, while I can render 3D images in real time without stutter on a machine that is by now 12 years old.
Title: Re: Retune of FLAC compression levels
Post by: Porcus on 2023-04-10 15:58:01
Sure Rockbox users would be happy to carry your old or new computer around for portable audio playback ... ?

(I'll tell you what is really irritating about this thread: over and over catching myself at
* not being able to start at "0" when counting FLAC presets in a diagram
* reading "colours swapped", but still glancing at the wrong diagram
* forgetting for the Nth time about that log scale
... and realizing I came either to wrong conclusions or to being surprised at what should be "clear" from the initial post ...
That's annoying. ktf must have quite some patience ignoring all the pebkac.)
Title: Re: Retune of FLAC compression levels
Post by: darkalex on 2023-04-11 09:24:51
I really don't understand why people here start attacking a logical perspective with borderline mockery or hurtful language. What I have stated is clearly not my own experience or perspective, but a logical problem that can happen in certain circumstances, yet somehow y'all are reducing the examples to sheer innuendo and generating mockery from that.

Even the dev didn't respond in such a trash way; I wonder why y'all are doing that. It's a fact that on computers the whole processing happens in RAM and happens so quickly that you cannot get any differences, but when you do that on some cheap ARM-based processor running Android 2.0, things become different... I don't know how to put my point further.

Add a trash hard drive to that Android thingy, and boy are you gonna have trouble listening... that's not innuendo but a legitimate problem that actually exists. I wasn't taking the side of audiophiles, but rather saying that some of them could actually be in a situation where the hardware is better suited to reading uncompressed files easily and smoothly.

A simple demonstration of this would be to prepare any digital file and fool around with the block size, then stream it. Both files would be lossless, or have the same content if they are lossy, but the one with smaller blocks would stream better and hence sound better. However, upon a full decode, they will null.

It's not rocket science. Why do you think some codecs handle this better than others? Streaming is literally the same thing as playing from a CD or a real-time hard disk if the internet speed is way higher than the material's rate.

The file optimised for streaming would obviously work better than the one with larger blocks. The same logic goes for interlacing images or videos: post some non-interlaced images on a website and load them on a slow computer, and you'll see nothing until the full image loads, or only the parts that have completely loaded. Replace it with an interlaced image and you'll start seeing the image partially while the rest of it is loading, and in the end you'll get the same image as the non-interlaced one.

Of course these particular techniques aren't relevant to audio, but they sure are relevant to content delivery and bandwidth, and therefore help prove my point perfectly, and logically.

I have personally experienced this with in-car systems and old mobile phones as well. Higher-quality or bigger files struggle to play; they can be choppy or laggy or 100 other things you name it, all of which can be caused by stuttered loading of the file.

It's not innuendo or a one-in-a-billion type problem. It applies to the real world and happens frequently. TVs as well, the Android/smart ones: my Samsung from a few years ago cries out whenever I try playing some high-bandwidth media on it or a huge VBR AAC; it works best with CBR files of moderate length, like songs.

That's because the TV wasn't prepared for this; it was made for people to casually play their mp3s or their handycam videos, not to be used as a proper audio system. But people still do, and hence pay the price with reduced fidelity... which gets blamed on the format, cuz no one will accept that their cheap equipment could be at fault.

See it from a pragmatic perspective, people, rather than an all-science laboratory perspective... because that works only in labs, not in the real world. People do crazy things and form stupid conclusions about various things because they didn't know how to use them properly...

Tell me if any of the things I mentioned are textbook-type examples that can't happen in the real world... or point out any innuendo in what I wrote.

I still gotta have my breakfast, so pardon me if there are some grammatical errors or repetitions. No intent to offend anyone.
Title: Re: Retune of FLAC compression levels
Post by: ktf on 2023-04-11 11:02:09
I guess flac.exe must use more time writing the files in segments (of, I think, 10 MB?) than built-in copy does, so ... ?
I really don't understand how you got to that conclusion. It is not that FLAC preset 0 does nothing: it still calculates an MD5 sum, and the bitwriter code also takes quite a bit of time. That could very well be the best explanation for the difference between simply copying and recompressing.
Title: Re: Retune of FLAC compression levels
Post by: Apesbrain on 2023-04-11 13:55:30
I wonder why y'all are doing that.
You stepped over the line when you started talking about "senior audiophiles hearing differences".  If the file decodes, it will sound the same regardless of FLAC compression level.  That kind of woo is typical of other audio forums, but prohibited here.

Long ago, your point about CPU power being a bottleneck might have been valid ("Android 2.0" has been gone for 13 years), but modern ARM and x86 chips have no issue with FLAC at any compression level.
Title: Re: Retune of FLAC compression levels
Post by: Wombat on 2023-04-11 15:25:43
A simple demonstration of this would be to prepare any digital file and fool around with the block size, then stream it. Both files would be lossless, or have the same content if they are lossy, but the one with smaller blocks would stream better and hence sound better. However, upon a full decode, they will null.
Exactly this sounds like the typical senior audiophile drivel.
Title: Re: Retune of FLAC compression levels
Post by: Porcus on 2023-04-11 17:48:27
I guess flac.exe must use more time writing the files in segments (of, I think, 10 MB?) than built-in copy does, so ... ?
I really don't understand how you got to that conclusion.
I might be completely wrong of course, but: when an application passes 10 MiB to the write queue 400 times over, is that really going to be handled in the same overall time as when the operating system calls (the write part of) a copy command?

It doesn't look that way. On this SSD I have three .flac files of around 4 GB each.
They were initially encoded with fixed predictors; all of the following was done with the retune build, with some "sync d:" commands in between for good measure:
151 seconds: Measure-Command {flac -d -ss --force-rf64-format Image*.flac}. Then, after deleting the flac files and a sync:
147 seconds: Measure-Command {flac -0r3 --no-mid-side -ss Image*.rf64}
143 seconds: Measure-Command {flac -0fr7 --no-mid-side -ss Image*.rf64} this with -f rather than deleting the .flac files
122 seconds: Measure-Command {flac -0fr7 -ss Image*.rf64} (-M is good here!)
186 seconds for flac to flac: Measure-Command {flac -0fr7 -ss Image*.flac} <------ r7
187 seconds for flac to flac: Measure-Command {flac -0fr3 -ss Image*.flac} <------ r3
109 seconds: Measure-Command {flac -t  Image*.flac}
107 seconds: Measure-Command {flac -ss -t Image*.flac} this time using -ss just for sanity check
Measure-Command {copy Image*.flac copies}, which copies the flac files to a "copies" directory on the same SSD, takes ... <checks again> a not too consistent duration, but "forty" seconds isn't the worst estimate.
Measure-Command {copy Image*.RF64 copies} takes 55 to 57 seconds.

So ... what to make of this, when -r7 doesn't hurt compression time? (It does when the baseline is -2 ...)
Hunch: minor bottleneck at every write?
Other suggestions?

Are you actually reaching the point where nobody who will use the encoded data (rather than discarding it and recording the timing figure) will need the extra speed? Encoding speed, I mean?
If so, that is ... awesome. But sure employ an ancient CPU with a fast SSD to crack that hypothesis  ;)
Title: Re: Retune of FLAC compression levels
Post by: ktf on 2023-04-11 18:14:57
So ... what to make of this, when -r7 doesn't hurt compression time? (It does when the baseline is -2 ...)
That MD5 summing and bitreading and/or bitwriting takes 70 seconds? flac -t takes about 70 seconds longer than copying, and re-encoding (which is decoding and then encoding again) takes about 70 seconds longer than flac -t. Makes perfect sense to me?
Title: Re: Retune of FLAC compression levels
Post by: Replica9000 on 2023-04-11 19:51:50
@darkalex

I don't think the type of person who typically uses lossless for audio quality is using ancient or inadequate hardware.  A media server or receiver may be a tenth as powerful as a computer, but its hardware and OS/firmware are designed specifically for the task.

My 23 year old K6-III with 256M RAM plays my FLAC files compressed with "-m -b 4096 -l 12 -r 7 -p -A subdivide_tukey(13/10e-1);welch;hann;flattop" just fine with a USB 1.1 flash drive.  I also have a 13 year old Android 2.1 device that doesn't break a sweat playing these files.

I think the type of people who think uncompressed FLACs sound better have decent hardware, and they're probably also the people buying "audiophile grade" routers and RAM sticks.
Title: Re: Retune of FLAC compression levels
Post by: bennetng on 2023-04-11 20:53:33
The last time I experienced stuttering/dropouts (when I believed I had correctly configured my system) was on a Pentium II 400 with 192 MB RAM, playing APE files with "Extra High" compression mode in foobar2000, with SSRC turned on.

And yes, the OS/firmware does matter. Just an example - and read the reply right below it:
https://www.audiosciencereview.com/forum/index.php?threads/about-flac.14760/post-460345

Speaking of Samsung, the old smart TV in my home has a habit of suddenly setting the volume to max when turned on. This happens every 2 or 3 months or so. No real damage has happened, as the TV is not connected to any high-power audio system - it just uses the built-in speakers - but the annoyance is real, especially when someone turns on the TV in the middle of the night.
Title: Re: Retune of FLAC compression levels
Post by: Porcus on 2023-04-11 21:10:10
So ... what to make of this, when -r7 doesn't hurt compression time? (It does when the baseline is -2 ...)
That MD5 summing and bitreading and/or bitwriting takes 70 seconds? flac -t takes about 70 seconds longer than copying, and re-encoding (which is decoding and then encoding again) takes about 70 seconds longer than flac -t. Makes perfect sense to me?
Not depending on -r - is that really reasonable? If so, give -0 the -r6 treatment, because why not?
But as --no-md5-sum does make a difference, I just tested that: we are not "hitting hard" any constraint, and faster bitwriting is indeed possible. (Oops, pardon my ignorance: "bitwriting" is the process of writing the bit stream (in the format's order of bits and bytes, including endianness) to RAM before sending it off to file?)
But OTOH, from numbers @cid42 posted at https://github.com/xiph/flac/issues/585 it seems that buffering does matter.


Thinking aloud (and again at the risk of invoking the last line of Reply #76), here is a "possible" but likely "not uncontroversial" retuning principle: retune the "even-numbered" presets and simply drop (= alias together) the odd ones.
-0 for "as fast as possible" (within suitable constraints, including: with MD5).
-1: aliased together with -0. Admittedly this makes less sense if -0 must stay dual mono for compatibility reasons.
-2: make the smallest files out of fixed predictors (within subset, and likely sticking to -b4096, unless a proposal of -b8192 for, say, rates > 144 kHz is also adopted elsewhere).
-4, with -3 and -5 aliased to it: default. Fast. -l limited to 8, to be sure not to break anything.
Because -4 and -5 will now be the same, traditionalists who want the default to be -5 are still satisfied.
-6, with -7 aliased to it: reasonably fast, but allowing -l 12 (so not selected by default).
-8 is -8: can be tuned, but is for those who think they want more than -7.
Now you can consider the following two heresies:
--FAST (capitalized) for -0 --no-md5-sum (I read from the IETF draft that you agree to accept "0" if the MD5 sum is not known, and it leaves room for the following interpretation: it is perfectly OK not to calculate it, for then you don't know it).
--BEST (capitalized) for what I was thinking aloud of as a "-9", to avoid the nasty surprises. Say, if for a high-resolution file the algorithm proposes a prediction order of 18, then this setting could try 16 and 20 without going full -e on it.

Completely different from {0 1 2} {3 4 5} {6 7 8}, but ... possibly worth thinking over before rejecting.
Title: Re: Retune of FLAC compression levels
Post by: Porcus on 2023-04-11 21:46:58
I really don't understand why people here start attacking a logical perspective with borderline mockery or hurtful language
Maybe you have a point, but your perspective is not "logical", and you shouldn't confuse it with logic. That jittery problem you hint at is resolved by what we in everyday speech call "buffering".
Digital dropouts easily get so ugly that you don't need to be, ahem, a "senior audiophile" to catch them. Indeed, glossing over the network issues you describe - making for audibly graceful failure - is more of an engineering art.

A simple demonstration of this would be to prepare any digital file and fool around with the block size, then stream it. Both files would be lossless, or have the same content if they are lossy, but the one with smaller blocks would stream better and hence sound better. However, upon a full decode, they will null.
Here you presume graceful failure - that is, sound that is not perfect, but so good you need to concentrate to hear the imperfections.
Sure, you can interpolate over very short holes that way, but lossless codecs easily have block sizes of 0.09 seconds and up (4096 samples at 44.1 kHz is about 0.093 seconds).

Ironically, ffplay still chokes on small FLAC blocks (under 128 samples) (https://hydrogenaud.io/index.php/topic,121478.msg1019912.html#msg1019912). It isn't because 127 samples in a block is inherently bad (although it isn't as good as you think), it is because of ... well, bad code. Sure, Slim Devices had trouble with 4608. Why trouble with 4608 when 4096 plays fine? Not because "4096" is an upper bound on fidelity, but because of ... well, bad code.


It's not innuendo or a one-in-a-billion type problem. It applies to the real world and happens frequently. TVs as well, the Android/smart ones: my Samsung from a few years ago cries out whenever I try playing some high-bandwidth media on it or a huge VBR AAC; it works best with CBR files of moderate length, like songs.
Bad decoders refusing VBR is an old and known problem, but don't confuse that with "golden ears" falling for placebo.
And don't dress that false equivalence up as anything "logical".

Old Android? Android was released in 2008. Have a look at https://hydrogenaud.io/index.php/topic,123374 .



PS @Replica9000 :
my FLAC files compressed with "-m -b 4096 -l 12 -r 7 -p -A subdivide_tukey(13/10e-1);welch;hann;flattop"
That doesn't make for much of an extra decoding load. Playback-wise it is like -8r7. "-m" will do joint-stereo optimization (but even -2 does that), -b4096 is the default, and -l 12 is what you get with -7 or -8. Then -r7 means it might change an encoding/decoding parameter every 32 samples (compared to every 64 samples for standard -7 or -8), and "might" means it only does so in the frames where it thinks it is worth it (it means you have to store twice as many of that parameter, so ...). Everything from "-p" on will only brute-force check several predictor vectors (including how many bits of precision they have) and shouldn't matter for the decoding workload.
Several aspects of FLAC work that way: instead of nesting up successive rounds of number-crunching that have to be "undone one by one", it tries several light ones, picks the one that gives the smaller file, and throws away the rest. If the encoder tries a thousand different ways of encoding a block, that takes more time - but the decoder doesn't know it tried so many, and it doesn't care. You made 9 worse attempts? Oh, you made 999 worse attempts? The decoder doesn't know, and the decoder doesn't care.
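This is easy to check for yourself - a sketch, with hypothetical file names; flac -t decodes and verifies without writing any output:
Code: [Select]
# encode the same input once with minimal and once with heavy effort ...
flac -0 -f -o fast.flac album.wav
flac -8 -p -A "subdivide_tukey(5)" -f -o small.flac album.wav
# ... then compare decode times: the extra search work at encode time
# leaves nothing behind that the decoder has to pay for
time flac -t -s fast.flac
time flac -t -s small.flac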
Title: Re: Retune of FLAC compression levels
Post by: arkhh on 2023-04-12 09:59:20
Why not leave the pull request open? Is the idea abandoned, or do you want another retune of the presets?
https://github.com/xiph/flac/pull/576
Title: Re: Retune of FLAC compression levels
Post by: ktf on 2023-04-12 13:20:41
See here: https://hydrogenaud.io/index.php/topic,123889.msg1024557.html#msg1024557

I'll take all comments into consideration and will come up with a new proposal.
Title: Re: Retune of FLAC compression levels
Post by: Porcus on 2023-04-12 14:07:37
I suspect this is what happens when git-ignorants (like myself) see a PR "closed" with no comment as to whether it means "closing this particular one, which won't happen, in order to propose something new" or "abandoning the whole idea".

Two words then would have saved you more words now ;-)
Title: Re: Retune of FLAC compression levels
Post by: darkalex on 2023-04-12 19:29:01
I am an AMD Ryzen user and have noticed that certain software, when compiled with optimizations for AMD, shows huge performance jumps on my PC.

I came across this today:

https://www.amd.com/en/developer/aocc.html which I believe is the official Ryzen optimised compiler by AMD.

I came across this CLANG 16 binary by NetRanger, and it was a night-and-day difference in performance compared to the Rarewares builds I had been using to this day: 24x speed with the Rarewares x64 build, whilst 36x with the CLANG 16 x64 build, on 24-bit 96 kHz material. Blew me away.

Hence my question about a FLAC binary that's optimized for AMD Ryzen. It would be a huge help.
Title: Re: Retune of FLAC compression levels
Post by: Porcus on 2023-04-12 20:09:59
I do agree that this should - at least ideally - be part of this discussion. We (or at least I) have been doing some head-scratching over the performance of flac level "-3" in ktf's lossless test - apparently that is an AMD thing. (But official compile ran on AMD. Differences across CPUs ... but I guess with the same build - posted at https://hydrogenaud.io/index.php/topic,122508.msg1024512.html#msg1024512 )


Anyway:
At least before deciding on re-assigning the presets, performance tests across platforms should be welcomed ... so, can you stay tuned for when the next version arrives? (Are you able to compile?)

You could of course try a few builds already, but ktf is tweaking one behind-the-scenes algorithm that does affect speed, namely which prediction order (i.e. "how long history") the encoder should use. Anyway, there were quite a few builds posted in https://hydrogenaud.io/index.php/topic,123234.0.html
Various posts where users tested them against each other on different CPUs, including here: https://hydrogenaud.io/index.php/topic,123025.0.html
Title: Re: Retune of FLAC compression levels
Post by: bennetng on 2023-04-12 20:26:24
OK, I tried some flac v1.4.2 encodes on an old Nokia Lumia 520 smartphone with LineageOS and foobar2000 mobile. The three trials are all non-buffered, reading from storage, and arranged in different order just in case test order matters. Also, the phone was rebooted after each set of trials to avoid caching.

X

Tested album, 2 discs combined into a single image:
https://www.discogs.com/release/2836867-Andrew-Lloyd-Webber-The-Phantom-Of-The-Opera
Code: [Select]
trial 1
0b128    92.283x  
0b4096  142.455x
0       140.347x
0r7     136.920x
0b2304  145.963x

trial 2
0b2304  118.903x
0r7     132.194x
0       127.061x
0b4096  129.846x
0b128    92.663x

trial 3
0r7     125.965x
0b2304  140.878x
0b128    92.064x
0b4096  137.545x
0       138.527x

file sizes:
0b128   618463524
0       582240567
0r7     581839033
0b2304  580637454
0b4096  580333909

I also noticed that on the desktop both my old i3-4160 and the newer i3-12100 decode AAC much faster than Opus at a similar bitrate; I would like to see whether that is also the case with this smartphone.

Code: [Select]
trial 1
AAC-CBR256    84.851x
AAC-VBR249    86.096x
Opus-CBR257   30.964x
Opus-VBR258   30.909x

trial 2
AAC-VBR249    85.572x
Opus-VBR258   30.650x
AAC-CBR256    86.318x
Opus-CBR257   30.856x

For those who think the bitrates are too high or too low, or who are thinking about Opus' mandatory 48kHz treatment: I think they can do their own benchmark with what they want to test.

I have some other benchmarks on this smartphone as well, but with full RAM buffering and much shorter duration, so different test conditions:
https://hydrogenaud.io/index.php/topic,123025.msg1016437.html#msg1016437
Title: Re: Retune of FLAC compression levels
Post by: darkalex on 2023-04-12 20:48:05
I do agree that this should - at least ideally - be part of this discussion. We (or at least I) have been doing some head-scratching over the performance of flac level "-3" in ktf's lossless test - apparently that is an AMD thing. (But official compile ran on AMD. Differences across CPUs ... but I guess with the same build - posted at https://hydrogenaud.io/index.php/topic,122508.msg1024512.html#msg1024512 )


Anyway:
At least before deciding on re-assigning the presets, performance tests across platforms should be welcomed ... so, can you stay tuned for when the next version arrives? (Are you able to compile?)

You could of course try a few builds already, but ktf is tweaking one behind-the-scenes algorithm that does affect speed, namely which prediction order (i.e. "how long history") the encoder should use. Anyway, there were quite a few builds posted in https://hydrogenaud.io/index.php/topic,123234.0.html
Various posts where users tested them against each other on different CPUs, including here: https://hydrogenaud.io/index.php/topic,123025.0.html


That's the thing I'm talking about: the guys were talking about AOCC (that compiler from AMD) not supporting Zen 4 yet; however, with a newer update released later last year, AOCC now supports Zen 4.

Hence, a developer with these resources can indeed produce a build that is optimised for AMD Ryzen.

Thankfully I got the same results as you. I was surprised to see the Rarewares x64 compile performing so poorly on Ryzen, which is probably why the CLANG compile by NetRanger blew me away when I ran it today. Indeed, Porcus, the results you're getting are in the same ratio on my system as well: the CLANG 16 compile was 1.5 times faster than the x64 version I procured from Rarewares. There was absolutely no change in compression, just the huge reduction in encoding time: from 23-24x straight to 37-38x on 24-bit 96 kHz files.

Even this seems slow, but compared to the past results it gives me hope that a properly optimized build for AMD is indeed possible, which could further boost the performance of FLAC.

The new release is definitely worth waiting for, and I'll stick around for it, but in the meantime, could anyone here please build a 64-bit AMD-optimised version of FLAC 1.4.2? That would be a huge help.
Title: Re: Retune of FLAC compression levels
Post by: darkalex on 2023-04-12 21:06:35
By the way, geniuses: with "senior audiophiles" I wasn't referring to people with ears made of gold... I was talking about people who are senior in age, the 50-60 year old crowd which isn't as computer-savvy as us and is possibly using their 2002 Marantz receiver because its DAC sounds warmer to them... pun intended. Equipment like that is not made with Intel processors, smh... those Motorola or other Chinese chips could indeed mess up if you give them an input or a task that is too complex. The same goes for car receivers, which are manufactured even today; except for the extreme top of the line, none of them use the latest and greatest in processing power or operating systems. A few lines of unoptimised code in their decoding mechanism and there you have a stuttering player that cannot decode a 256 MB flac file without choking thrice every minute... I have seen this happen, literally in my brand-new car and in my friends' cars... just connecting them with AirPlay through a lightning cable sometimes makes the devices hang... which is a fact.

Maybe start using a pragmatic lens and view reality, rather than throwing slurs and bookish know-how at me. No one is siding with anti-vaxxer-style audiophiles that use gold-plated RAM, but rather with actual human beings who seem to be stuck with inferiorly manufactured equipment. That equipment will still be used by these people irrespective of what you tell them, because they love the sound the device creates from other media, and they'd rather not replace their beloved DAC just because they cannot get it to play some huge file properly. The same goes for car receivers: AirPlay may very well not work in my car, but the receiver is integrated really well with all the sensors inside, due to which, no matter what anyone tells me, I'm not gonna change that device just because it performs poorly in one area. There are other things to consider as well. That's what a practical view is. Tell me which part of this does not ring true in the real world. Rather than slurring or mocking a genuine perspective, maybe develop some patience and humility. Some of these integrated devices have various other limitations as well, for instance only supporting SATA 1.0 or the older IDE hard drives. Try running benchmarks on such trash equipment: seeking in a file would crash you out in seconds. Yet these machines remain in heavy use because, barring the particular problematic aspect, they perform reliably and excellently in all other facets.

No one is saying that your work is trash or that the audiophiles that behave like karens are right; all I'm saying is: treat some of them with respect, because even they have genuine points at times. Going all psycho on them is neither a pragmatic nor a humble way to treat anyone.

All of the above is not some fairytale or a highly specific textbook-style example with no relevance to the real world, but literally a bunch of incidents that I keep running into every now and then... which is why some of those points do seem valid to me, and by now, hopefully to you as well.

I'm done posting on this particular facet; irrespective of whatever I just said about humility, I know that taking this further would only escalate things to hostile levels, which is clearly not my goal here. So I'm done talking about this topic.

Peace
Title: Re: Retune of FLAC compression levels
Post by: Porcus on 2023-04-12 21:27:08
I came across this today:

https://www.amd.com/en/developer/aocc.html which I believe is the official Ryzen optimised compiler by AMD.

I came across this CLANG 16 binary by NetRanger, and it was a night-and-day difference in performance compared to the Rarewares builds I had been using to this day: 24x speed with the Rarewares x64 build, whilst 36x with the CLANG 16 x64 build, on 24-bit 96 kHz material. Blew me away.

Hence my question about a FLAC binary that's optimized for AMD Ryzen. It would be a huge help.

could anyone here please build a 64-bit AMD-optimised version of FLAC 1.4.2? That would be a huge help.

@Wombat, does any of your compiles qualify?



PS:

By the way, geniuses: with "senior audiophiles" I wasn't referring to people with ears made of gold... I was talking about people who are senior in age, the 50-60 year old crowd which isn't as computer-savvy as us and is possibly using their 2002 Marantz receiver
That would be my gear. DAC/amp/speakers - I think "2002" is spot-on!
It handles 96 kHz/24 bit, but 32 kHz needs to be resampled.
Title: Re: Retune of FLAC compression levels
Post by: Wombat on 2023-04-12 21:56:13
My GCC compile of 1.4.2 in the thread you already linked to is still faster than a recent Clang 16 compile of mine on my Ryzen Zen 3 5900X.
Beyond Haswell, the additional CPU features don't seem to do much for the flac code.
I may build one for Zen 4 later if wanted.
Title: Re: Retune of FLAC compression levels
Post by: Replica9000 on 2023-04-12 22:32:28
On my Ryzen 5850U, building with AOCC 4.0 is the same performance as regular Clang 16.  Building with -march=znver3 or the generic x86-64-v3 doesn't make a difference either.  For 44.1/16 audio, GCC has about a 4 second advantage with my 1.2GB file over Clang with ASM optimizations disabled, and about a 4 second disadvantage compared to Clang when enabled.
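For anyone who wants to repeat this kind of comparison, the builds can be produced roughly like this - a sketch assuming the flac 1.4.2 source tarball and the standard autotools setup; the -march values are illustrative, not a recommendation:
Code: [Select]
# build the same source once with GCC and once with Clang (or AOCC's clang);
# --disable-shared keeps src/flac/flac a real binary, not a libtool wrapper
tar xf flac-1.4.2.tar.xz && cd flac-1.4.2
CC=gcc   CFLAGS="-O3 -march=native" ./configure --disable-shared && make -j
cp src/flac/flac ../flac-gcc
make distclean
CC=clang CFLAGS="-O3 -march=native" ./configure --disable-shared && make -j
cp src/flac/flac ../flac-clang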



Title: Re: Retune of FLAC compression levels
Post by: Wombat on 2023-04-12 22:41:41
This x86-64-v3 option seems even cleaner than the Haswell one for a wider range of CPUs - interesting. I may later add GCC and Clang 1.4.2 versions built with that.
Title: Re: Retune of FLAC compression levels
Post by: Porcus on 2023-04-12 23:12:14
has about a 4 second advantage
about a 4 second disadvantage compared to Clang when enabled.
Second ... per what? (Did you mean percent?)
Title: Re: Retune of FLAC compression levels
Post by: Replica9000 on 2023-04-13 00:41:07
has about a 4 second advantage
about a 4 second disadvantage compared to Clang when enabled.
Second ... per what? (Did you mean percent?)

With this specific file (44.1/16 - 1h 43m), I get these times on average (+/- ~0.5s) when encoding with -8p

Clang/AOCC: 48s
Clang (w/o ASM): 112s
GCC (w/ASM): 52s
GCC (w/o ASM): 44s
Title: Re: Retune of FLAC compression levels
Post by: darkalex on 2023-04-13 02:25:00
On my Ryzen 5850U, building with AOCC 4.0 is the same performance as regular Clang 16.  Building with -march=znver3 or the generic x86-64-v3 doesn't make a difference either.  For 44.1/16 audio, GCC has about a 4 second advantage with my 1.2GB file over Clang with ASM optimizations disabled, and about a 4 second disadvantage compared to Clang when enabled.

Could you please share this AOCC version? It's the official AMD compiler, I am keen on testing it.

Thank you very much.
Title: Re: Retune of FLAC compression levels
Post by: bennetng on 2023-04-13 08:09:45
By the way, geniuses: with "senior audiophiles" I wasn't referring to people with ears made of gold... I was talking about people who are senior in age, the 50-60 year old crowd which isn't as computer-savvy as us and is possibly using their 2002 Marantz receiver because its DAC sounds warmer to them... pun intended. Equipment like that is not made with Intel processors, smh... those Motorola or other Chinese chips could indeed mess up if you give them an input or a task that is too complex. The same goes for car receivers, which are manufactured even today; except for the extreme top of the line, none of them use the latest and greatest in processing power or operating systems. A few lines of unoptimised code in their decoding mechanism and there you have a stuttering player that cannot decode a 256 MB flac file without choking thrice every minute... I have seen this happen, literally in my brand-new car and in my friends' cars... just connecting them with AirPlay through a lightning cable sometimes makes the devices hang... which is a fact.

Maybe start using a pragmatic lens and view reality, rather than throwing slurs and bookish know-how at me. No one is siding with anti-vaxxer-style audiophiles that use gold-plated RAM, but rather with actual human beings who seem to be stuck with inferiorly manufactured equipment. That equipment will still be used by these people irrespective of what you tell them, because they love the sound the device creates from other media, and they'd rather not replace their beloved DAC just because they cannot get it to play some huge file properly. The same goes for car receivers: AirPlay may very well not work in my car, but the receiver is integrated really well with all the sensors inside, due to which, no matter what anyone tells me, I'm not gonna change that device just because it performs poorly in one area. There are other things to consider as well. That's what a practical view is. Tell me which part of this does not ring true in the real world. Rather than slurring or mocking a genuine perspective, maybe develop some patience and humility. Some of these integrated devices have various other limitations as well, for instance only supporting SATA 1.0 or the older IDE hard drives. Try running benchmarks on such trash equipment: seeking in a file would crash you out in seconds. Yet these machines remain in heavy use because, barring the particular problematic aspect, they perform reliably and excellently in all other facets.

No one is saying that your work is trash or that the audiophiles that behave like karens are right; all I'm saying is: treat some of them with respect, because even they have genuine points at times. Going all psycho on them is neither a pragmatic nor a humble way to treat anyone.

All of the above is not some fairytale or a highly specific textbook-style example with no relevance to the real world, but literally a bunch of incidents that I keep running into every now and then... which is why some of those points do seem valid to me, and by now, hopefully to you as well.

I'm done posting on this particular facet; irrespective of whatever I just said about humility, I know that taking this further would only escalate things to hostile levels, which is clearly not my goal here. So I'm done talking about this topic.

Peace
If you have to mention IDE harddrives: the earlier ones using PIO were more like under 1 GB, and my first Ultra DMA IDE harddrive was a 6.4 GB IBM one. They were just too small for anything lossless; even >192kbps lossy was a luxury. In that era I just burned CD-Rs for big files, with a SCSI PlexWriter using a caddy. All of these products were released before the birth of flac (pre-2000).

The last IDE HDD I had was a 120 GB one and the first SATA HDD I had was a 160 GB one, both Seagate 7200rpm. In practical usage regarding lossless audio playback I couldn't find any difference, and they definitely didn't have any issue when running benchmarks; the benchmark results were comparable to what other online reviewers (e.g. Tom's Hardware) got. The bad news is that the SATA one started to get bad sectors after 3 years of use, right after the warranty period, yet the 120 GB one lasted for something like ten years - or until I bought a newer motherboard that no longer had any IDE connector.

The real issue with these harddrives is when they get really fragmented and when they are nearly full. In those cases they have a high chance of choking. The whole thing has nothing to do with the transfer interface speed, except when doing benchmarks with sequential transfers backed by the drive's RAM buffer. Even the current 4 TB HDD I have only manages a sequential transfer of 190 MB/s or so, far below SATA3's (600 MB/s) interface speed. In actual use an HDD generally can't constantly maintain its maximum sequential speed, so one can expect choking when doing things like editing uncompressed HD video - but those are skyrocket bandwidths compared to audio, even for things like 768 kHz PCM or DSD1024.
 
And if you get unexpected freezes or crashes, it is not because the hardware is slow. It is not as if Bill Gates' demo PC was intentionally made crappy to trigger a BSOD; it is just compatibility issues and bugs.
https://youtu.be/IW7Rqwwth84

In the mid-to-late 90s, when I fiddled with 3D Studio MAX on the same hardware (Pentium 133, 80 MB EDO RAM, 3DLabs Permedia 2 8 MB PCI display card), Windows 95 just didn't have enough "system resources" for 3D Studio MAX's highly complicated GUI. No matter how good your hardware specs were, Windows 95 still ran out of "GDI resources", because that is the OS's software limitation. At the time Windows NT4 had higher baseline (installation) requirements than Win95, but the very same PC had no issue running 3D Studio MAX under it. 80 MB RAM was quite a lot in that era - the mainstream was more like 32 MB - but regardless of RAM size, Windows 95 just became unresponsive when GDI resources ran out, and it had nothing to do with the GPU's OpenGL specs and such, because the GPU has nothing to do with the program's very complicated GUI, with its many buttons, drop-down boxes, dialogs and so on.

https://www.techrepublic.com/article/monitor-windows-9x-me-system-resources-with-the-resource-meter/

So people have to realize where the limitations are. Performance and reliability are very different concepts; they may coexist, but they don't always.
Title: Re: Retune of FLAC compression levels
Post by: Replica9000 on 2023-04-13 11:35:42
On my Ryzen 5850U, building with AOCC 4.0 is the same performance as regular Clang 16.  Building with -march=znver3 or the generic x86-64-v3 doesn't make a difference either.  For 44.1/16 audio, GCC has about a 4 second advantage with my 1.2GB file over Clang with ASM optimizations disabled, and about a 4 second disadvantage compared to Clang when enabled.

Could you please share this AOCC version? It's the official AMD compiler, I am keen on testing it.

Thank you very much.

It seems AOCC is only available for 64-bit Linux.  I don't think I can build for Windows with it either (With GCC, I have to use MinGW to make Windows binaries).  If you're running Linux, I could share the FLAC binaries built with AOCC.
Title: Re: Retune of FLAC compression levels
Post by: Porcus on 2023-04-13 22:44:22
@ktf : Did the retune build do something to the algorithm that selects between apodization functions too?
I'm asking because I sometimes get a notable difference between 1.4.2 and the retune. I didn't say "significant" in the sense that one should care about it, but it is of course way more than the seven bytes implied by the vendor string.


As for this:

* Going full -l32 on high sampling rates:
My first reaction was: this will shock those who use "-8e". I checked one high-resolution file that now takes 4x the time of 1.4.2 - and that is even though the reduced number of apodization functions speeds up -8el12 to half the time of 1.4.2.

But the thing is, -e still has a mission at some high sampling rates; that is the problem. At least it had in 1.4.2 - I'll test this one.

To take this a bit further, I ran a five-minute-long stereo stupid-resolution 384/32 file (download link) (https://drive.google.com/file/d/1LOmPtNLQRhhmWWUz4ONshPFCZT8X-tI8/view), same as mentioned here (https://hydrogenaud.io/index.php/topic,123025.msg1024985.html#msg1024985). FLAC can compress it to below 40 percent of WAVE, yay. -l32 makes for bigger files (at three-ish times the time consumption!), but that is not the point here: the point is how the proposed -8 will likely get you complaints from -e users:
at -l 12, "-e" increases the time by a factor of five-ish; at -l 32, "-e" increases it by a factor of 11-ish.

(Actually -8el32 encodes slower than real time on an i5 CPU. Even the retune, which is three times faster due to ... stepping down the subdivide_tukey, I guess?)

-e has its merits at this sample rate: comparing 1.4.2 at -8 vs -8e, the latter saves 7% in file size. It would be good if the new order-selection algorithm could do something about that - it is easier to reply "then don't use -e" if you can point at it not doing much anymore.
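(Those timings are straightforward to reproduce - a sketch, with a hypothetical high-rate input hires.wav and the retune build on the PATH:)
Code: [Select]
# the four combinations discussed above; with the retune, plain -8
# implies -l 32 for material above 48 kHz
time flac -8 -l 12    -f -s -o /tmp/a.flac hires.wav
time flac -8 -e -l 12 -f -s -o /tmp/b.flac hires.wav
time flac -8          -f -s -o /tmp/c.flac hires.wav
time flac -8 -e       -f -s -o /tmp/d.flac hires.wav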
Title: Re: Retune of FLAC compression levels
Post by: bennetng on 2023-04-14 09:54:07
To take this a bit further, I ran a five-minute-long stereo stupid-resolution 384/32 file (download link) (https://drive.google.com/file/d/1LOmPtNLQRhhmWWUz4ONshPFCZT8X-tI8/view), same as mentioned here (https://hydrogenaud.io/index.php/topic,123025.msg1024985.html#msg1024985). FLAC can compress it to below 40 percent of WAVE, yay. -l32 makes for bigger files (at three-ish times the time consumption!), but that is not the point here: the point is how the proposed -8 will likely get you complaints from -e users:
at -l 12, "-e" increases the time by a factor of five-ish; at -l 32, "-e" increases it by a factor of 11-ish.

(Actually -8el32 encodes slower than real time on an i5 CPU. Even the retune, which is three times faster due to ... stepping down the subdivide_tukey, I guess?)

-e has its merits at this sample rate: comparing 1.4.2 at -8 vs -8e, the latter saves 7% in file size. It would be good if the new order-selection algorithm could do something about that - it is easier to reply "then don't use -e" if you can point at it not doing much anymore.
I didn't download it, but from the file name I think it is one of the files from this website:
https://samplerateconverter.com/free-audio-downloads
IIRC they are some really crappy-quality junior MIDI-sequencing stuff that makes -e shine. Generally, things recorded through a mic and a delta-sigma ADC have more high-frequency noise, which makes -e much less effective; you can then use some cheaper -A options to get better results.
Title: Re: Retune of FLAC compression levels
Post by: Porcus on 2023-04-14 12:11:05
Yes, it is from there. I was a bit reluctant to give it publicity ...

So the size impact of -e was a bit spurious - also indicated by what WavPack does: -hhx4 takes one third off the -hhx size.
On the other hand, all files at this resolution have "weird" content. And since -e often does a job at those big Hz'es, I suspect there will be users who routinely throw it in, and they might be in for some surprises from the time impact.

Another example: the 768 kHz Sound Liaison / Carmen Gomes publicity stunt. Retune build on some i5:
15 seconds for -8l12
55 seconds for -8el12, so a factor of nearly 4
30 seconds for -8 = -8l32, so -l32 doubles the time
438 seconds (!) for -8e = -8el32.

... size impact here? Fifty parts per million.
Title: Re: Retune of FLAC compression levels
Post by: bennetng on 2023-04-14 22:25:28
I think it is a perfect example of where MQA's money has gone: pay some money so that you can use some better-quality music files as demos. Talented musicians can make good music with toy-grade equipment:
https://keenonkeys.bandcamp.com/

For this "Wait for Spring", or "Wait for -e and -l32" on flac 1.4.2:
-8 -b16384 -A "subdivide_tukey(5);blackman;gauss(1e-2);flattop"

Basically, a set of narrower windows to deal with the mostly empty ultrasonic range.
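As one full command line (the input name being hypothetical), that reads:
Code: [Select]
# -8 with a 16384-sample blocksize (subset-legal above 48 kHz) and a set
# of narrower apodization windows for the mostly empty ultrasonic range
flac -8 -b 16384 -A "subdivide_tukey(5);blackman;gauss(1e-2);flattop" input.wav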