
Topic: FLAC v1.4.x Performance Tests

Re: FLAC v1.4.x Performance Tests

Reply #300
The manual's exposition is not so clear though:

-e, --exhaustive-model-search
    Do exhaustive model search (expensive!)

OK, so an option called "--exhaustive-model-search" does an exhaustive model search (which is expensive), but the manual never specifies what aspect of the model it exhausts. Maybe the easiest way to understand what it brute-forces (with slightly less than a fifty percent chance that @ktf will have to correct me) is to check what is already brute-forced by the other switches (-p and -r), combined with the knowledge that blocking (the -b) is done before the LPC modelling even starts, is not part of the "model search", and isn't optimized at all in current flac.exe.
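To make the distinction concrete, here is a toy sketch (not flac's actual code, and the numbers are made up) of what -e conceptually changes: without it, the encoder picks a predictor by a cheap per-candidate bit *estimate*; with -e it actually produces each candidate encoding and keeps the smallest one.

```python
# Hedged sketch of "estimate vs. exhaustive" model selection.
# Neither function is from flac's source; both just pick the candidate
# order with the lowest cost, differing only in which cost they trust.

def pick_order_estimated(candidate_orders, estimate_bits):
    # Fast path: trust a cheap per-order bit estimate.
    return min(candidate_orders, key=estimate_bits)

def pick_order_exhaustive(candidate_orders, encode_bits):
    # "-e" path: actually encode with each order and measure the real size.
    return min(candidate_orders, key=encode_bits)

# Toy stand-in cost tables (hypothetical values, chosen to show the
# estimate and the real measurement disagreeing on the winner).
est = {1: 900, 2: 850, 3: 870}.get
real = {1: 880, 2: 860, 3: 840}.get

assert pick_order_estimated([1, 2, 3], est) == 2   # estimate prefers order 2
assert pick_order_exhaustive([1, 2, 3], real) == 3  # measuring prefers order 3
```

The point of the sketch: the exhaustive path is guaranteed to find the smallest encode among the candidates, but it pays the full encoding cost for every candidate it rejects.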

To a novice reader it might even be a bit confusing that precision can be set exactly with -q, and partitioning not only exactly but also within a range with -r, while the LPC order (the amount of history taken into account!) can only be capped at a maximum with -l. I'm not saying that allowing a construction like -l 10,12 would be good for an end-user for anything beyond explaining something an end-user doesn't need to deal with (though it could be fun for testing).
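A hypothetical -l min,max (which current flac does not accept; this is purely illustrative) would just narrow the set of orders searched, the way -r min,max does for partition orders:

```python
# Hypothetical parser for a "-l 10,12"-style range. Real flac's -l takes
# only a maximum; the "min,max" form here is an invented illustration.

def orders_to_try(spec: str):
    """Turn 'max' or 'min,max' into the list of LPC orders to search."""
    parts = [int(p) for p in spec.split(",")]
    lo, hi = (1, parts[0]) if len(parts) == 1 else (parts[0], parts[1])
    return list(range(lo, hi + 1))

assert orders_to_try("12") == list(range(1, 13))  # like -l 12: orders 1..12
assert orders_to_try("10,12") == [10, 11, 12]     # the invented -l 10,12
```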


As for error ... the error used to select the model is not the size of the encode; that would amount to brute-forcing. The encoded size is what you want to minimize, but not what you want to compute during the search - you want something quicker. Hence the question of whether there is some better way to do it that is still quick enough. There is some theoretical support for a logarithmic size measure, and since discrete logs can be obtained by bit-shift, it should be possible ... but "theoretical support" does not mean that it improves much on actual data over a method that has been tweaked to the point where it by and large works quite well.
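The "logarithmic measure via bit-shift" idea can be sketched like this (a crude illustration of the principle, not flac's estimator): Rice-coded residual cost grows roughly with log2 of the mean absolute residual, and an integer log2 is just a bit length, which is cheap to compute.

```python
# Rough cost model: a partition of n residuals with mean magnitude m
# costs on the order of n * (log2(m) + overhead) bits under Rice coding.
# The constants here are invented; only the shape of the estimate matters.

def ilog2(x: int) -> int:
    # Discrete log2 via bit length (a shift/count operation, no floats).
    return max(x.bit_length() - 1, 0)

def estimated_bits(residuals) -> int:
    """Crude per-partition size estimate from the mean |residual|."""
    n = len(residuals)
    total = sum(abs(r) for r in residuals)
    mean = total // n if n else 0
    return n * (ilog2(mean) + 2)  # +2: hand-waved unary/sign overhead

# A model leaving smaller residuals should score a smaller estimate.
assert estimated_bits([1, 2, 1, 3]) < estimated_bits([100, 90, 120, 80])
```

This is exactly the kind of measure that is cheap enough to use inside the search loop, while the true encoded size is only known after actually encoding (which is what -e resorts to).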


Quote: "order of 12 (this is default for compression level 12)"
Level -8. (And -7.)