
Recent Posts
2
FLAC / Re: FLAC v1.4.x Performance Tests
Last post by bennetng -
Quote
Is it obvious that you should scan the entire file, when blocks are encoded independently?
Then the encoder would need to check for such redundancies for every block of incoming data, which would be slower than a full file scan.

Quote
Anyway, once someone wants to make anything out of a two-pass approach, then one could very well take a flac file as input
Then previously encoded FLAC files can serve as the first pass, but encoding from PCM or transcoding from other formats won't benefit from it.

So, who wants to do it? Multipass encoding is quite common in video codecs, which are usually lossy.
3
FLAC / Re: FLAC v1.4.x Performance Tests
Last post by Porcus -
Quote
Some characteristics of a file, like EBU loudness stats and this kind of resampling, can be obtained much more cheaply by scanning the whole file first. It should be possible to make a separate app to gather this information and then generate a suggested command-line batch file to feed the encoder.
So, those with the skills, passion and patience can try to make such a 3rd party tool. Even if it were built into the main project, it would still require a 2-pass approach.
Is it obvious that you should scan the entire file, when blocks are encoded independently? Sure if you find patterns that way, but ...
Anyway, once someone wants to make anything out of a two-pass approach, then one could very well take a flac file as input, read the predictor vector, and rather than starting to encode from scratch, (1) try to improve on the predictor that is already stored, and (2) keep it if one doesn't find anything better.

Cf. ktf's attempt at IRLS post-processing: IIRC, if the reweighting pass did not improve things, it would be discarded in the end - unlike what I have seen with ffmpeg, where running more than a few passes would often make for worse compression.
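The (1)/(2) logic is easy to sketch. Roughly, in Python - parse_predictor(), lpc_search() and encoded_size() are made-up helper names standing in for frame parsing, predictor search and residual costing, not actual libFLAC calls:

Code
# Reuse-the-stored-predictor idea; the three helpers are hypothetical
# stand-ins for FLAC frame parsing, predictor search and residual costing.
def refine_subframe(pcm_block, old_frame, parse_predictor, lpc_search, encoded_size):
    stored = parse_predictor(old_frame)        # predictor from the existing .flac
    baseline = encoded_size(pcm_block, stored)
    candidate = lpc_search(pcm_block)          # (1) try to improve on it
    if encoded_size(pcm_block, candidate) < baseline:
        return candidate
    return stored                              # (2) keep it if nothing better turns up

The keep-if-not-better step is what guarantees a further pass can never make the frame larger - the property the ffmpeg multi-pass runs apparently lacked.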
6
FLAC / Re: FLAC v1.4.x Performance Tests
Last post by bennetng -
Some characteristics of a file, like EBU loudness stats and this kind of resampling, can be obtained much more cheaply by scanning the whole file first. It should be possible to make a separate app to gather this information and then generate a suggested command-line batch file to feed the encoder.
So, those with the skills, passion and patience can try to make such a 3rd party tool. Even if it were built into the main project, it would still require a 2-pass approach.
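As a starting point, such a tool only needs one scan and then prints a suggested command line. A rough Python sketch - the 30-second window, the energy threshold and the suggested options are just illustrative guesses, not tuned values:

Code
# One-shot analysis pass: inspect the input, then print a suggested
# flac command line.  Thresholds and option choices are examples only.
import sys
import numpy as np
import soundfile as sf   # pip install soundfile

path = sys.argv[1]
data, rate = sf.read(path, always_2d=True)
mono = data.mean(axis=1)[: rate * 30]        # a 30-second excerpt is enough for a guess

# Energy share of the top half of the spectrum; near-empty ultrasonics
# suggest resampled material where a larger block size and an extra
# apodization window tend to pay off.
spectrum = np.abs(np.fft.rfft(mono))
ratio = spectrum[len(spectrum) // 2 :].sum() / (spectrum.sum() + 1e-12)

if rate > 48000 and ratio < 1e-4:
    opts = '-8 -b 8192 -A "subdivide_tukey(3);blackman"'
else:
    opts = '-8'
print(f'flac {opts} "{path}"')

The expensive inspection happens entirely outside the encoder, so flac itself still runs single-pass on the suggested settings.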
7
FLAC / Re: FLAC v1.4.x Performance Tests
Last post by Porcus -
Quote
It is pretty easy to kill -e too.
Pretty easy to generate signals where -e doesn't make any improvement? Sure.
But to find a model selection algorithm that (for your typical high resolution download from artist x or label y) makes -e redundant - that is something else.

As for what my computer is going to run overnight: I had the files initially written to flac -1, and encoding them as -8b<N> -A <your choices> seems to save more, percentage-wise, for 192 than for 384 or 96. Whether that is because the model selection works well up to a certain rate and badly from there, I don't know.
9
FLAC / Re: FLAC v1.4.x Performance Tests
Last post by bennetng -
It is pretty easy to kill -e too. The plot is configured so that signal below the 16-bit noise floor appears black:
[attached plots]

-8: 23.6 GB (25365043448 bytes)
-8e: 23.5 GB (25329260329 bytes)
-8b8192 -A "subdivide_tukey(3);blackman": 23.4 GB (25181478740 bytes)

So, a 16/176.4 conversion using shaped dither makes -e a waste of time.
10
Other Lossy Codecs / Re: Descript Audio Codec (.dac) - 90x smaller than .wav?
Last post by Porcus -
Quote
90x smaller than wav, huh? Assuming they mean 16/44.1 PCM wav files, that'd be somewhere in the ballpark of 16 kbps, right?
Yeah, except: the sample files are mono.
So that means CDDA encoded at 16 kbps as dual mono, without any stereo decorrelation. I have not bothered to look up whether they have a stereo decorrelation algorithm (yet), but obviously there is room for improvement - and also an opportunity to spend more processing power.
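The arithmetic behind that, for the record:

Code
# Back-of-envelope check of the "90x smaller than wav" figure.
stereo_wav = 44100 * 16 * 2 / 1000   # 1411.2 kbps for 16/44.1 stereo PCM
mono_wav = 44100 * 16 / 1000         # 705.6 kbps for the mono sample files
print(stereo_wav / 90)               # ~15.7 kbps if "90x" referred to stereo
print(mono_wav / 90)                 # ~7.8 kbps per channel for the mono files

So dual-mono CDDA would come out around 2 x 7.8, roughly 16 kbps, with nothing gained from channel correlation.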

Quote
I'm going to go out on a limb and say it either sounds bad
Well, you can test it ...? Although the samples are not that interesting ...
Quote
or is completely impractical for most use cases.
As of now? Sure.