Lossless / Other Codecs / Re: HALAC (High Availability Lossless Audio Compression)
Last post by Hakan Abbas - On the other hand, blocks that are either too small or too large do not usually give good results with the other statistical entropy coders (Huffman, arithmetic coding, or ANS). Of course, I'm talking about order-0 coding. There is additional overhead for each block, more than with Rice coding, and it also reduces processing speed. If larger blocks are selected, the alphabet may grow (the number of distinct symbols may increase). In other words, the distribution may become more uniform, which is not desirable.
This is how HALAC works for now: 4096-sample blocks are used in -normal mode and 8192/16384-sample blocks in -fast mode. Any size can be used, provided it is not taken to extremes. Contrary to popular belief, statistical entropy coders are not very efficient on audio data, and making them efficient is not easy; it means extra processing load.
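The block-size tradeoff described above can be sketched with a toy order-0 model: the coded size of each block is estimated as its Shannon entropy plus a fixed per-block header cost. This is a generic illustration, not HALAC's actual cost model, and the header figure is purely hypothetical.

```python
import math
import random
from collections import Counter

def order0_entropy_bits(block):
    """Shannon order-0 entropy of a block, in total bits."""
    n = len(block)
    counts = Counter(block)
    return sum(-c * math.log2(c / n) for c in counts.values())

def coded_size_bits(data, block_size, header_bits_per_block=2048):
    """Estimated coded size: per-block entropy plus a fixed per-block
    header cost (hypothetical figure for the frequency table etc.)."""
    total = 0.0
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        total += order0_entropy_bits(block) + header_bits_per_block
    return total

# Toy residual-like data: small pseudo-random values.
random.seed(0)
data = [random.getrandbits(4) for _ in range(1 << 16)]

# Smaller blocks pay the header cost more often; very large blocks
# can flatten the per-block distribution.
for bs in (256, 4096, 65536):
    print(bs, round(coded_size_bits(data, bs)))
```

With very small blocks the per-block header dominates, which is the extra load the post refers to; the opposite effect (alphabet growth in large blocks) depends on the actual signal statistics.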
I don't normally share results for things I haven't finished, but I tried to roughly show the difference between Rice coding (my own implementation) and ANS coding below. The Rice parameter was calculated very simply. Even if the speed difference should hold in general, the compression-ratio result is not valid for all kinds of audio data.
Code:
BUSTA RHYMES(i7 3770k, 20 tracks, Total 829,962,880 bytes)
HALAC NORMAL 1 Thread (ANS) : 4.32 sec, 574,161,601 bytes
HALAC NORMAL 1 Thread (RICE): 3.66 sec, 567,469,774 bytes
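For readers unfamiliar with it, here is a minimal sketch of Rice coding with a simple mean-based parameter estimate. This is a generic textbook-style illustration, not HALAC's actual implementation; the zigzag mapping for signed residuals is likewise a common convention, assumed here.

```python
def zigzag(x):
    """Map signed residuals to non-negative integers: 0,-1,1,-2,... -> 0,1,2,3,..."""
    return (x << 1) ^ (x >> 63)

def rice_parameter(values):
    """Pick k so that 2**k is near the mean magnitude -- one simple
    way to calculate the parameter, as mentioned in the post."""
    mean = sum(values) / max(len(values), 1)
    k = 0
    while (1 << k) < mean:
        k += 1
    return k

def rice_encode(value, k):
    """Rice code of a non-negative integer:
    unary quotient (terminated by 0) followed by a k-bit remainder."""
    q = value >> k
    bits = "1" * q + "0"
    if k:
        bits += format(value & ((1 << k) - 1), f"0{k}b")
    return bits
```

For example, with k = 2 the value 9 splits into quotient 2 and remainder 1, giving the code "11001". A poorly chosen k makes the unary part explode, which is why the parameter estimate matters even when it is kept "very simple".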