I had never considered the idea of compressing according to the content by letting the user play with block sizes, because I always try to keep things as adaptive as possible and find a middle ground based on the context. Otherwise it would be nice to expose a few parameters, generate dozens of different results for testing, and pick the best one. But ordinary users aren't interested in that in everyday use, and it's a time-consuming process whose benefit varies a lot from case to case. Of course, for LossyWav this may be necessary by its nature.
Below is what happens when I simply halve the block size in my own codec (HALAC), from 4096 to 2048 samples. The results improve. As I said, in this form no content-adaptive decision is being made yet, and it is still unnecessarily fast. The block size could be reduced even further, but then the amount of data available to the entropy coder shrinks, which hurts compression. It would be more efficient to group several blocks together before entropy coding, and it would also be more accurate to process the samples as 8 + 8 bits; a rough sketch of that idea follows the results below. The only problem for me right now is the processing speed of LossyWav.
Sean Paul (block size 4096 -> 2048, output sizes in bytes)
01 - Riot : 15,802,208 -> 15,555,518
02 - Entertainment : 18,723,000 -> 18,365,454
03 - Want Dem All : 15,340,527 -> 15,120,997
04 - Hey Baby : 14,494,502 -> 14,403,168
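
To make the block-grouping and 8 + 8-bit idea a little more concrete, here is a minimal sketch. It is not HALAC's actual code: it assumes "8 + 8 bits" means splitting each 16-bit sample into high and low byte planes, and it groups several 2048-sample blocks into one chunk before the entropy-coding stage. The block size, group size, toy test signal, and the zero-order entropy estimate standing in for the real coder are all illustrative assumptions.

    /* Sketch only: group small blocks and split 16-bit samples into
     * two 8-bit planes before entropy coding. Not real HALAC code. */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    #define BLOCK_SIZE   2048  /* assumed: the smaller block size from the test above */
    #define GROUP_BLOCKS    4  /* assumed: how many blocks are grouped per coding call */

    /* Stand-in for the real entropy coder: report the zero-order entropy
     * (bits per byte) of a byte stream so the 8+8 split can be eyeballed. */
    static double byte_entropy(const uint8_t *buf, size_t n)
    {
        size_t hist[256] = {0};
        double h = 0.0;
        for (size_t i = 0; i < n; ++i) hist[buf[i]]++;
        for (int s = 0; s < 256; ++s) {
            if (hist[s]) {
                double p = (double)hist[s] / (double)n;
                h -= p * log2(p);
            }
        }
        return h;
    }

    /* Split 16-bit samples into high/low byte planes and process a whole
     * group of blocks at once, so the coder still sees enough data even
     * though the individual blocks are smaller. */
    static void encode_group(const int16_t *samples, size_t count)
    {
        uint8_t *hi = malloc(count);
        uint8_t *lo = malloc(count);
        for (size_t i = 0; i < count; ++i) {
            uint16_t u = (uint16_t)samples[i];
            hi[i] = (uint8_t)(u >> 8);    /* upper 8 bits  */
            lo[i] = (uint8_t)(u & 0xFF);  /* lower 8 bits  */
        }
        printf("group of %zu samples: hi-plane %.2f bpB, lo-plane %.2f bpB\n",
               count, byte_entropy(hi, count), byte_entropy(lo, count));
        free(hi);
        free(lo);
    }

    int main(void)
    {
        /* Toy signal: a slowly varying tone standing in for real samples. */
        size_t n = BLOCK_SIZE * GROUP_BLOCKS;
        int16_t *samples = malloc(n * sizeof *samples);
        for (size_t i = 0; i < n; ++i)
            samples[i] = (int16_t)(1000.0 * sin(i * 0.01));
        encode_group(samples, n);
        free(samples);
        return 0;
    }

Compiled with something like "gcc -O2 sketch.c -lm", this only prints the entropy of each byte plane for the grouped chunk; in a real encoder an actual entropy coder would run on those two planes instead of just measuring them.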