Right now one can only set a bitrate, and the WavPack encoder will introduce distortion that varies wildly.
It can even be demonstrated by encoding a sine sweep with -x3 at the lowest possible bitrate. For the most part it sounds okay, but there are several distinct bursts of noise when the frequency goes higher up.
A reasonable lossy encoder would try to distribute distortion more or less evenly across the record, so as not to spend too many bits on the parts that have less distortion than the worst part of the record.
Given the nature of WavPack, I think something simple could do the job: for example, setting a fixed signal-to-noise ratio, in dB, as a quality goal.
For example, a quality setting of 30 dB would mean that in any short stretch of the record the encoder is allowed to add distortion up to 30 dB below the momentary loudness of the record at that point in time, but no louder (unless some other setting, such as "pre-quantize", makes this goal unreachable at that moment).
When the goal is a predictable quality level, this will both prevent spending unnecessary bits on very easy parts and neutralize the most notorious killer samples by spending as many bits as needed to always meet the quality goal.
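To make the idea concrete, here is a tiny sketch of what I mean (the function name and block-based framing are my own illustration, not anything in WavPack): for each short block, measure the momentary RMS level and derive the loudest noise the encoder would be allowed to add under a fixed S/N goal.

```python
import math

def noise_ceiling(samples, quality_db):
    """Return the maximum RMS noise allowed in this short block
    under a fixed signal-to-noise quality goal of `quality_db` dB.
    Purely illustrative -- not actual WavPack behavior."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    # A quality goal of Q dB means noise must stay Q dB below the
    # momentary signal level: factor of 10^(-Q/20) in amplitude.
    return rms * 10 ** (-quality_db / 20)

# With a 20 dB goal, allowed noise is 1/10 of the block's RMS level;
# with 30 dB it is ~1/31.6, and so on -- loud passages get a higher
# noise ceiling, quiet passages a lower one, automatically.
```

The encoder would then spend however many bits each block needs to stay under that ceiling, instead of holding the bitrate constant and letting the noise float.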
I worked on a quality-based psychoacoustic VBR mode very early in the development of WavPack 4.0, but it didn't work that great and I found that tuning something like that requires a lot of work and the cooperation of many “golden-ears”.
A constant S/N ratio mode like you suggest would certainly eliminate those “bursts” of noise you describe, because it would no longer be tied to the performance of the decorrelator. Unfortunately the format does not lend itself to doing that trivially, and I believe that something that simple would actually not be any better at choosing the optimum bitrate than what we have now, and might even be worse.
For example, for low-frequency sweeps the noise needs to be much lower relative to the signal to remain inaudible (for several reasons), and WavPack's method gives this automatically (because low frequencies decorrelate so well). A simple S/N ratio control would be too noisy in that case.
I also don't have any free time right now to work on something like this, but that's another story... :)
> A simple S/N ratio control would be too noisy in that case.
It could put the signal through a weighting filter (an equal-loudness curve) that attenuates the frequencies that are harder to hear, and evaluate the level after that, thus measuring something close to the perceived loudness.
I also don't think this would need the cooperation of many “golden-ears”, as the equal-loudness curve is a more or less universally agreed-upon thing…
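As one possible weighting, the standard A-weighting curve (IEC 61672) approximates an equal-loudness contour and has a closed-form frequency response. This sketch only illustrates the shape of such a curve; whether A-weighting is the right choice for an encoder is an assumption on my part:

```python
import math

def a_weight_db(f):
    """A-weighting gain in dB at frequency f (Hz), per the standard
    IEC 61672 formula.  Noise energy would be scaled by this curve
    before comparing it against the quality goal, so noise placed at
    hard-to-hear frequencies counts for less."""
    f2 = f * f
    ra = (12194.0 ** 2 * f2 ** 2) / (
        (f2 + 20.6 ** 2)
        * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2)
    )
    return 20 * math.log10(ra) + 2.00  # +2 dB normalizes 1 kHz to 0 dB

# a_weight_db(1000) is ~0 dB, while a_weight_db(100) is ~ -19 dB:
# low-frequency noise is heavily discounted, which is exactly the
# direction needed for the low-frequency-sweep case mentioned above.
```

This directly addresses the objection about low-frequency sweeps: a weighted level meter demands much lower absolute noise at low frequencies, without any per-sample tuning by listeners.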