I know an open source project can become quite a lot of pressure once it starts becoming popular; you should read the flames we receive when a new eMule version screws something up and people lose their partial Anime XXX downloads. Best wishes to your family, stay away from public transportation, and I hope you find a job (one that allows some free time for WavPack, though :))
The noise really shouldn't be any more audible in quiet parts than in louder parts, because the noise is always scaled to the signal (lower the level 6 dB and the added noise drops 6 dB too), and at very low levels the coder will actually go lossless if it has enough bits to do so. What I think is that the noise is more audible when there's less going on in the music (more "air" around the instruments) and less audible when there's stuff going on all up and down the spectrum. In this way it really works the opposite way from conventional codecs, which have the worst time with complex music but shine with simple material because they can pour all their bits into the "active" subbands. Perhaps den can comment on this as well.
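If you want to see the 6 dB tracking for yourself, here is a little C toy (nothing to do with the actual WavPack quantizer; the sine input and the 1/16 ratio are arbitrary choices of mine) where the quantization step is a fixed fraction of the signal level but never less than 1 LSB:

    /* Toy demo of noise that scales with the signal -- not WavPack code.
     * The quantization step is a fixed fraction of the signal level, so
     * dropping the signal 6 dB drops the added noise about 6 dB too, and
     * once the step hits 1 LSB the integer samples are stored exactly,
     * i.e. lossless. */
    #include <stdio.h>
    #include <math.h>

    #define PI 3.14159265358979323846

    static double noise_rms (double level)
    {
        double err2 = 0.0;
        int n = 10000;

        for (int i = 0; i < n; ++i) {
            /* integer PCM sample of a 440 Hz sine at the given level */
            double x = round (level * sin (2.0 * PI * 440.0 * i / 44100.0));

            /* hypothetical rule: step = level / 16, but never below 1 LSB */
            double step = fmax (1.0, level / 16.0);
            double q = step * round (x / step);

            err2 += (x - q) * (x - q);
        }

        return sqrt (err2 / n);
    }

    int main (void)
    {
        for (double level = 16384.0; level >= 4.0; level /= 2.0)  /* -6 dB per line */
            printf ("signal level %8.0f   noise rms %8.2f\n", level, noise_rms (level));

        return 0;
    }

Its noise rms roughly halves each time the signal drops 6 dB, until the step is pinned at 1 LSB and the integer samples come back exactly, which is the "goes lossless at very low levels" case.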
Certainly WavPack is getting a bit more attention, but hopefully David doesn't have too much to worry about. His cat's appearance is getting quite well known though, so it may have to hide from public view, should any problems arise...
In the hybrid mode the user's kbps number is converted to a number of bits per sample (for example, 320 kbps = 3.63 bits/sample) and we only store the residual with as much resolution as we can given that average number of bits. So, if the error is running with an average magnitude of 100 and we are allowed 3.63 bits per sample, then we can store the errors with an accuracy of about +/-20. Note that if a big error comes along we use more bits to store that sample, while samples close to zero require fewer bits, but every sample is stored with the same accuracy and we achieve the average bitrate.

If a transient comes along and the average residual value goes up suddenly, we will store the first few samples with a lot of extra bits to maintain the accuracy, but then the exponentially lagging average will start going up and we will store with less and less accuracy until we hit the target bitrate again. When the average is falling (after the transient) we will be storing fewer bits because the average will be high (it always lags), and this will balance the extra bits we stored at the beginning. It's actually pretty interesting how it can maintain the average bitrate to within about 1% over the long term even though it's completely open-loop (no feedback).
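To make that concrete, here is a rough C sketch of the scheme I described. It is not WavPack's actual code, and the smoothing constant, the step-size rule, and the bit-cost estimate are made-up stand-ins, but it shows the two pieces: the kbps target turned into a fixed bits-per-sample budget, and an exponentially lagging average of the residual magnitude choosing the quantization step with no feedback from the bits actually produced:

    /* Toy sketch of the open-loop bitrate control described above -- not
     * WavPack's real code.  The kbps target becomes a fixed bits-per-sample
     * budget, and an exponentially lagging average of the residual magnitude
     * chooses the quantization step for each sample. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    int main (void)
    {
        const double kbps = 320.0, sample_rate = 44100.0;
        const int channels = 2;

        /* 320000 bps / (44100 * 2) samples per second ~= 3.63 bits/sample */
        double bits_per_sample = kbps * 1000.0 / (sample_rate * channels);

        double lag_avg = 100.0;     /* exponentially lagging average of |residual| */
        const double decay = 0.99;  /* hypothetical smoothing constant              */

        for (int i = 0; i < 3000; ++i) {
            /* pretend residual stream: quiet, a loud transient, quiet again */
            double mag = (i >= 1000 && i < 1100) ? 800.0 : 100.0;
            double residual = mag * ((double) rand () / RAND_MAX - 0.5) * 2.0;

            /* rough rule: pick the step so residuals near lag_avg cost about
             * bits_per_sample bits; a big sample simply costs more bits and a
             * small one fewer, but the accuracy (the step) is the same        */
            double step = fmax (1.0, lag_avg / pow (2.0, bits_per_sample - 1.0));
            long coded = lround (residual / step);
            double est_bits = log2 (labs (coded) + 2.0);   /* crude cost model */

            /* open loop: the average follows the input only, never the output */
            lag_avg = decay * lag_avg + (1.0 - decay) * fabs (residual);

            if (i % 200 == 0)
                printf ("sample %4d  lag_avg %7.1f  step %6.1f  est bits %4.1f\n",
                        i, lag_avg, step, est_bits);
        }
        return 0;
    }

When you run it you can see the first transient samples cost extra bits (big residuals against a still-small step), then the lagging average and the step climb until the per-sample cost settles back down, and right after the transient the still-elevated average makes the quiet samples cheap, which is what balances the books over the long term.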