
wvCheck S/N analysis tool

wvCheck

wvCheck flags spots within WavPack lossy files that have a bad S/N ratio.

Unzip wvCheck.zip to any folder and put the .wv files together with their correction (.wvc) files into the subfolder wvPack.

S/N ratio is measured here simply as the RMS of the signal divided by the RMS of the error (= original minus encoded signal) over a 220 sample window (roughly 5 msec at 44.1 kHz). Both the original and the encoded signal are bandpassed to 800 Hz - 10 kHz before processing to concentrate on the frequency range where our ears are most sensitive (and to keep wvCheck from reporting a good S/N ratio merely because of energy-rich low frequencies).
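The windowed measurement described above can be sketched in Python (a minimal illustration, not wvCheck's actual code; the inputs are assumed to be already bandpassed, and the window size assumes 44.1 kHz material):

```python
import math

WINDOW = 220  # samples per analysis window, roughly 5 ms at 44.1 kHz


def rms(samples):
    """Root mean square of a window of samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))


def window_snr(original, encoded):
    """Yield (window_index, S/N) for each full 220-sample window, where
    S/N = RMS(original) / RMS(original - encoded).
    Both inputs are assumed to be bandpassed to 800 Hz - 10 kHz already."""
    n = min(len(original), len(encoded))
    for start in range(0, n - WINDOW + 1, WINDOW):
        orig = original[start:start + WINDOW]
        error = [o - e for o, e in zip(orig, encoded[start:start + WINDOW])]
        err_rms = rms(error)
        snr = math.inf if err_rms == 0 else rms(orig) / err_rms
        yield start // WINDOW, snr
```

With the configuration below, a window would count as hurting when this ratio falls under min_S/N_allowed (or min_S/N_allowed_HiVolume for loud passages).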
The bandpass behavior can be changed in DoUnWv.bat, and there's a wvCheck.ini for configuring the S/N reporting thresholds:

[Precision]
min_S/N_allowed=64
min_S/N_allowed_HiVolume=96

[PrecisionDetails]
consecutive_hurting_windows_allowed=2
HiVolume_start=500
windowErr_allowed=4

[Options]
generateErrfile=0
checkForJustOneError=0
startCheckWithSecond=0
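wvCheck reads these settings itself; purely as an illustration for anyone experimenting with the thresholds, the same ini can be parsed with Python's configparser (a sketch, not part of wvCheck):

```python
import configparser

# The wvCheck.ini contents as shown above.
INI_TEXT = """
[Precision]
min_S/N_allowed=64
min_S/N_allowed_HiVolume=96

[PrecisionDetails]
consecutive_hurting_windows_allowed=2
HiVolume_start=500
windowErr_allowed=4

[Options]
generateErrfile=0
checkForJustOneError=0
startCheckWithSecond=0
"""

cfg = configparser.ConfigParser()
cfg.read_string(INI_TEXT)

min_snr = cfg.getint('Precision', 'min_S/N_allowed')
min_snr_hi = cfg.getint('Precision', 'min_S/N_allowed_HiVolume')
hi_volume_start = cfg.getint('PrecisionDetails', 'HiVolume_start')
generate_errfile = cfg.getboolean('Options', 'generateErrfile')
```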

There's a distinction between 'normal' and 'high' volume (the borderline is given by HiVolume_start).
The limits for the allowed S/N are given by min_S/N_allowed and min_S/N_allowed_HiVolume.
There's also a very low absolute error floor that is always tolerated, according to windowErr_allowed.
A bad S/N ratio is not necessarily reported when just one 220 sample window is considered bad. Instead, consecutively hurting windows are tolerated up to consecutive_hurting_windows_allowed; only when a run of that many consecutively hurting windows is found is it reported as a spot with bad S/N.
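One plausible reading of this consecutive-window rule can be sketched as follows (an assumption about the mechanism, not wvCheck's actual code):

```python
def bad_spots(snr_per_window, min_snr, consecutive_allowed):
    """Return the starting window index of every run of at least
    `consecutive_allowed` consecutive windows whose S/N is below
    `min_snr`; each qualifying run is reported once."""
    spots = []
    run_start = None
    run_len = 0
    for i, snr in enumerate(snr_per_window):
        if snr < min_snr:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len == consecutive_allowed:
                spots.append(run_start)
        else:
            run_len = 0
    return spots
```

With min_snr=64 and consecutive_allowed=2, a single bad window is ignored, while two or more bad windows in a row are reported as one spot.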
There is an option to have an error file created (with generateErrfile=1), so you can listen to the error wav file of the tracks under investigation.
Setting checkForJustOneError=1 makes wvCheck stop after the first bad S/N is found.
Setting startCheckWithSecond to a specific second (one decimal place is allowed) makes wvCheck begin checking at that position.
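Assuming 44.1 kHz material, mapping such a start second to the first analysis window could look like this (an illustrative assumption, not wvCheck's code):

```python
SAMPLE_RATE = 44100  # assumed CD sample rate
WINDOW = 220         # samples per analysis window

def first_window(start_second):
    """Map startCheckWithSecond (one decimal place) to the index of
    the first 220-sample window that gets analysed."""
    return int(round(start_second * SAMPLE_RATE)) // WINDOW
```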

The protocol (onscreen and in protocol.txt) shows, for each track, the number of bad S/N situations together with the spots with the worst S/N within the normal as well as the high volume class.

A bad S/N at a spot doesn't mean that there is an audible issue at that spot.
On the other hand, very high demands on S/N do guarantee transparency.
I did a lot of listening tests with the configuration above (and a lot of other ones), and from this experience I'd say this configuration is sufficient to make good quality pretty sure - though judging merely from the required S/N, it's not very demanding.
It is a bit of a compromise, but pretty much on the demanding side when it comes to practical usage:

With popular music and WavPack in the 300-400 kbps range, nearly any sample shows some hundreds to thousands of S/N issues according to this wvCheck configuration - and most of the time the reported spots don't have an audible impact (judging from the restricted number I've checked).
So wvCheck is hardly usable as a global quality control (with music coming from few instruments/voices the situation is better).
The situation is much better when the check is done locally within the encoder, because hundreds or even thousands of bad S/N spots amount to just a few seconds of music in total, and locally applying a seriously higher bitrate (in many cases necessary to get a good S/N ratio) should not increase total file size by much.

So wvCheck is something like an analysis tool of restricted practical value.
I thought a lot about using wvCheck in one relaxed way or another for controlling quality as a user, but in the end I came to the conclusion that the only useful solution is to do such a quality control within the encoder. Maybe my simple analysis mechanism can help a little bit in developing such an encoder quality control.

I did a lot of listening in the past days and weeks trying to find out how to encode for my DAP, and in the end I'll use -fb350x6s0 after applying a 16.5 kHz lowpass, as I don't care about higher frequencies. The lowpass kind of softens the differences from sample to sample, making it easier for the predictor to work well. The effect is significant only in rare situations, however. Furious gets to the edge of transparency (for me) this way using -fb350x6s0, and even badvilbel becomes acceptable.
lame3995o -Q1.7 --lowpass 17