Just thought I'd report my results, and hopefully help anyone who's thinking of reencoding their whole library from -c to -cc...
I have a total of 7,538 WV files (+WVC) and I was able to save 1,485,649,985 bytes (about 1.5GB) of data just by using -cc instead of -c. I know this might be a small amount for most people, but it matters to me.
My library statistics:
Total length: 1wk 6d 17:58:27.381
Sample rate: 44100 Hz (89.8%); 48000 Hz (9.6%); 96000 Hz (0.3%); 88200 Hz (0.2%)
Channels: Stereo (97.4%); Mono (2.6%)
Bits per sample: 16-bit (98.6%); 24-bit (1.4%)
The reencoded size is: 28.6GB
The tracks that lost the most bytes after being recompressed with -cc are:
K:\Kylefile\Music\Sampled Music\Rock\before light\02 decide.wv (24-bit 96kHz Stereo) (shrunk 17MB)
K:\Kylefile\Music\Sampled Music\Rock\before light\02 decide.wv (24-bit 96kHz Stereo) (shrunk 16MB)
K:\Kylefile\Music\Sampled Music\Rock\before light\12 before light.wv (24-bit 96kHz Stereo) (shrunk 15MB)
The tracks that fared worst after being recompressed with -cc (they actually grew) are:
K:\Kylefile\Music\Sampled Music\Chiptune\Antialias XM\00 Sleeping waste.wv (16-bit 48kHz Stereo) (gained 2MB) (Shouldn't it become smaller after -cc?)
K:\Kylefile\Music\Sampled Music\Rock\before light (CD)\11. breath.wv (24-bit 96kHz Stereo) (gained 573KB)
K:\Kylefile\Music\Sampled Music\Rock\before light (CD)\06. weep.wv (24-bit 96kHz Stereo) (gained 523KB)
On average, I saved 197 kB per file.
Just posting here in case someone is thinking of reencoding their whole library using -cc.
Happy WavPack-ing :D
> My library statistics:
> Total length: 1wk 6d 17:58:27.381
> Sample rate: 44100 Hz (89.8%); 48000 Hz (9.6%); 96000 Hz (0.3%); 88200 Hz (0.2%)
> Channels: Stereo (97.4%); Mono (2.6%)
> Bits per sample: 16-bit (98.6%); 24-bit (1.4%)
> The reencoded size is: 28.6GB
Under thirty gigabytes for two weeks of music? Are you sure you didn't mean "128.6"? What is the material?
Probably it's only the lossy part, without the correction files?
> Under thirty gigabytes for two weeks of music? Are you sure you didn't mean "128.6"? What is the material?
LOL, my Python script didn't take the .WVC files into account.
Running the fixed Python script gave 137,239,088,141 bytes (approx 137GB) total, including both WV and WVC.
Sorry for the confusion. Will edit the original post.
EDIT:
Couldn't edit original post.
BTW, the source material is CD, and the genre is mostly rock and metal, with some chiptune, EDM, and other genres.
Besides that, there were no other errors in the numbers.
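For anyone curious, the tally itself only takes a few lines of Python. This is just a minimal sketch of the idea (not my actual script, which also gathers the other statistics); the key point is summing both the .wv and .wvc files so the correction files aren't missed:

```python
from pathlib import Path

def library_size(root):
    """Total bytes of all .wv files plus their .wvc correction files."""
    return sum(p.stat().st_size
               for p in Path(root).rglob("*")
               if p.suffix.lower() in {".wv", ".wvc"})
```

Run it once before and once after reencoding and subtract the two totals to get the savings.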
So you saved around 1.1 percent of the encoded size, which would be around 0.7 percentage points of the unencoded size, since your library is pretty much CDDA on average. So how much is 0.7 percentage points? You can look at these charts:
https://xiph.org/flac/comparison.html (bottom)
https://postimg.org/image/70wmq65nz/
The latter was done on more rock-focused music: https://hydrogenaud.io/index.php/topic,97310.0.html
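In case anyone wants to check that arithmetic, here's how the two figures fall out (a rough estimate that treats the whole library as 16-bit/44.1 kHz stereo CDDA, which it nearly is):

```python
saved = 1_485_649_985            # bytes saved by -cc (reported above)
encoded = 137_239_088_141        # total WV+WVC size (reported above)

# Library length: 1wk 6d 17:58:27 = 1,187,907 seconds
seconds = 13 * 86400 + 17 * 3600 + 58 * 60 + 27
cdda_rate = 44100 * 2 * 2        # bytes/sec for 16-bit stereo PCM
uncompressed = seconds * cdda_rate  # roughly 209.5 GB

pct_of_encoded = 100 * saved / encoded            # ~1.1 %
pp_of_uncompressed = 100 * saved / uncompressed   # ~0.7 percentage points
```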
> So you saved around 1.1 percent of the encoded size, which would be around 0.7 percentage points of the unencoded size, since your library is pretty much CDDA on average. So how much is 0.7 percentage points? You can look at these charts:
> https://xiph.org/flac/comparison.html (bottom)
> https://postimg.org/image/70wmq65nz/
> The latter was done on more rock-focused music: https://hydrogenaud.io/index.php/topic,97310.0.html
Not to mention the sheer amount of time he/she spent on that endeavour.
True, the difference is almost nothing XD
Reencoding took about a week to finish.
But as long as it's compressed as well as it can be (I mean -ccx6, but not -h), I don't really mind the wait--I already know that WavPack performs a bit worse than FLAC, especially in hybrid lossless mode (interestingly enough, WavPack seems to do better on non-CDDA audio, but I haven't tested this extensively, so I can't be sure). However, WavPack's hybrid mode is very useful for me--it means I can transfer songs to my phone without reencoding them (and I don't mind the 200kbps lossy quality of WavPack--I can't notice the difference at all anyway when I'm just listening casually).
Also, I just have an Intel Celeron N2830-based laptop, so it's pretty slow, but I'm not in a hurry anyway XD. I made a Python script to convert the files for me; I can pause and resume the conversion anytime while it keeps my progress and computes statistics for me.
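The pause/resume part is simpler than it sounds: just keep a file of finished paths and skip them on restart. This is a stripped-down sketch of the idea, not my actual script--the progress filename, the -b256 bitrate, and the library path are all illustrative, and it decodes with wvunpack before reencoding with -cc -x6:

```python
import json, subprocess
from pathlib import Path

DONE_FILE = Path("reencode_progress.json")  # hypothetical progress file

def load_done():
    """Paths already re-encoded, so a restart skips them."""
    return set(json.loads(DONE_FILE.read_text())) if DONE_FILE.exists() else set()

def mark_done(done, path):
    """Record one finished file; written after every track so Ctrl-C is safe."""
    done.add(str(path))
    DONE_FILE.write_text(json.dumps(sorted(done)))

def reencode(wv):
    """Decode .wv to .wav, then re-encode in hybrid mode with -cc -x6."""
    wav = wv.with_suffix(".wav")
    subprocess.run(["wvunpack", str(wv)], check=True)  # produces the .wav
    subprocess.run(["wavpack", "-y", "-b256", "-cc", "-x6", str(wav)], check=True)
    wav.unlink()  # remove the temporary decoded file

if __name__ == "__main__":
    done = load_done()
    for wv in Path("K:/Kylefile/Music").rglob("*.wv"):  # rerun to resume
        if str(wv) not in done:
            reencode(wv)
            mark_done(done, wv)
```

Interrupt it anytime; on the next run it reloads the progress file and picks up where it left off.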
Also, that's true, I saved almost nothing by doing this--just a mere 1.5GB, but 1.5GB is pretty big for me, since my external hard drive is already running out of space.
> Not to mention the sheer amount of time he/she spent on that endeavour.
Yeah. All this testing takes time, so linking to it hopefully means the information will be valuable to more people.
Hopefully, one got some use out of the heat the CPUs generated (that goes for you as well, OrthograpicCube) ;)