CheckWavpackFiles 2.0 BETA released
Reply #6 – 2008-08-14 13:59:00
> There's no reason why it wouldn't work, although for decoding I wonder if the disk I/O would ever be able to keep up (or justify it).

It will. As I said, I can get 50-80% CPU utilisation scanning 4 files simultaneously on a quad-core @ 3.4GHz (that figure is across all 4 cores); the only reason it's not 100% is that I'm disk-starved from the thrashing. Frequent disk seeks drastically reduce throughput, as we all know from when pagefile thrashing grinds everything to a halt. With linear single-file streaming I could easily max my CPU and then some.

> Also, for encoding (especially the -x modes) doing the same thing would certainly be a gain.

Sure, but probably a little more complex (due to not knowing how much compresses into each block).

> It would probably be a better use of time than going to hand-coded assembler for the single threads (which is another thing I think about), but unfortunately I don't have time at this point to look into either. I am happy to kibitz, however...

kibitz? : ) I can provide all the threading and CPU code. In fact, if you could split the decoding context into independent block-specific ones (i.e. ones that have all the decoding state needed to decode an entire block in isolation), I can do the rest. With multi-cores so widespread now, this not only gets you a massive bang for the buck, but better yet, all apps benefit transparently.