Recent Posts
1
3rd Party Plugins - (fb2k) / Re: 'Audio MD5' component (foo_audiomd5) != 'verify integrity'
Last post by Porcus -
Quote
With that option enabled, the results should match the native MD5 fields in various lossless formats.
Users may take note of (rather than worry about) some differences here:
* Monkey's Audio is an exception: it calculates an MD5 over the encoded stream, but not the same way foo_audiomd5 does.
* FLAC calculates its MD5 from a signed little-endian representation of the audio, while WavPack uses the source file's representation. So for WAVE input above 8 bits, and for AIFC-sowt input, they agree. When you feed WavPack a big-endian source file (AIFF, AIFC-none, big-endian CAF) it calculates the MD5 big-endian without translating, and when you feed it an 8-bit WAVE it calculates it unsigned without translating (a small sketch of the effect follows below).
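A minimal sketch (not FLAC's or WavPack's actual code) of the point above: the MD5 depends on how the very same samples are serialized, so hashing signed little-endian bytes versus the source file's own byte order or signedness gives different digests.

```python
import hashlib
import struct

samples = [0, 1, -1, 32767, -32768]  # a few 16-bit samples

# FLAC-style: signed little-endian bytes
le_signed = b"".join(struct.pack("<h", s) for s in samples)
# WavPack hashing a big-endian source (AIFF / AIFC-none / big-endian CAF) untranslated
be_signed = b"".join(struct.pack(">h", s) for s in samples)

print(hashlib.md5(le_signed).hexdigest())
print(hashlib.md5(be_signed).hexdigest())  # different digest, identical audio

# 8-bit WAVE stores unsigned samples; hashing them untranslated differs again
u8 = bytes(((s >> 8) + 128) for s in samples)
s8 = struct.pack("5b", *((s >> 8) for s in samples))
print(hashlib.md5(u8).hexdigest(), hashlib.md5(s8).hexdigest())
```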
8
WavPack / Re: Improve compression efficiency by treating 32-bit fixed point as float.
Last post by bennetng -
Quote
As for the float version, the improvement in float files that were derived from 32-bit integer files is about 0.5%, and these are probably not that common. Except maybe files directly converted from 32-bit ADCs? The situations where it makes a big difference are files with mantissas truncated to less than 24 bits, and in these cases the improvement can go over 10%. But I have never seen files like this in the wild, which is why I'm more inclined to not include this feature in the next release.
Recently, "32-bit float ADCs" are somewhat popular among field recorders, but they are basically stacking multiple traditional fixed point ADCs together using a floating point DSP after digitization, and the floating point math will output float data which cannot be optimized. Search for patent US9654134 for one of these implementations, as well as the files below for some examples:
https://www.sounddevices.com/sample-32-bit-float-and-24-bit-fixed-wav-files/
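A toy sketch of the stacked-ADC idea, under my own simplifying assumptions (this is not the patented Sound Devices algorithm): take whichever fixed-point path is not clipped and scale it in floating point. Because the gain factor is ordinary float math, the gain-scaled samples are generally no longer scaled integers, which is why the integer-repacking optimization gets no traction on such recordings.

```python
# Toy illustration only; the gain value and structure are assumptions, not any
# vendor's actual DSP.
FULL_SCALE = 2 ** 23                 # 24-bit fixed-point converters
HIGH_GAIN = 10 ** (30 / 20)          # assume the sensitive path sits ~30 dB hotter

def merge(high_path: int, low_path: int) -> float:
    """Combine one sample from each 24-bit ADC path into a float in [-1, 1)."""
    if abs(high_path) < FULL_SCALE - 1:          # high-gain path did not clip
        return high_path / (FULL_SCALE * HIGH_GAIN)
    return low_path / FULL_SCALE                 # fall back to the low-gain path

print(merge(1_000_000, 31_623))      # quiet passage: taken from the high-gain ADC
print(merge(8_388_607, 6_000_000))   # loud passage: high-gain path clipped
```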

Other vendors like Tascam and Zoom also sell floating-point recorders. I don't have sample files to try out, but I guess the recorded files cannot be optimized either. So not including this optimization for now may well be the right decision.
9
3rd Party Plugins - (fb2k) / Re: 'Audio MD5' component (foo_audiomd5) != 'verify integrity'
Last post by Case -
FLAC computes the MD5 checksum from the raw uncompressed PCM data it sees as it is encoding the file. The Audio MD5 component calculates its checksums from the compressed binary data on disk, but only from the actual audio bits. That is why it uses ffmpeg.exe: it parses the file format and skips all non-audio bits such as tags.

There is now an option in Advanced Preferences to change the lossless checksum behavior to use the decoded output. With that option enabled, the results should match the native MD5 fields in various lossless formats.
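A rough approximation (not necessarily the component's actual invocation) of the two modes described above, using ffmpeg's md5 muxer: the first form hashes only the copied audio packets, so tags and other metadata are skipped; the second hashes the decoded PCM instead. The path is illustrative, and matching a format's native MD5 field also depends on decoding to the original bit depth and channel layout.

```python
import subprocess

def audio_md5(path: str, decode: bool = False) -> str:
    """MD5 of a file's audio, either as compressed packets or as decoded PCM."""
    cmd = ["ffmpeg", "-v", "error", "-i", path, "-map", "0:a"]
    if not decode:
        cmd += ["-c:a", "copy"]       # hash the compressed audio bitstream only
    cmd += ["-f", "md5", "-"]         # md5 muxer prints a line like MD5=<digest>
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    return out.strip().split("=", 1)[1]

if __name__ == "__main__":
    print(audio_md5("example.flac"))               # compressed-stream checksum
    print(audio_md5("example.flac", decode=True))  # decoded-audio checksum
```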

I finally added some extra info on the component's download page too. The component originated from a request on IRC, so the people it was made for knew what it was about; the rest of the world, not so much.
10
WavPack / Re: Improve compression efficiency by treating 32-bit fixed point as float.
Last post by bryant -
Quote
If we postulate that 32-bit integer is "kinda stupid anyway", then it might be a good idea to stick to handling the waste those create, keeping compatibility for everything else.
Does WavPack's compatibility policy extend to 5.70 decoding whatever 5.66 can encode?  :))
Of course, since this version has been out for a while, and the “help” display mentions nothing about it being experimental, I will leave the decoder portion in so as not to obsolete any existing files, even if I decide to remove the encoder option.

Quote
The small amount of overhead could be due to the creation of an additional data stream. Notice in your test data that the overhead is proportional to file size; if it were done on a per-file basis the increment would be fixed.

Quote
I think it is like adding a flag in each frame in expectation of optimizable incoming data, but when there is none the flag itself takes some space, and there could be technical difficulties in removing the flag and reclaiming the space once it is known that the frame cannot be optimized.
Yes, this is exactly right. The new format adds 2 bytes per frame even if it achieves nothing. It would be possible to back up and rewrite if I detected that there was no improvement, but that would slow everything down significantly, and there was actually another reason not to do that. If the new format was invoked, I wanted a frame to indicate it as soon as possible; otherwise it would increase the likelihood of an old decoder incorrectly identifying a file as lossless when it was actually lossy (if the first occurrence of the improved stream did not come until well into the file). In any event, the loss is very tiny.
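To make the "proportional to file size" point concrete, here is a back-of-envelope sketch. The 2 bytes per frame comes from the post above; the frame length is just an assumed parameter for illustration (WavPack block sizes vary), not a spec value.

```python
# Rough arithmetic only; samples_per_frame is an assumption, not WavPack's default.
def extra_bytes(duration_s: float, sample_rate: int = 44100,
                samples_per_frame: int = 22050) -> int:
    """2 bytes of new-format signalling per frame, so the total grows with length."""
    total_samples = int(duration_s * sample_rate)
    frames = -(-total_samples // samples_per_frame)   # ceiling division
    return 2 * frames

for minutes in (1, 5, 60):
    print(f"{minutes:3d} min -> ~{extra_bytes(minutes * 60)} extra bytes")
```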

As for the float version, the improvement in float files that were derived from 32-bit integer files is about 0.5%, and these are probably not that common. Except maybe files directly converted from 32-bit ADCs? The situations where it makes a big difference are files with mantissas truncated to less than 24 bits, and in these cases the improvement can go over 10%. But I have never seen files like this in the wild, which is why I'm more inclined to not include this feature in the next release.
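For anyone curious how "mantissas truncated to less than 24 bits" shows up in the data, here is a small probe of my own (not WavPack code): count the trailing zero bits in the float32 mantissa field. Samples derived from integers, or stored with a shortened mantissa, leave a run of zero bits an encoder could avoid storing; ordinary full-precision floats do not.

```python
import struct

def spare_mantissa_bits(x: float) -> int:
    """Trailing zero bits in the 23-bit float32 mantissa field of x."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    mantissa = bits & 0x7FFFFF
    if mantissa == 0:
        return 23                      # zero or an exact power of two
    zeros = 0
    while mantissa & 1 == 0:
        mantissa >>= 1
        zeros += 1
    return zeros

print(spare_mantissa_bits(123456 / 2.0 ** 31))   # derived from a 32-bit int: plenty spare
print(spare_mantissa_bits(12345 / 2.0 ** 15))    # derived from a 16-bit int: even more
print(spare_mantissa_bits(0.1234567))            # full-precision float: likely none
```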