Topic: Improve compression efficiency by treating 32-bit fixed point as float. (Read 8826 times)

Improve compression efficiency by treating 32-bit fixed point as float.

Hello @bryant

https://www.soundliaison.com/index.php/6-compare-formats

The file:
A Fool For You - DXD 352kHz-32bit
Can be losslessly saved as float, probably because the DAW used to render the file uses float as internal format. By using x4 and x6 I got these results with non-audio data stripped:
Code: [Select]
   Length Name
   ------ ----
642175416 a-fool-for-you-carmen-gomes-inc-dxd352-32.wav
341974990 float x4.wv
340610026 float x6.wv
396557238 int x4.wv
395185830 int x6.wv
Is it possible to take advantage of this during encoding, but keeping the format as fixed point during decoding?
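As a sketch of the check being asked about (assuming "losslessly saved as float" means each sample survives an int32 → float32 → int32 round trip; `int32_fits_float32` is a hypothetical helper, not WavPack code):

```python
import struct

def int32_fits_float32(x: int) -> bool:
    """True if a 32-bit integer sample survives a round trip through
    IEEE-754 single precision (24-bit significand)."""
    f = struct.unpack('<f', struct.pack('<f', float(x)))[0]
    return int(f) == x

# 0x7FFFFF00 has 23 significant bits, so it fits; adding 1 needs 31.
print(int32_fits_float32(0x7FFFFF00))  # True
print(int32_fits_float32(0x7FFFFF01))  # False
```

A file where every sample passes this test is a candidate for float-style encoding.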

Re: Improve compression efficiency by treating 32-bit fixed point as float.

Reply #1
Interesting find! Yeah, your analysis makes sense too. The 32-bit integer code in WavPack looks for redundancy in the LSBs, but only constant redundancy sample-to-sample. In this case the number of zeroed LSBs would shift with the sample’s magnitude.
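A tiny illustration of why a constant sample-to-sample check misses this (`trailing_zeros` is a hypothetical helper, not WavPack's actual code):

```python
def trailing_zeros(x: int) -> int:
    """Count zeroed LSBs in a nonzero sample (32 for zero)."""
    return (x & -x).bit_length() - 1 if x else 32

# Two samples quantized from the same 24-bit float significand differ
# only in magnitude, yet their zeroed-LSB counts differ; a constant
# block-wide redundancy check can only exploit the minimum of the two.
quiet = 0x00FF00          # low-magnitude sample
loud = 0x00FF00 << 12     # same significand, 4096x louder
print(trailing_zeros(quiet), trailing_zeros(loud))  # 8 20
```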

I wonder how many 32-bit files, which are already kind of rare compared to float, would fall into this category.

This would be pretty easy to implement. The only issue is there would be no way to make it backward compatible, which is a big drawback for me, especially considering how long it’s been since I made a decoder-breaking change.

Anyway, thanks for finding this and letting me know, and I’ll give it some thought.

Re: Improve compression efficiency by treating 32-bit fixed point as float.

Reply #2
Thanks. I was thinking about WavPack 4 compatibility too. In this case, how about offering a command-line option to let the user explicitly convert to float, while keeping all non-audio data intact?

I don't expect this could be done when using a pipe: the encoder would need to ensure the whole file can be converted without loss; when that is not possible, the process should pause or quit and notify the user. This would still be much more convenient than requiring users to manually check for bit-perfectness, convert the file, and worry about whether the non-audio data is being altered or not.

Re: Improve compression efficiency by treating 32-bit fixed point as float.

Reply #3
how about offering a command-line option to let the user explicitly convert to float, while keeping all non-audio data intact?
I was kinda thinking of one that kills the non-audio data rather than preserving it. Preserving non-audio is for getting the file back bit by bit; removing non-audio (with -r) is for those who don't wish that; this conversion is "even more severe". But then OTOH: does --pre-quantize keep headers? That is even lossy.
The opposite way could be interesting too. I have found in the wild something that apparently was a 16-bit signal opened in some application and saved as 32-bit float (no other processing, no dither, no nothing).

This would still be much more convenient than requiring users to manually check for bit-perfectness, convert the file, and worry about whether the non-audio data is being altered or not.

Had it not been for the compression gains, that would have been an idea for wvunpack. It can already do some source-format override (say with --wav), and it is a more natural workflow to keep the source until you know that you want to change it.
Had it not been for the compression gains, yes.

Musing aloud: a "-R" (letter selected for including -r functionality) that takes a numerical argument.
Either
-R 0 = do nothing (switch off a previously given -R)
-R 1 = -r
-R 2 = if the file is integer, convert it to float if that is lossless (will not be reversible by WavPack 5 and below)
[-R 3 = 2 & 1]
-R 4 = if the file is float, convert it to integer if that is lossless
-R 8 = this bit controls peak-normalization
-R 16 = go ahead, do whatever improves compression

Or a different scheme, keeping "-r" functionality out of it:
-R 0: do nothing.
-R 1 to -R 3: prefer WAVE type 1 to 3 format, amend headers accordingly (type 3 is float and only float, right?) and something analogous for AIFF, with 3 being float. WARNING: will not be reversible by WavPack 5 and below
add 4 or 8: peak-normalize / brute-force "prefer smallest file".

I've also mistaken --normalize-floats for peak normalization, so ... maybe suggest something that does precisely that  O:)

Re: Improve compression efficiency by treating 32-bit fixed point as float.

Reply #4
how about offering a command-line option to let the user explicitly convert to float, while keeping all non-audio data intact?
I was kinda thinking of one that kills the non-audio data rather than preserving it. Preserving non-audio is for getting the file back bit by bit; removing non-audio (with -r) is for those who don't wish that; this conversion is "even more severe". But then OTOH: does --pre-quantize keep headers? That is even lossy.
The opposite way could be interesting too. I have found in the wild something that apparently was a 16-bit signal opened in some application and saved as 32-bit float (no other processing, no dither, no nothing).
Anything <= 24-bit should be pretty easy to check, and I am keeping these files in my own projects as well: the oldish Audition I use opens float files much faster than 24-bit ones.

At this point perhaps WavPack can detect uncompressed μ-law and A-law too, which are basically 8-bit floats without infinities and NaNs.
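To make the "8-bit float" analogy concrete, the classic G.711 μ-law expansion unpacks a sign bit, a 3-bit exponent, and a 4-bit mantissa (sketch following the well-known Sun g711.c formulation):

```python
def ulaw_to_linear(code: int) -> int:
    """Expand one 8-bit mu-law code to a linear 16-bit sample:
    sign / 3-bit exponent / 4-bit mantissa, with a bias of 0x84."""
    code = ~code & 0xFF           # codes are stored complemented
    sign = code & 0x80
    exponent = (code >> 4) & 0x07
    mantissa = code & 0x0F
    t = ((mantissa << 3) + 0x84) << exponent
    return (0x84 - t) if sign else (t - 0x84)

print(ulaw_to_linear(0xFF))  # 0 (digital silence)
print(ulaw_to_linear(0x80))  # 32124 (positive full scale)
```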

Keeping non-audio data or not of course is a case-by-case and user specific choice. It would be disastrous if for example, DAW projects lose all loop points and regions when importing samples.

Re: Improve compression efficiency by treating 32-bit fixed point as float.

Reply #5
At this point perhaps WavPack can detect uncompressed μ-law and A-law too, which are basically 8-bit floats without infinities and NaNs.

Which brings me to a question:
Lossless compressors reject µ-law and A-law in WAVE/AIFF (well, at one point there was a compressor which by mistake failed to weed them out), presumably because they won't come out right if they are played back without the expansion. I use "expansion" for "reverse dynamic compression", the second step of "companding" ...
In principle, they could have been compressed, by equipping the file with a flag saying "do not play the stream; read the WAVE header and play as you would play the WAVE", kicking that decoding further down the road rather than passing "unexpanded" and hence wrong audio down the playback chain.
Question 1: Is there any codec that can do this?
Uneducated guess is that the answer is negative. And:
Question 2: Do I guess correctly that this would be hard to retro-fit into a format, because existing implementations would indeed pass the stream out and play it without the expansion?
Which brings me to:
Question 3: Is it easier for <codec/format X which unpacks to bit-exactly the source file> to implement a flag that prevents playback (but not decoding to file)? If you cannot enforce correct playback, can you prevent playback?

Part of this came out of how @bryant explained that - if I understood it correctly, and that is a substantial reservation - AIFF allows non-integer sampling rates, but WavPack will have to use an integer one upon playback (and well, what sound card handles float there anyway?) even if wvunpacking gets the right thing back. Potentially, if a WavPack file could assign "playback" to be done at a sampling rate of "0" (meaning: wait infinitely long for the next sample, so if you are smart, do not even try!) then the file would be unplayable but decodable.

(What is the use of an unplayable file? Storing it in a checksummed format that can detect corruption, and with tags. Any compression gain would be a bonus.)


Although the market for µ-law or A-law compression (hm, could *ADPCM in WAVE be handled that way too?) might be quite meager to say the least, the idea got me curious. Since memory does not always serve me right, there is even a risk that I might have pestered bryant with the question on some occasion already.

Re: Improve compression efficiency by treating 32-bit fixed point as float.

Reply #6
Without getting too complicated, the easiest thing to do regarding the original topic is encoding the input to a WavPack 4 compliant float file and checking for bit-perfectness, preserving non-audio data by default unless told to strip it. This should be extremely easy to do given that both formats are 32-bit, so chunk sizes and such don't need to be changed.

The only drawback I can think of is that some devices (e.g. streamers, DAPs) don't support float. A reminder in the help file would be enough.

Re: Improve compression efficiency by treating 32-bit fixed point as float.

Reply #7
Yeah, if the user wants functionality to convert if that is lossless - kinda trivial, but it would require some temporary files. Maybe you don't want to wait for a -hhx6 only to find out that sorry, lossy, deleting - so doing that to keep filename.f.wv and filename.s.wv (for float and signed) should be "optional" then?
Meaning, you need a temporary float file to check for losslessness, either uncompressed or a temporary encode with -f?

to a WavPack 4 compliant float file
As far as I understand, the WavPack 5 file format is "WavPack 4 compliant" in the sense that more primitive executables decode them just fine.
(At least as long as you stay clear of features that WavPack 4 couldn't even handle, like huge channel count.)

Re: Improve compression efficiency by treating 32-bit fixed point as float.

Reply #8
For example, Bryant offers several Cool Edit/Audition plugins, from the oldest Syntrillium ones to the latest Adobe CC ones. The plugin that I am using should be WavPack 4 based; it can decode WavPack 5 files and WavPack DSD, but is only capable of decoding to PCM.

Re: Improve compression efficiency by treating 32-bit fixed point as float.

Reply #9
Yeah, if the user wants functionality to convert if that is lossless - kinda trivial, but it would require some temporary files. Maybe you don't want to wait for a -hhx6 only to find out that sorry, lossy, deleting - so doing that to keep filename.f.wv and filename.s.wv (for float and signed) should be "optional" then?
Meaning, you need a temporary float file to check for losslessness, either uncompressed or a temporary encode with -f?
This shouldn't require big temp files or a lot of RAM if done within the native wavpack executable without a pipe. It just needs to run a decoding pass (if the input is a wavpack file) and verify the possibility of lossless float conversion. If the current block fails, then immediately quit or proceed to another file in the queue. This would be as fast as verifying integrity via decoding. Encoding should only start after the first pass has completed.

If the input is already an uncompressed PCM file it is even easier: just parse the whole file and only use float to encode if it is bit-perfect, otherwise fall back to the original input format.

Re: Improve compression efficiency by treating 32-bit fixed point as float.

Reply #10
Created a quick proof-of-concept of this for my own curiosity. I was surprised to discover that the 32-bit integer file in the WavPack test suite also has this “feature”, and both that file and the “fool” file improve by about 9% when compressed as float. Maybe most 32-bit integer files are like this? I was amused once when a user requesting 64-bit float support sent me a sample needle-drop file and it was also losslessly representable in 32-bit float!

The easiest approach would be to add an option (e.g., --32bit-int-to-float) that would work with either PCM files or WavPack files and would fail if the operation was lossy, or perhaps optionally just display a warning at the end (e.g., --force-32bit-int-to-float). Starting over on failure would obviously be a complication with pipes, and there's no provision for that behavior now. I think it’s extremely unlikely that a file would only fail near the end; if it’s going to fail it would be right away. Maybe this could also be a way to handle 64-bit floats?

With respect to non-audio metadata the simplest thing would be to discard it (because it’s obviously no longer valid and you wouldn’t want to make a file with it). Of course, most formats can be switched “in-place” from integer to float (with the notable ugly exception of AIFF) so maybe that would be an option, but unfortunately the current architecture makes that ugly. Since 32-bit integers don’t make sense as a DAW format (they don’t clip gracefully and don’t process efficiently), my guess is that this is mostly a “distribution” format where metadata would not be relevant anyway (except for ID3 tags, which this file had).

And yes, WavPack does already handle all the cases where, for example, 32-bit float data was sourced directly from a fixed integer size (which would be commonly found in DAW files).

Regarding μ-law and A-law, I just by crazy coincidence have a μ-law file because my voicemail to e-mail service uses them. I tried compressing that file using a whole assortment of tools including my two general-purpose compressors and WavPack hacked to ignore the format specifier and pack as 8-bit PCM (which it kinda is with a little flipping of values):
Code: [Select]
-rw-rw-r-- 1 david david 183738 Apr 28 16:07 message.wav
-rw-rw-r-- 1 david david 148280 May 11 09:04 message.wav.lzw
-rw-rw-r-- 1 david david 131878 Apr 28 16:07 message.wav.gz
-rw-rw-r-- 1 david david 127799 May 11 09:12 message.wav.newpack
-rw-rw-r-- 1 david david 119673 Apr 28 16:07 message.wav.bz2
-rw-rw-r-- 1 david david 113548 Apr 28 16:07 message.wav.xz
-rw-rw-r-- 1 david david 108888 May 11 15:02 message.wv
I think that an algorithm optimized for μ-law and A-law could improve on that by converting to linear and making the prediction there, and then converting the prediction back to non-linear and entropy encoding the difference. In any event, I can’t think of a way this could be done as a non-breaking change, and so it almost certainly won’t happen. Unless someone wants it…  :)
 

Re: Improve compression efficiency by treating 32-bit fixed point as float.

Reply #11
Created a quick proof-of-concept of this for my own curiosity. I was surprised to discover that the 32-bit integer file in the WavPack test suite also has this “feature”, and both that file and the “fool” file improve by about 9% when compressed as float. Maybe most 32-bit integer files are like this? I was amused once when a user requesting 64-bit float support sent me a sample needle-drop file and it was also losslessly representable in 32-bit float!
Software like Audacity, for example, uses 32-bit float as its internal format but allows saving as 32-bit integer and 64-bit float. On the other hand, the SoX command-line tool uses 32-bit integer as its internal format but also allows saving as 32/64-bit float.

Quote
The easiest approach would be to add an option (e.g., --32bit-int-to-float) that would work with either PCM files or WavPack files and would fail if the operation was lossy, or perhaps optionally just display a warning at the end (e.g., --force-32bit-int-to-float). Starting over on failure would obviously be a complication with pipes, and there's no provision for that behavior now. I think it’s extremely unlikely that a file would only fail near the end; if it’s going to fail it would be right away. Maybe this could also be a way to handle 64-bit floats?
The same could also be offered in wvunpack if users want to switch to another 32-bit format during decoding for whatever reason (e.g. compatibility), letting them know whether the conversion is lossy or not.

Re: Improve compression efficiency by treating 32-bit fixed point as float.

Reply #12
Software like Audacity, for example, uses 32-bit float as its internal format but allows saving as 32-bit integer and 64-bit float. On the other hand, the SoX command-line tool uses 32-bit integer as its internal format but also allows saving as 32/64-bit float.
There was some discussion here somewhere on DAWs using 32-bit integer, but I can't find it. Apart from SoX I don't remember any to name and shame, but it isn't that long since 24-bit integer was teh shitz and fancy names like "DXD" were introduced.
Other applications?


By the way, sizes.
403 375 548 with -hx
394 134 751 with -hhx6, ever so slightly beating every monkey
393 584 015 FLAC -8pe -l32 -b8192 --keep-foreign-metadata.
387 724 880 MPEG-4 ALS -7 -p, and that even beats OptimFROG --preset 10. Also, the frog throws an error, likely due to some WAVE metadata, even when using --incorrectheader.

So the big difference down to the 340 610 026 cannot be explained by this being a particularly WavPack-unfriendly signal. Yeah sure you could point out that -hhx4 could often outcompress FLAC (and by more of a margin, Monkey's) on this sort of resolution, but the impact is too big to put it down to that.
A gamechanger ... well admittedly, in the "bragging rights" game, since I guess the overall hard drive cost saved would hardly be worth the effort  O:)

Re: Improve compression efficiency by treating 32-bit fixed point as float.

Reply #13
The same could also be offered in wvunpack if users want to switch to another 32-bit format during decoding for whatever reason (e.g. compatibility), letting them know whether the conversion is lossy or not.
Especially clipping, which is a serious thing.

 

Re: Improve compression efficiency by treating 32-bit fixed point as float.

Reply #14
"bragging rights" game
Another bragging right is: Now WavPack can authenticate the master quality of the hi-res files you purchased!

Re: Improve compression efficiency by treating 32-bit fixed point as float.

Reply #15
Are you saying that some vendor now offers .wv downloads "featuring" the MQA death spasms?

Re: Improve compression efficiency by treating 32-bit fixed point as float.

Reply #16
Are you saying that some vendor now offers .wv downloads "featuring" the MQA death spasms?
No, I just want to say that the upcoming update of the WavPack command-line tool can check whether the origin of a 32-bit file is float or not. Because some audio interfaces are capable of 32-bit integer recording if done in the right way, e.g. record and edit with Reaper and set the recording/bounce format to 32-bit integer.

Related topic:
https://hydrogenaud.io/index.php/topic,114816.msg1026865.html#msg1026865

Re: Improve compression efficiency by treating 32-bit fixed point as float.

Reply #17
So here are some "integer friendly" floating point mixing techniques that would never happen in the real world. For example, I think the song "The Saga Of Harrison Crabfeathers" sounds great:
https://cambridge-mt.com/ms/mtk/#Araujo

Click the "223 MB" link to download the multitrack 24-bit archive. Basically, drag all files into Audacity, select all tracks and mixdown to stereo directly without any adjustment.
X

Naturally, after mixdown, there will be some overs. Typical users may either do a peak normalize or apply a limiter to deal with them, but in this case, open the Nyquist prompt and apply an integer-friendly gain value to the mixed track.

Then render to a 32-bit float file. Now the file can be losslessly saved as fixed point.
Code: [Select]
-------------------------------------------------------------------------------
E:\download\01_KickIn.wav
00:03:26.0606122 = 18174546 samples / 2-ch @ 44100 Hz
32-bit floating point
          Ch    Position     Value                     dBFS
Maximum   0     7680784      0.7279655933380127        -2.757782935073358
Minimum   0     5459222      -0.92048293352127075      -0.71968518494290912
Abs.min   0     152          7.4505805969238281e-08    -142.55619765854985
Round Trip: 27
-------------------------------------------------------------------------------
oldsCool 1.0.0.4 read-only mode
Code: [Select]
All tracks decoded fine, no differences found.

Comparing:
"E:\download\8p.flac"
"E:\download\gx4-32f.wv"
Compared 9087273 samples.
No differences in decoded data found.
Channel peaks: 0.920483 (-0.72 dBTP) 0.919624 (-0.73 dBTP)

Comparing:
"E:\download\01_KickIn.wav"
"E:\download\gx4-32i.wv"
Compared 9087273 samples.
No differences in decoded data found.
Channel peaks: 0.920483 (-0.72 dBTP) 0.919624 (-0.73 dBTP)

Now the float file is bloated.
Code: [Select]
  Length Name
  ------ ----
72698272 01_KickIn.wav
35661546 8p.flac
39560446 gx4-32f.wv
35732920 gx4-32i.wv

Re: Improve compression efficiency by treating 32-bit fixed point as float.

Reply #18
Of course µ-law and A-law are the key to future world domination ... just kidding, the following question is out of curiosity:
I think that an algorithm optimized for μ-law and A-law could improve on that by converting to linear and making the prediction there, and then converting the prediction back to non-linear and entropy encoding the difference.
Is that obvious?
It would be if the integer-encoded LPCM (before the µ-/A-law transformation) were AR(n) or the suitable generalization - but that assumption doesn't hold up. It surely has enough of a linear component that linear decorrelation captures a big percentage of the size (... I don't even know WavPack's internals ...), but you would expect the µ-/A-law byte stream to share that property to a certain degree too - larger or smaller.
Sure, an algorithm specifically optimized for µ-/A-law would be expected to outperform one that has been optimized for LPCM, but is there any reason it would be beyond that particular measure?

Re: Improve compression efficiency by treating 32-bit fixed point as float.

Reply #19
So the more I thought about just converting 32-bit integer files to float files the less I liked it. It would be a mess while still leaving some situations unhandled.

Then I started thinking it would be best to byte the bullet and break decoders for this. But then I thought of submitting patches to FFmpeg and Rockbox and the fact that some hardware devices (like I describe here) would never be compatible. I haven’t broken decoding for almost 20 years, and I decided I didn’t want to do that either.

But I didn’t want to leave 9% on the table either, even though I think these files are broken and have no good reason for existing.

Finally I came up with a solution that has very minimal downside. The way WavPack stores 32-bit formats (either float or integer) is first by converting to normalized 24-bit integers (which is the most that the regular WavPack integer code can handle). This is accompanied by a metadata chunk indicating by how much the values are normalized (shift count) so that this 24-bit audio can be converted to either 32-bit float or integer. Of course that is mathematically lossy some of the time, but obviously it’s for all practical purposes lossless. In fact, a long time ago I actually had a -p option (for “practical”) that was just that. Since each block can have a different normalization it retains the dynamic range / clipping advantage of float, and the compression performance is significantly better than lossless.
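A rough sketch of the shift-count idea described above (the real WavPack metadata layout differs; `block_shift` is a hypothetical helper):

```python
def block_shift(samples):
    """Choose one shift per block so every sample fits in 24 signed bits."""
    peak = max((abs(s) for s in samples), default=0)
    return max(0, peak.bit_length() - 23)

block = [1 << 28, -(1 << 30), 12345 << 8]   # float-derived int32 samples
shift = block_shift(block)                   # 8 for this block
stored = [s >> shift for s in block]         # normalized 24-bit data
restored = [v << shift for v in stored]      # lossy in general, exact here
```

Because float-derived samples already carry trailing zeros proportional to their magnitude, the shifted-out bits are often all zero, which is why this is "for all practical purposes lossless" on such material.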

Additionally, for true lossless mode there was a bitstream that provided the missing data to get to the complete 32-bit float or integer values. This stream resided in the main file for regular lossless and in the "correction" file for hybrid lossless. For lossy modes that was the first thing to go, and then the hybrid mode would simply act on the 24-bit data to get to the target bitrate.

So my idea for handling these float-derived integer files is to create a new, smaller bitstream for the “completion” data with a new metadata ID. Old decoders will simply ignore it and consider the file as “lossy” (although we’re still talking less noise than the best DACs). Updated decoders will recognize the new ID and do the lossless decoding. Rockbox never did anything with that info anyway, so it will be unaffected (what’s Rockbox going to do with more than 24 bits...I think it is 16-bit internally). The new format would only be used when it actually makes a difference, and we get the extra compression without breaking anything.

I’ve already created simulated files and verified that Rockbox and VLC have no issues, so I think it’s going to work. I still have to try them on my Plenue and my Lotoo, but I don’t expect any surprises.

As for the u-Law and A-Law, @Porcus, when you listen to u-Law as linear (once you’re corrected for the crazy value order) it sounds distorted compared to the properly decoded version (and louder, obviously, but that’s understood). That distortion would hurt compression compared to doing the prediction in the linear domain, but I don’t know whether that would be a little or a lot. Unless it gives a big improvement though it would not be worth the complexity...just convert to signed 8-bit and be happy. Like DSD, applications that don’t understand the formats get PCM.

Re: Improve compression efficiency by treating 32-bit fixed point as float.

Reply #20
byte the bullet
Good to know that there is a way to improve compression without breaking compatibility with hardware devices so there is no need to byte the bullet :D

Quote
But I didn’t want to leave 9% on the table either, even though I think these files are broken and have no good reason for existing.
Agreed. In fact, another reason I like WavPack's ability to preserve the whole file instead of only the audio data is that I can keep these files as specimens. So it would be great if the upcoming decoder can decode to the exact same file.

Re: Improve compression efficiency by treating 32-bit fixed point as float.

Reply #21
So I have implemented this optimized mode for 32-bit integer data and it seems to work as expected, consistently improving compression by around 9% for audio that is sourced from 32-bit float. Thanks again @bennetng for the tip!

After a little thinking I realized that there’s actually a symmetrical relationship with conversions between 32-bit float and 32-bit integer; both conversion directions are lossy (even without clipping). But once the conversion has been done once (either way) you end up with data values that can losslessly be represented in either format and converted back and forth forever without loss (assuming a non-broken conversion algorithm). And in theory, since they’re essentially the same values, either should losslessly compress by the same amount.
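The round-trip stability can be sketched like this (Python, using struct to round to single precision):

```python
import struct

def f32(x: float) -> float:
    """Round a value to IEEE-754 single precision."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

x = 1234567891                  # needs 31 significant bits: lossy either way
once = int(f32(x))              # the first int -> float -> int trip loses data
print(once == x)                # False
print(int(f32(once)) == once)   # True: stable on every later round trip
```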

However, I noticed that the target DXD file now compresses almost half a percent better as 32-bit integer than as 32-bit float. So I needed to add some similar optimizations to the 32-bit float code to properly handle that case, and also to handle the case of a reduced-width mantissa. After that, the 32-bit float version of this file is only 2216 bytes larger than the 32-bit integer version (0.0003%).
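For the reduced-width mantissa case, the number of significand bits a float32 value actually uses can be read from its bit pattern (sketch; `mantissa_bits_used` is a hypothetical helper, not the WavPack implementation):

```python
import struct

def mantissa_bits_used(f: float) -> int:
    """Significand bits actually set in a float32 value (0 for zero and
    powers of two); a whole file maxing out well below 23 suggests a
    reduced-width mantissa source."""
    bits = struct.unpack('<I', struct.pack('<f', f))[0]
    frac = bits & 0x7FFFFF                        # 23-bit fraction field
    if frac == 0:
        return 0
    return 23 - ((frac & -frac).bit_length() - 1)

print(mantissa_bits_used(1.0))   # 0
print(mantissa_bits_used(1.5))   # 1
print(mantissa_bits_used(0.1))   # 23 (0.1 rounds to a full-width value)
```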

The new version is attached (the option is --optimize-32bit), and also has the multithreading improvements discussed in this thread.

As for the μ-law and A-law data, I tried my idea of using the WavPack decorrelation code to generate predictions in the linear domain, convert them back into an 8-bit non-linear code, and store the non-linear deltas using the standard WavPack entropy encoder. The improvement was decent (about 5% to 10%) and I was able to get the message file above down to 97274 bytes.
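A toy sketch of that scheme, with a previous-sample predictor standing in for WavPack's decorrelation code (the G.711 helpers follow the classic Sun g711.c formulation; everything else here is hypothetical):

```python
def ulaw_to_linear(code: int) -> int:
    # G.711 mu-law expansion: sign / 3-bit exponent / 4-bit mantissa
    code = ~code & 0xFF
    t = (((code & 0x0F) << 3) + 0x84) << ((code >> 4) & 0x07)
    return (0x84 - t) if (code & 0x80) else (t - 0x84)

def linear_to_ulaw(pcm: int) -> int:
    # Matching compression step (bias 0x84, clamp to the legal range)
    sign = 0x80 if pcm < 0 else 0
    pcm = min(abs(pcm), 32635) + 0x84
    exponent = pcm.bit_length() - 8            # segment number 0..7
    mantissa = (pcm >> (exponent + 3)) & 0x0F
    return ~(sign | (exponent << 4) | mantissa) & 0xFF

def code_deltas(codes):
    """Predict each sample in the linear domain (here just the previous
    sample), re-encode the prediction to mu-law, and keep the 8-bit
    code-domain difference for entropy coding."""
    prev, out = 0, []
    for c in codes:
        out.append((c - linear_to_ulaw(prev)) & 0xFF)
        prev = ulaw_to_linear(c)
    return out

def undo_deltas(deltas):
    """Inverse of code_deltas: rebuild the original code stream."""
    prev, out = 0, []
    for d in deltas:
        c = (linear_to_ulaw(prev) + d) & 0xFF
        out.append(c)
        prev = ulaw_to_linear(c)
    return out
```

The deltas cluster near zero for smooth signals, which is what makes them cheap to entropy-encode, and the transform is exactly invertible.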

However, I also discovered that lossless compression of non-linear audio is already a thing. Our friend Florin Ghido wrote a paper about this, and ITU-T Recommendation G.711.0 covers lossless compression of μ-law and A-law audio signals. I didn’t build the code supplied there, but looking at the test vectors it seems that it compresses about the same as the best I achieved, but with much smaller frames. So my curiosity is satisfied, and I don’t think this is anything I am going to spend any more effort on. The morbidly curious can take a look at the branch on GitHub.   :D

Re: Improve compression efficiency by treating 32-bit fixed point as float.

Reply #22
Thanks Bryant, here some benchmarks using "A Fool For You - DXD 352kHz-32bit":

Encoding in seconds
09:36 optimize
07:84 normal
96:55 optimize x4
95:12 normal x4

Decoding
04:41 optimize x4
04:21 normal x4

I also tried applying a 32-bit integer fade to the file so that it partially contains 32-bit-integer-exclusive values, and found no issues.

The example in Reply #17 has also been improved.

Is it true that decoding will be lossless (or as lossless as possible within the limitations of individual third-party programs) in updated Audition plugins and in third-party programs that properly utilize the updated decoder?

Re: Improve compression efficiency by treating 32-bit fixed point as float.

Reply #23
Thanks for the benchmarking; these match my results. Unfortunately there is a little more work for this during decoding to check the magnitude of the result to determine how many bits to read from the “make up” stream (previously we always read a fixed number of bits). This penalty does not occur for floats.

As for the compatibility, any application that is built with (or uses dynamically) a new libwavpack will transparently get the correct fully lossless decoding. Applications that use older versions of libwavpack (or FFmpeg) will decode such files as lossy. The reason this doesn’t bother me too much is that calling them “lossy” is rather academic. Each frame is stored as normalized 24-bit, so the difference between the lossy and lossless versions is always going to be around 150 dB down, and of course except when using those crazy 32-bit DACs, the data going to hardware is going to be identical.

I’ll eventually build a new Cool Edit / Audition 1.0 – 3.0 filter. I’m not sure about the later Audition versions (CS and CC) because I can’t even test those any more and I get the impression there’s not a lot of development going on there (and since Audition immediately converts to float anyway, there’s even less difference). I'll also submit a patch to FFmpeg, but that doesn't mean it will ever go in.

Using the transcoding option it’s possible to switch the optimization on and off, so there’s no real danger of these files becoming obsolete. That said, I would not use this unless the final use case was well defined, and I haven’t decided yet whether to include the switch in the help / man page in the first release, or just keep it "experimental" for now (but the decoding part is so simple that I'll definitely leave that in there).

Re: Improve compression efficiency by treating 32-bit fixed point as float.

Reply #24
Thanks for the explanations. I am using Audition 1.5, released in 2004, and for more complex stuff I can use Reaper, so it should be lossless in future releases.

I hate the fact that the Audition FLAC filter I am using was released in 2007 and cannot open 32-bit FLAC files at all.