FAAD2 decoding AAC to 32-bit float, can someone explain the bit-magic? 2016-02-09 12:51:43

I recently wrote a little AAC decoder using libfaad2. To test my work, I compared my decoded files (decoded to 32-bit float RAW) with the output generated by the faad frontend and found some unexpected differences. Comparing the output in a hex editor, I found that the faad frontend actually turns every float with value -0.0 into 0.0. (It is done here: https://github.com/gypified/libfaad/blob/master/frontend/audio.c#L464, in lines 469-474, where -0.0 == 0.0 evaluates to true.)

Can someone explain to me why this is done? I've tried to find something about it in the WAV specification, but as far as I can tell it requires IEEE floats (which allow -0.0). Do media players or sound servers not like -0.0 floats?

Also, can someone explain the bit magic applied to the floats in the lines that follow (473-492)? I can read the code, but I can't figure out what those bit hacks do to the floats mathematically. And, if it isn't obvious from the answer: why is it done? When I turn all -0.0 floats in my own output into 0.0, the files my program produces (without the extra bit hacks) are identical to the faad frontend output, so it seems (at least on the files I tested) that the other stuff has no effect.

Thanks!