Topic: MP3 from 16-bit WAV vs from 32-bit WAV

MP3 from 16-bit WAV vs from 32-bit WAV

Is there a practical difference between LAME-encoded MP3 audio created from a 16-bit WAV source and one created from a 32-bit WAV source? I assume LAME has to "go down" to 16-bit anyway if the source has more bits, and if so, does it apply dithering of any kind?

As a music maker, would it be more prudent for me to export to 16-bit (with more control over dithering etc. inside my DAW) then encode or 32-bit then encode? The former option seems to offer potentially better quality, but does it?
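For concreteness, here is a minimal sketch of the kind of 16-bit conversion being asked about: quantizing 32-bit float samples down to 16-bit PCM with simple TPDF dither. It assumes numpy; the function name is made up for illustration, and a real DAW or encoder may use a different dither or noise-shaping scheme.

[code]
import numpy as np

def float32_to_int16_tpdf(x: np.ndarray, rng=None) -> np.ndarray:
    """Quantize float samples in [-1.0, 1.0] to 16-bit PCM with +/-1 LSB TPDF dither.

    Illustrative only: a DAW or encoder may use different dither or noise shaping.
    """
    rng = rng or np.random.default_rng()
    lsb = 1.0 / 32768.0                                        # one 16-bit step, full scale = 1.0
    tpdf = (rng.random(x.shape) - rng.random(x.shape)) * lsb   # triangular-PDF dither
    y = np.clip(x + tpdf, -1.0, 1.0 - lsb)                     # keep inside the int16 range
    return np.round(y * 32768.0).astype(np.int16)
[/code]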

Hello to everyone btw, first post.

MP3 from 16-bit WAV vs from 32-bit WAV

Reply #1
An MP3 file does not really have a bitdepth internally.
"We cannot win against obsession. They care, we don't. They win."

MP3 from 16-bit WAV vs from 32-bit WAV

Reply #2
Thanks, noted. I thought it operated at 16 bits internally, at least at higher settings.

What does it mean for the problem in question though?

MP3 from 16-bit WAV vs from 32-bit WAV

Reply #3
Any sound carried by bits less significant than the 16th will, first of all, sit below the noise floor if you use 16-bit output to your sound device.
I also assume such sounds can be discarded by the psychoacoustic model or some other "feature" of lossy codecs.
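For reference, the usual back-of-the-envelope figure for the dynamic range of N-bit PCM (a full-scale sine versus quantization noise) is 6.02 N + 1.76 dB. A quick check:

[code]
# Textbook estimate of PCM dynamic range: full-scale sine vs. quantization noise.
def pcm_dynamic_range_db(bits: int) -> float:
    return 6.02 * bits + 1.76

print(pcm_dynamic_range_db(16))  # ~98.1 dB
print(pcm_dynamic_range_db(24))  # ~146.2 dB
[/code]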
🇺🇦 Glory to Ukraine!

MP3 from 16-bit WAV vs from 32-bit WAV

Reply #4
Hello to everyone btw, first post.
Hello and welcome!
You might find this old discussion interesting:
Noise-shaping curves and downstream lossy compression
btw, the most annoying result of all is to have dither noise which is partially encoded, because it just "tickles" the ATH curve in the encoder. That sounds really annoying, if you turn the volume up loud enough to be able to hear it. It's a great potential reason to avoid ATH-shaped dither if you intend to mp3 encode something - but it's also a reason to avoid dither altogether. I came across it in my mp3 decoder tests, where I stumbled onto some of these issues and tried to examine them in detail...
http://mp3decoders.mp3-tech.org/lsb.html
http://mp3decoders.mp3-tech.org/24bit.html
http://mp3decoders.mp3-tech.org/24bit2.html

MP3 from 16-bit WAV vs from 32-bit WAV

Reply #5
Thanks very much! Looks like 32-bit it is then.

MP3 from 16-bit WAV vs from 32-bit WAV

Reply #6
Thanks very much! Looks like 32-bit it is then.
To prevent clipping, normalize before encoding (just as you would when downsampling to 16-bit). 32-bit WAVs can go over 0 dB, but apparently MP3s cannot.

I just tried an experiment: I created a +6 dB sine wave and saved it as an IEEE 32-bit floating-point WAV. I encoded it to MP3, and when I opened the MP3, it was clipped at 0 dB.

(For this experiment, I used GoldWave, LAME and LamedropXPd.) 
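If you prefer to do the normalization step programmatically rather than in an editor like GoldWave, here is a minimal sketch of peak-normalizing a 32-bit float WAV to just under 0 dBFS before encoding. It assumes the third-party Python `soundfile` package, and the file names are placeholders.

[code]
import numpy as np
import soundfile as sf  # assumption: the 'soundfile' package is installed

data, rate = sf.read("master_32bit_float.wav", dtype="float32")  # placeholder file name
peak = np.max(np.abs(data))
if peak > 0:
    data = data * (10 ** (-0.1 / 20) / peak)   # peak-normalize to -0.1 dBFS
sf.write("master_normalized.wav", data, rate, subtype="FLOAT")   # still 32-bit float
[/code]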



MP3 from 16-bit WAV vs from 32-bit WAV

Reply #7
It is possible to encode +6 dB to MP3 without clipping. And of course MP3s CAN exceed 0 dBFS.


MP3 from 16-bit WAV vs from 32-bit WAV

Reply #9
People can't differentiate 16-bit WAV from 32-bit WAV in almost every case. It follows that MP3 would be no different.


MP3 from 16-bit WAV vs from 32-bit WAV

Reply #11
People can't differentiate 16-bit WAV from 32-bit WAV in almost every case. It follows that MP3 would be no different.

Perhaps, perhaps not.  There is no penalty in encoding a 32-bit source.  In fact, I would submit it is more efficient since you're skipping the interim step of converting it to 16 bits.  I've already demonstrated elsewhere that mp3 is easily capable of recreating signals well below -96dBFS.
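If anyone wants to try a similar test themselves (the exact signals from the test referenced above are not reproduced here), a hypothetical starting point is a 1 kHz sine at -100 dBFS written as 32-bit float, which undithered 16-bit PCM could not carry. Assumes numpy and the `soundfile` package:

[code]
import numpy as np
import soundfile as sf  # assumption: the 'soundfile' package is installed

rate, seconds, freq = 44100, 5.0, 1000.0
t = np.arange(int(rate * seconds)) / rate
amp = 10 ** (-100.0 / 20)                                      # -100 dBFS peak amplitude
tone = (amp * np.sin(2 * np.pi * freq * t)).astype(np.float32)
sf.write("tone_minus100dBFS.wav", tone, rate, subtype="FLOAT")
# Encode this file to MP3, decode it back, and check whether the tone is still there.
[/code]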


MP3 from 16-bit WAV vs from 32-bit WAV

Reply #13
By coincidence I accidentally rendered a project mix to 32-bit, and when I went to encode it with LAME I still had no idea it was 32 bits. I think it might be worth stressing that converting CD material to 32-bit and then encoding to MP3 will probably have zero benefit.

I assume FB2K converted to 24 bits before sending it to LAME encoder. I am not sure what benefit float input could have.

As for the clipping, doesn't every lossy codec decode high-energy waveforms in a way that rounds them off above 0 dB?
"Something bothering you, Mister Spock?"


MP3 from 16-bit WAV vs from 32-bit WAV

Reply #15
The source material IS 32-bit, that's the internal resolution of my DAW and the highest quality export option. I believe Destroid's comment ("(...) converting CD material to 32-bit then encoding to MP3 will probably have zero benefit") was of a "by the way" kind. In my case we're dealing with a true 32-bit source.

@DVDdoug: I always have a -0.1dB limiter on the master bus in case of any random peaks, so I believe normalization isn't necessary. Thanks for the reminder though.

Regarding mp3s going over 0dBFS - I experience this all the time, especially when encoding my wav masters. The masters peak perfectly at -0.1dB, V0 mp3 goes a tiny bit over 0dB, V2 higher still, V5 is blatantly at +3dB or more. Probably just the way LAME works.


MP3 from 16-bit WAV vs from 32-bit WAV

Reply #17
Regarding mp3s going over 0dBFS - I experience this all the time, especially when encoding my wav masters. The masters peak perfectly at -0.1dB, V0 mp3 goes a tiny bit over 0dB, V2 higher still, V5 is blatantly at +3dB or more. Probably just the way LAME works.

Generally this is the result of the lowpass filter (which can increase the tops of peaks). The lower the lowpass filter's cutoff, the greater the potential increase in the signal maximum.
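A small demonstration of that effect, assuming numpy and scipy (this stands in for the encoder's filtering in general, it is not LAME's actual filter): lowpass-filtering a near-full-scale square wave produces ringing that overshoots the original peak.

[code]
import numpy as np
from scipy import signal  # assumption: scipy is available

rate = 44100
t = np.arange(rate) / rate
square = 0.99 * signal.square(2 * np.pi * 1000 * t)        # peaks just under full scale

sos = signal.butter(8, 16000, btype="low", fs=rate, output="sos")
filtered = signal.sosfilt(sos, square)

print("original peak:", np.max(np.abs(square)))            # 0.99
print("filtered peak:", np.max(np.abs(filtered)))          # > 0.99 from the filter's ringing
[/code]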

MP3 from 16-bit WAV vs from 32-bit WAV

Reply #18
Ah, good to know. Makes perfect sense too.


MP3 from 16-bit WAV vs from 32-bit WAV

Reply #20
I believe that before discussing the practical difference between MP3s created from 16- and 24-bit sources, it would be worth concluding whether there is a practical difference between listening to a properly downsampled 16-bit source and the presumably original 24-bit one. I have yet to see any remotely reasonable double-blind test that showed there was any (in large part, I think, because the signal-to-noise ratio of the hardware used in testing consumed the audible difference, if there was any).

But even if we accept that there is some teeny-tiny difference that, at times, might be audible between two lossless signals at different bit-depths, this difference will be further consumed by quantization noise and other compression artifacts that will be there in the signal whether you want it or not. And if you can hear the difference between 16 and 24 bits you are bound to hear these artifacts (a much more obviously audible thing, if I may point out) as well, and at this point you may as well forget the whole idea.

As such I think it may only serve theoretical curiosity in regards to, say, compressibility or something like that. Which, for me, would indeed be interesting to know. :)
Infrasonic Quartet + Sennheiser HD650 + Microlab Solo 2 mk3. 


MP3 from 16-bit WAV vs from 32-bit WAV

Reply #22
Exactly.

Although, since LAME can work on all 32 bits (so it doesn't reduce the source to 16 bits and therefore doesn't apply any of the dithering that, as I had originally thought, it might), I'd assume the issue has pretty much been resolved?

MP3 from 16-bit WAV vs from 32-bit WAV

Reply #23
@moozooh,

No, it's the opposite. Even if there's no audible difference between 16-bit LPCM and 24-bit LPCM, that doesn't prove that mp3s from 16-bit LPCM are audibly identical to those from 24-bit LPCM.

We know for a fact (or I do, because I've checked!) that mp3 encoders happily encode 16-bit dither. This is a waste of bits. It's probably inaudible, but using 24-bits at least prevents this waste.
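For a sense of how quiet that dither is: plain ±1 LSB TPDF dither at 16 bits sits around -98 dBFS RMS. A quick numeric check, assuming numpy:

[code]
import numpy as np

rng = np.random.default_rng(0)
lsb = 1.0 / 32768.0                                           # one 16-bit step, full scale = 1.0
tpdf = (rng.random(1_000_000) - rng.random(1_000_000)) * lsb  # +/-1 LSB triangular dither
rms_dbfs = 20 * np.log10(np.sqrt(np.mean(tpdf ** 2)))
print(round(rms_dbfs, 1))                                     # about -98 dBFS
[/code]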

You might find this old discussion interesting:
Noise-shaping curves and downstream lossy compression

Cheers,
David.

MP3 from 16-bit WAV vs from 32-bit WAV

Reply #24
I believe Destroid's comment ("(...) converting CD material to 32-bit then encoding to MP3 will probably have zero benefit") was of a "by the way" kind.
Yes, it was intended to keep any new readers from making conversions that serve no purpose.

"probably"? How could it possibly?
I would have said, "will always have zero benefit" except I'm never 100% certain of anything, especially in relation to science

The point here is if there is a reason for the OP to convert his 32-bit sources to 16 bits before converting to mp3.
That is something I'll have to test later. I'd like to see if a higher/lower noise floor affects VBR, but it would probably have to be a synthetic-sample test, as most recordings don't really exploit even 16 bits. Worth a try anyway.
"Something bothering you, Mister Spock?"