
24bit 96khz to 24bit 48khz conversion

The bit depth would stay the same. The source is 24/96 FLAC.

1) Do I need to dither as well?

2) Should I do anything else to prevent clipping or other negative effects?


Re: 24bit 96khz to 24bit 48khz conversion

Reply #1
You don't need to dither if your destination is 24 bit. It's up to you whether to normalize (e.g. with ReplayGain). Clipping is unlikely to matter.
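For reference, a minimal sketch of the conversion in Python, assuming the soundfile and scipy packages (file names are made up; any decent resampler does the same job):

# Sketch: 24/96 FLAC -> 24/48 FLAC with no dither (24-bit target).
# Assumes Python with the soundfile and scipy packages; file names are hypothetical.
import soundfile as sf
from scipy.signal import resample_poly

data, rate = sf.read("input_96k.flac", dtype="float64")   # shape: (frames, channels)
assert rate == 96000

# 96 kHz -> 48 kHz is an exact 2:1 ratio, so a polyphase resampler applies directly.
resampled = resample_poly(data, up=1, down=2, axis=0)

# The low-pass filtering can push a few values slightly past full scale;
# that is the "clipping" question above, and it rarely matters in practice.
sf.write("output_48k.flac", resampled, 48000, subtype="PCM_24")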

Re: 24bit 96khz to 24bit 48khz conversion

Reply #2
Correct me if and when needed, anyone:

* It's not just that you don't "need to", right? For this resampling operation, when the bit depth is kept, best practice would be to avoid dithering? (Sure, a 24-bit signal would need a lot of wasted most-significant bits of headroom before dither could matter audibly, but is there any reason whatsoever to dither?)
* If one does not want to keep the original signal anyway, and the recording isn't close to peaking out - e.g. it was digitized to 24 bits to be safe with headroom - then FLAC is fairly good at compressing away wasted LSBs, more so than wasted MSBs, so one could just as well normalize?
* Nitpickery: is it "better practice" to normalize based on a "true peak" scan of the 96 kHz original, or of the resampled one?
“It sounded bad to me. Digital. They have digital. What is digital? And it’s very complicated, you have to be Albert Einstein to figure it out.”
- Donald Trump, May 2017

Re: 24bit 96khz to 24bit 48khz conversion

Reply #3
Theoretically, dither is needed because the word length is increased to 32-bit or greater during resampling, and the result has to be requantized back to 24 bits - but the quantization error is so tiny that it makes no practical difference. The only reason to avoid dithering I can think of is when the batch includes clips that are mono, contain long passages of absolute silence, or may already be at the target sampling rate; there, dither inflates the bitrate after compression (dithered silence no longer compresses to nothing, and dual-mono channels are no longer bit-identical).
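To put numbers on "tiny", a sketch of the requantization step (my own numpy illustration, not any particular resampler's code):

# Sketch: quantize a float signal to 24-bit integers, with optional TPDF dither.
# The dither spans +/-1 LSB at 24 bits, i.e. noise far below anything audible.
import numpy as np

def quantize_24bit(x, dither=True):
    scale = 2 ** 23                      # 24-bit signed full scale
    y = x * scale
    if dither:
        # TPDF dither: sum of two uniform values, total span +/-1 LSB.
        y += np.random.uniform(-0.5, 0.5, y.shape)
        y += np.random.uniform(-0.5, 0.5, y.shape)
    return np.clip(np.round(y), -scale, scale - 1).astype(np.int32)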

If normalization increases the gain, FLAC will receive more least-significant bits filled with noise, and the bitrate will increase. There are no "wasted" LSBs to begin with unless the word length is deliberately reduced to some arbitrary size, for example 20 bits. What FLAC is good at compressing is a quiet signal with unused MSBs.
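For what "wasted LSBs" means here: FLAC can only drop low bits that are zero across all the samples it encodes together. A quick check on decoded integer samples (a numpy sketch, not FLAC's own code):

# Sketch: count trailing zero bits common to all nonzero samples --
# the "wasted bits" FLAC can strip. Normalizing fills these bits with signal/noise.
import numpy as np

def wasted_lsbs(samples, bits=24):
    nz = samples[samples != 0]
    if nz.size == 0:
        return bits
    combined = np.bitwise_or.reduce(np.abs(nz.astype(np.int64)))
    wasted = 0
    while wasted < bits and (combined >> wasted) & 1 == 0:
        wasted += 1
    return wasted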

You would normalize the resampled signal. I'd expect "true" (upsampled) peak scans to be close, unless the input contained extreme ultrasonic content that was removed during resampling.
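And a rough sketch of what a "true peak" scan does - oversample and take the maximum - so the two scans can be compared directly (approximation only; BS.1770-style meters specify a particular interpolation filter):

# Sketch: approximate the "true" (intersample) peak in dBFS by 4x oversampling.
import numpy as np
from scipy.signal import resample_poly

def true_peak_dbfs(x):
    oversampled = resample_poly(x, up=4, down=1, axis=0)
    peak = np.max(np.abs(oversampled))
    return 20.0 * np.log10(peak) if peak > 0 else float("-inf")

# Running this on the 96 kHz original and on the 48 kHz resample shows
# how close the two scans come out, per the point above.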
