Topic: Downsample 300 gigs of audio

Downsample 300 gigs of audio

Hi,

I am currently thinking about downsampling all my source audio that is > 48 kHz. I decided it's just a waste of space. On average the sources are around 110 kHz, about 300 GB in total. The target sample rate will be 48 kHz.

My FLAC source is not used for listening; it mostly serves as an encoding source for lossy, depending on the target device. Often Apple LC-AAC @ 48 kHz, because Chromecast. For the car I use MP3 @ 44.1; MP3 because the stupid car does not display embedded art from m4a files.

The point I don't get is bits per sample. The sources all have 24. So should I downsample to 24/48? Or 16/48?

Another thing: what do you think about an untypical in-between sample rate, 72 kHz, for an unjustified feeling of safety?  :)

Thx for your thoughts.

Thx

Re: Downsample 300 gigs of audio

Reply #1
Just downsample everything to a sample rate that is transparent for your current hearing ability and your current hardware. If your FLAC source is not used for listening, then just use transparent AAC/Opus lossy encoding. If you really want to butcher the original FLAC quality and lose it, that is your call in the end. Is bits per sample important? Yes, always.
Please remove my account from this forum.

Re: Downsample 300 gigs of audio

Reply #2
16/48 should be fine. Even 44.1/16.

Re: Downsample 300 gigs of audio

Reply #3
This thread: https://hydrogenaud.io/index.php/topic,125718.0.html . WavPack hybrid might be an idea; see in particular dbry's posts from page 2.

If you want to stick to LPCM, then maybe 64 kHz has better chances of being supported. But you can upsample to 88.2 on the fly.

Re: Downsample 300 gigs of audio

Reply #4
Lossy is definitely not an option, just downsampling FLAC->FLAC, simply because the target files serve as a source for lossy encodings.
All we discuss here is inaudible for me, that's for sure. It's more about still having a safe source, safe from creating tiny problems, artifacts, whatever, after encoding to lossy.
I am now wondering whether I want 24/48 or 16/48.


Re: Downsample 300 gigs of audio

Reply #6
24-bit may be better if you're going to edit/process the source, from what I read. 16-bit is fine for listening and for encoding to lossy.

Re: Downsample 300 gigs of audio

Reply #7
Quote
72 khz
There are two standard families:
44.1, 88.2, 176.4 kHz
48, 96, 192 kHz
I bet most DACs won't play any other sample rate.

Bit depth: 16 is CD quality. It allows for a dynamic range of 96 dB. To phrase it slightly differently: as 0 dBFS is the loudest possible signal, 16-bit can resolve details down to -96 dBFS. 24-bit can resolve even more detail, as it goes down to -144 dBFS.
In practice -120 dBFS is about the maximum on a hi-res recording. Today you can even buy power amps with an SNR close to 120 dB, so it is possible to resolve 20 bits, but you have to play horribly loud to make it audible.
In practice 16 is sufficient.

Sample rate: the highest frequency possible is fs/2, so 22.05 kHz at a 44.1 kHz sample rate, 24 kHz at 48, etc.
That is in excess of our hearing range. There might still be a slight benefit (filters, impulse response), but in practice one is very hard pressed to hear the difference between e.g. 24/96 and a downsampled 16/44.1.
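The figures in the two paragraphs above can be sanity-checked in a few lines of Python (a sketch; the dynamic-range formula 20*log10(2^N) gives the commonly quoted ~96/~144 dB, ignoring the small dither-related correction term):

```python
import math

# Dynamic range of N-bit PCM: the ratio between full scale and one
# quantization step, in dB. 16 bits -> ~96 dB (the CD figure), 24 -> ~144 dB.
def dynamic_range_db(bits: int) -> float:
    return 20 * math.log10(2 ** bits)

# Nyquist: the highest representable frequency is half the sample rate.
def nyquist_hz(sample_rate_hz: float) -> float:
    return sample_rate_hz / 2

print(round(dynamic_range_db(16)))  # 96
print(round(dynamic_range_db(24)))  # 144
print(nyquist_hz(44_100))           # 22050.0
print(nyquist_hz(48_000))           # 24000.0
```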

300 GB? That is nothing. Storage is cheap. Why waste your time on this?
TheWellTemperedComputer.com

Re: Downsample 300 gigs of audio

Reply #8
Quote
300 GB? That is nothing. Storage is cheap. Why waste your time on this?
Rough back-of-envelope calculations suggest > 200 GB saved when going to 48/16, at least with the slight volume reduction that would be needed to prevent clipping. For a spinning drive that's only a few dollars saved on average, and "average" is of course slightly misleading, since it is really a matter of how often you have to buy a new drive.

(Anyway, for decimating bit depth for FLAC use, LossyWAV would in principle provide an answer, but I don't know if it has settings that guarantee transparency.)
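For what it's worth, the back-of-envelope estimate above can be reproduced from the thread's own figures (~300 GB of sources averaging roughly 110 kHz / 24-bit, target 48 kHz / 16-bit); the assumption that FLAC size scales roughly with sample rate times bit depth is mine and is only an approximation:

```python
# Rough savings sketch. Assumed inputs: ~300 GB of FLAC averaging ~110 kHz
# 24-bit, converted to 48 kHz 16-bit. FLAC size is taken to scale roughly
# with (sample rate x bit depth).
src_gb = 300
src_rate_khz, src_bits = 110, 24
dst_rate_khz, dst_bits = 48, 16

ratio = (dst_rate_khz * dst_bits) / (src_rate_khz * src_bits)
saved_gb = src_gb * (1 - ratio)
print(f"size ratio: {ratio:.2f}")               # ~0.29
print(f"estimated savings: {saved_gb:.0f} GB")  # ~213 GB, i.e. > 200 GB
```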

Re: Downsample 300 gigs of audio

Reply #9
300 GB is a lot sometimes, if it's on a daily-use drive mixed with other content. For offline drives it's less of an issue.

Re: Downsample 300 gigs of audio

Reply #10
Quote
The point I don't get is bits per sample. The sources all have 24. So will I downsample to 24/48? Or 16/48?
With uncompressed audio both are proportional* to file size, so 24-bit is 50% bigger than 16-bit (ignoring embedded artwork).

With FLAC the approximate proportions remain. With lossy compression it depends on the compression settings (the resulting bitrate).

* For uncompressed audio you can calculate the bitrate as sample rate x bit depth x number of channels. CD audio is 44.1 kHz x 16 bits x 2 channels = 1411 kbps. There are 8 bits in a byte, so you can divide by 8 to get file size in bytes per second (plus any embedded artwork).
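The formula in the footnote, as a tiny sketch (the helper name is mine):

```python
# Uncompressed PCM bitrate = sample rate x bit depth x channels.
def pcm_bitrate_kbps(sample_rate_hz: int, bits: int, channels: int) -> float:
    return sample_rate_hz * bits * channels / 1000

cd_kbps = pcm_bitrate_kbps(44_100, 16, 2)
bytes_per_sec = 44_100 * 16 * 2 // 8    # 8 bits per byte
print(cd_kbps)          # 1411.2 (the CD figure)
print(bytes_per_sec)    # 176400 bytes per second, before artwork/metadata
```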

Re: Downsample 300 gigs of audio

Reply #11
44.1/16 is CD audio, and 48/16 a bit more. Going with these (but not less) isn't bad: transparent for 2-channel stereo playback. They'll also be more compatible, maybe negating the need to transcode to lossy. With ReplayGain the bitrate is reduced to 650-850 kbps.
Might work as well as lossy codecs with digital music players.

Re: Downsample 300 gigs of audio

Reply #12
Hi Squeller,

Like many people who start to consider a way to reduce the size of their lossless audio library, your main hesitation is about safety and peace of mind. That’s normal.
Unfortunately, there's always a risk that something goes wrong in the process, and that you'll then regret losing your original files.

You’re very clear about the fact that you don’t want any lossy encoder. As you say, “it's more about still having a safe source, safe from creating tiny problems, artifacts, whatever, after encoding to lossy”. I understand that very well.

But you should maybe consider the fact that downsampling is also a lossy operation. Of course, people here know perfectly well that we are not able to hear any of the ultrasonic noise beyond 20,000 Hz. So resampling a hi-res source to 44,100 or 48,000 Hz is logically safe from creating even the tiniest problem.

But is it totally certain? When I started to think about resampling my useless 192,000 Hz files, someone here warned me that a risk of clipping exists. I hadn't thought about that. So a tiny problem with resampling exists and is well known. Fortunately, it's easy to fix (by reducing the volume).

There's also a very slight risk concerning gapless playback. I'm pretty sure that no resampler will ever produce a big "pop" between two tracks, but I wouldn't be surprised at all if someone one day reports a slight but audible pop after using a resampler. Isn't that what happened to Opus, by the way? DSD users are also periodically annoyed by gap issues after converting and resampling their files. A fix exists here too: encode the whole album as one file and then cut it into several parts. But it's a boring task. foobar2000's converter also has an option to not reset DSP between tracks, precisely because DSP may break sound continuity between two tracks. So keep in mind that FLAC->FLAC is not totally immune to audible issues as long as another lossy process is involved.
It may be just an anecdote, but I noticed a minor gapless issue once when I tried to upsample my FLAC CD collection to 48 kHz (don't ask for details…).


From what I also understand from your first message, you're not fully convinced yet that 48,000 Hz is enough. It's an irrational feeling, but I understand it rather well.

I can’t decide for you. I’m also dealing with big high-resolution audio files and I’m also trying to reduce their footprint, for various reasons. The most stupid part of my practice is that I don’t even listen to my HR files; I simply convert them for my phone and my car. So it seems we’re sharing many things ;)
My own choice: I have an irrational attachment to high sampling rates, so I decided to give hybrid formats a chance. They are very different from all transform codecs: much less efficient, but they also keep much more inaudible information (it's easy to test). I only decided to resample most of my 192 kHz albums to 96 kHz because 90% of them have no useful information beyond 48,000 Hz. And I’m very happy with the result.
I’ll be fully honest: I haven’t given up on lossless because I simply can't yet. Part of my collection is now converted to lossy + correction file (which is lossless for playback and converting). So I can still go back to FLAC if necessary. But from what I’ve heard so far, my high-bitrate WavPack lossy files are totally gorgeous (and my irrational side is still very satisfied, because my files still have a higher resolution than 44,100 or 48,000 Hz). And when I don't entirely enjoy an album, I simply delete the correction files.
Average bitrate for 96,000 Hz files: ~800 kbps with WavPack (4 bits per sample). But you can stay in the FLAC ecosystem with LossyWAV (~700 kbps with the default setting at 96 kHz).

High-bitrate lossy encoding is not immune to tiny issues. But after all, the same is true of resampling your original files to a lower resolution. So choose your camp  :)


Wavpack Hybrid: one encoder for all scenarios
WavPack -c4.5hx6 (44100Hz & 48000Hz) ≈ 390 kbps + correction file
WavPack -c4hx6 (96000Hz) ≈ 768 kbps + correction file
WavPack -h (SACD & DSD) ≈ 2400 kbps at 2.8224 MHz

Re: Downsample 300 gigs of audio

Reply #13
Quote
Btw I thought this is quite a reasonable article (in German):
https://digital-audio-systems.com/sinn-und-unsinn-von-hohen-sampleraten
Um... I'd be careful. There are some questionable parts (not saying that's all of them):
Quote
Generell geht man davon aus, dass das Amplituden-Auflösungsvermögen des menschlichen Gehörs über 1 Mio. Abstufungen zulässt, so dass es noch Lautstärkeunterschiede von 0,0001dB unterscheiden kann, was in etwa der Größenordnung eines 20-22Bit Digital-Systems entsprechen würde.

Google Translate:
In general, it is assumed that the amplitude resolution of the human ear allows for over 1 million gradations, so that it can still distinguish volume differences of 0.0001 dB, which would correspond roughly to the order of magnitude of a 20-22 bit digital system.
Quote
wobei meiner Erfahrung nach auch 20Bit-Aufnahmen deutlich besser klingen als 16Bit-Aufnahmen.

Google Translate:
In my experience, 20-bit recordings also sound significantly better than 16-bit recordings.

Re: Downsample 300 gigs of audio

Reply #14
@danadam yes, some questionable statements, I noted that, but they don't do harm to me 😉

@guruboolez Thanks for your beautiful posting. Re: "also a lossy process": I understand that very well, but here it's a rather simple process with no psychoacoustic algorithms in place. Better for my peace of mind. Clipping, yeah, someone mentioned it. So you put some volume-decreasing DSP into the converter chain? Isn't that a tough strike against the peace of mind? 😂

Anecdote: Yesterday I thought, "aww let's go, f it", then ran the converter and deleted the originals. In the evening the uncertainty took over... I reverted the process via a restore from backup  ;D

I will probably end up just downsampling the 176.4/192 kHz ones to 88.2/96. Not winning too much disk space.

Re: disk space, yeah, cheap, but I'm just running away from the process of reorganizing/replacing the 5 TB HDD; all the copy actions involved will be time-consuming. The day will come.

LossyWAV, WavPack lossy: I need to look again. A while ago I decided to stay away from the less common formats (e.g. my source was TAK once). Reasons, benefit, etc.

Re: Downsample 300 gigs of audio

Reply #15
Quote
I will probably end up just downsampling the 176.4/192 kHz ones to 88.2/96. Not winning too much disk space.
44.1 kHz and 16-bit still sounds good to my ears. 96 kHz and 24-bit is just a waste of space without higher sound quality (at least for me). Maybe you will hear the difference between 44.1 kHz/16-bit and 96 kHz/24-bit!

Re: Downsample 300 gigs of audio

Reply #16
@Squeller : The hybrid formats have no psychoacoustics involved (at least WV). And encoding could be faster than some resampling processes. My bet is that they have similar complexity (at least neither transforms the signal the way psychoacoustic formats do).

You can also stay with the FLAC format through lossyWAV. The resulting files are as compatible as a normal FLAC encoding. But it might not be the best choice for preserving your peace of mind: LossyWAV is not tuned for high resolution, IIRC. Honestly, I would still be really confident, especially with non-aggressive settings. WavPack is a bit less advanced/sophisticated from what I understand, and eats almost anything.

Let's recap:

downsampling and keep lossless
  • pro: compatibility
  • pro: no artifacts
  • con: still heavy at 24-bit
  • con: minor risks of tiny issues (gapless, or a resampler bug in extreme situations)
  • con: technical properties are so 1990s (44.1 & 48 kHz) [but it's a stoopid argument]
  • transcoding: excellent source quality
hybrid format like LossyWAV & WavPack
  • pro: compatibility (at least with FLAC)
  • pro: keep high resolution and properties forever
  • pro: better space saving
  • con: minor risks of tiny issues (known consequence: minor hiss but not audible at default setting)
  • transcoding: excellent source quality as well
Wavpack Hybrid: one encoder for all scenarios
WavPack -c4.5hx6 (44100Hz & 48000Hz) ≈ 390 kbps + correction file
WavPack -c4hx6 (96000Hz) ≈ 768 kbps + correction file
WavPack -h (SACD & DSD) ≈ 2400 kbps at 2.8224 MHz

Re: Downsample 300 gigs of audio

Reply #17
Quote
Hi,

I am currently thinking about downsampling all my source audio that is > 48 kHz. I decided it's just a waste of space. On average the sources are around 110 kHz, about 300 GB in total. The target sample rate will be 48 kHz.

My FLAC source is not used for listening; it mostly serves as an encoding source for lossy, depending on the target device. Often Apple LC-AAC @ 48 kHz, because Chromecast. For the car I use MP3 @ 44.1; MP3 because the stupid car does not display embedded art from m4a files.

The point I don't get is bits per sample. The sources all have 24. So should I downsample to 24/48? Or 16/48?

Another thing: what do you think about an untypical in-between sample rate, 72 kHz, for an unjustified feeling of safety?  :)

Thx for your thoughts.

Thx

IMHO it's not worth the effort.
300 GB of disk space is far cheaper than the time you'll spend on downsampling.
The next day you'll have second thoughts and it's all gone.
Just buy a larger HD, or delete things you haven't listened to for more than 24 months, as you're probably not going to.
Just my 2 cents.


Re: Downsample 300 gigs of audio

Reply #19
WavPack looks like the future. Very interesting in these scenarios. 24/96 WAV to WV -b4 will be around 800 kbps at 96 kHz.

Re: Downsample 300 gigs of audio

Reply #20
Quote
But is it totally certain? When I started to think about resampling my useless 192,000 Hz files, someone here warned me that a risk of clipping exists. I hadn't thought about that. So a tiny problem with resampling exists and is well known. Fortunately, it's easy to fix (by reducing the volume).

In a lossy conversion scenario (so this is a bit off-topic; say, encoding to AAC @ 44.1 or 48 kHz), what would be the technically best way to treat clipping? With the help of the TruePeak scanner, I noticed tracks that already have clipping present in their FLAC instance. Say, 150 clipped samples per track. Post-conversion there may be 200.

How should the converter react? a) There's the "Amplify" DSP, which could just do some 0.x dB decrease of volume, but IMO this is not optimal because it would do that on every track.
b) The "ReplayGain alternative" DSP with "Only prevent clipping": "Max peak 0 dBTP" (did not remove clipping), so "Max peak -1 dBTP" (already a lot)?

And if DSP: before or after SoX? EDIT: I think post-resample with "0 dBTP, compress peaks" is a good way.

Thx

Re: Downsample 300 gigs of audio

Reply #21
If you are going for a "lossy codec that works in floating-point" (so that the format itself won't do any clipping), I guess the best is likely to just keep the old true peak RG tags? There will be some more overshoot, but those are lossy artefacts anyway.

For the kind of conversion you started this thread about, where the target format is integer PCM, then you just reduce volume in advance to protect against clipping. Just for the sake of a simple numerical example: say you target 18 bits. Scale everything by 0.5, and you have the same resolution in 19 bits, so decimate to 19 instead.
While a "19-bit" WAV file won't be smaller than a 24-bit WAV file, FLAC (and WavPack and TAK) will handle it so you don't need to worry that 19 is a non-standard number.
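The 0.5x / extra-bit trick above, as a toy sketch (the numbers and the `quantize` helper are mine, purely illustrative, not a real resampler):

```python
# Toy illustration: a resampled peak can overshoot past full scale
# (intersample peaks), so halve the signal first, then store at one extra
# bit so the effective resolution stays the same.
def quantize(x: float, bits: int) -> int:
    """Clamp-and-round a [-1, 1]-ish float to an integer code at `bits` depth."""
    full_scale = 2 ** (bits - 1) - 1
    return max(-full_scale - 1, min(full_scale, round(x * full_scale)))

peak = 1.03                        # a resampled peak slightly above 0 dBFS
clipped = quantize(peak, 18)       # at 18-bit it pins to full scale: clipping
safe = quantize(peak * 0.5, 19)    # halved, stored with one extra bit: no clip

print(clipped == 2 ** 17 - 1)  # True: hard-clipped at 18-bit full scale
print(safe < 2 ** 18 - 1)      # True: comfortably inside the 19-bit range
```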

SoX can warn you about clipping, but I don't know if there is any easy way to script it in a way so you don't have to manually review.


Anyway ... I would still keep the collection as is.
Or WavPack -c it, keep the correction files on the Big Bad Spinning Drive and the .wv files on your everyday portable or whatever.

Re: Downsample 300 gigs of audio

Reply #22
Quote
If you are going for a "lossy codec that works in floating-point"
Yes, now just this lossy conversion scenario: LC-AAC files for my various target devices. I guess the clipping we're talking about is quite irrelevant, because this is mostly for everyday casual listening, but if there's a state-of-the-art way to go, why not go this way. If we can make things complicated, why not do it? :-D

As for the initial topic, I just downsampled the very high sample rates to either 88.2 or 96. I will not downsample further; sooner or later I'll need new disks anyway. I looked at WavPack hybrid, but I cannot invest time in the knowledge and concepts there. Staying with my simple FLAC infrastructure.

 

Re: Downsample 300 gigs of audio

Reply #23
Floating-point formats can exceed "0 dB" without clipping.
To prevent it from clipping when you feed it to a device that will convert to a DACable integer format, use a digital volume control, which is what ReplayGain does.
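A minimal sketch of that point (the helper names are mine): float samples can go past "0 dB" harmlessly, but the conversion to int16 clips unless a ReplayGain-style digital volume is applied first.

```python
# Float audio tolerates samples > 1.0; fixed-point does not.
def to_int16(x: float) -> int:
    # Hard clamp, as an integer output stage effectively does.
    return max(-32768, min(32767, round(x * 32767)))

hot = 1.2                               # fine as a float sample (~+1.6 dBFS)
print(to_int16(hot))                    # 32767 -> clipped at full scale
gain_db = -6.0                          # e.g. a ReplayGain adjustment
scaled = hot * 10 ** (gain_db / 20)     # ~0.60 after the digital volume
print(to_int16(scaled) < 32767)         # True: no clipping after the gain
```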