
Recent Posts

I have a music folder on my laptop and another one on an external drive. When I go to Preferences > Media Library and add both folders, all albums from both locations are listed together under "All Music". Is it possible to split the album list so that the C:/ and E:/ folders show up as separate sections?
the specific models I am looking for information about were not reviewed.
Do you have any reliable information sources about UIEM/CIEM to suggest?

About all the IEM/CIEM available in the world?
You should mention the "specific model" IMHO
The encoding-speed improvement was confirmed on my Ryzen 5 1600X. The actual output bitrate was strange when the input was a short stereo white-noise file; it was not so with other files.
Yes, the bitrates aren't very predictable, I think I need to develop some proper psymodel, that should improve encoding in general and possibly also the bitrates. We'll see.
Why do foobar2000's parameters "--mpeg-vers 4 --q 500 -w - -o %d" (which worked with FAAC 1.28) not work with FAAC 1.29.6?
That may be related to a recent change: existing files are no longer overwritten by default. The --overwrite option enables it.
You've probably seen this already, but since others may stumble on the subject for the first time, I find these videos really relevant:

And for those who want some reading material:

And there are two wikis associated with the videos above:
Hi all,
First, I know a million discussions addressing these questions probably exist, both on here and on other sites, but as a curious hobbyist sound designer/musician, I like to know what people think of various hot topics in the digital realm. I've always enjoyed the critical attitude here so I look forward to discussion. It's a long post since I have a lot of thoughts. They're just my musings though, I don't really expect them all to be responded to in one swoop.

First topic: What do you see as advantages when using very high bit depth and sampling rate, say, above 24 bit/48 K for digital audio? A friend of mine asked me about things like HDTracks or HDMusic or something a few years ago, and ever since then I've been thinking about it when I produce audio. I have two completely separate viewpoints on this, one from a sound design aspect and one from a listening aspect.

The reason I like to use higher bit depths is perhaps obvious: greater dynamic range. Though I read somewhere that going beyond 24 bits is pointless, since 24 bits already offer more dynamic range than most high-end equipment can deliver, and human hearing can't exceed it either. I don't have any hard evidence to support this, but such statements do seem logical to me: 6 dB per bit × 24 bits = 144 dB, which is roughly the range of human hearing from what I remember. So the only real reason to go beyond would be special-case stuff, I would think, but I don't know any examples where 32 or more bits would be beneficial. Is this a reasonably well-founded conclusion? Are there still advantages above 24 bits?
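To sanity-check the 6-dB-per-bit figure, here's a tiny Python sketch (the function name is my own, purely for illustration) computing the theoretical dynamic range of b-bit linear PCM:

```python
import math

def dynamic_range_db(bits: int) -> float:
    """Theoretical dynamic range of b-bit linear PCM: 20*log10(2**bits)."""
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))  # ~96.3 dB (CD audio)
print(round(dynamic_range_db(24), 1))  # ~144.5 dB
```

The exact per-bit figure is 20·log10(2) ≈ 6.02 dB, so 24 bits lands right around the 144 dB quoted above.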

Higher sampling rates seem more useful to me. Most mid-range digital audio recorders from Olympus, Zoom, Edirol, etc. can go up to 96 K. Provided they can actually pick up frequencies up there and you're using mics that can take it, you can pitch those recordings down by an octave or more and the sounds won't audibly degrade in quality, because inaudible frequencies, if they exist, are being transposed into the audible range. This seems useful for sound design, and the scope of effectiveness increases with the original sampling rate. If you can capture at 192 K, for example, and still get representative material in those really high frequencies, then transposing two octaves down could still be very sonically useful, and you could go a lot further down if audible highs are not of utmost concern.
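The transposition arithmetic behind that trick is simple; a hypothetical helper (my own, just to make the numbers concrete) predicting where a recorded frequency lands after slowed playback:

```python
def transposed_freq(f_recorded: float, record_rate: int, playback_rate: int) -> float:
    """Frequency heard when a recording is replayed at a slower sample rate.

    Playing a 96 K recording back at 48 K halves every frequency
    (one octave down); ultrasonic content slides into the audible band.
    """
    return f_recorded * playback_rate / record_rate

# A 30 kHz ultrasonic component in a 96 K recording, pitched one octave down:
print(transposed_freq(30_000, 96_000, 48_000))  # 15000.0 -> now audible
```

The same function shows why 192 K buys two usable octaves: a 40 kHz component played at 48 K comes out at 10 kHz.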

Using higher sampling rates and higher bit depths also seems useful for oversampling purposes in DSP. The rounding errors in such processes would be less audible, which I think mostly means less aliasing and similar distortion, and you can then downsample the end result of the more precise processing instead of working at a low sampling rate/bit depth to begin with. I do this in Reaper pretty often, with 96 K as my baseline, downsampling to 44.1 K after I render. If I work natively at 44.1 K, I can hear more aliasing in some processes like pitch shifters, virtual analog synths, distortion effects, samplers, etc. Some processes don't seem designed to work at higher rates, though, so they might break; I have several that illustrate this.
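The aliasing part can be sketched with plain frequency-folding arithmetic (my own helper, not anything from Reaper): a distortion effect that generates a harmonic above Nyquist folds it back into the audible band at 44.1 K, but at a higher working rate the harmonic stays ultrasonic and a good downsampler simply filters it out.

```python
def alias_freq(f: float, sample_rate: float) -> float:
    """Frequency at which a component f actually lands after sampling."""
    f = f % sample_rate
    # Anything above Nyquist (sample_rate / 2) folds back down:
    return sample_rate - f if f > sample_rate / 2 else f

# Distorting a 15 kHz tone generates a 2nd harmonic at 30 kHz:
print(alias_freq(30_000, 44_100))   # 14100.0 -> audible aliasing
print(alias_freq(30_000, 176_400))  # 30000.0 -> stays ultrasonic, removed on downsample
```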

I discovered, mostly by accident, that using higher sampling rates improves the sound of aggressive non-psychoacoustic compression like ADPCM. The 1-bit DPCM format used on the NES for its low-quality samples sounds much better if oversampled. Used at 176 K, that 1 bit of audio actually sounds pretty good, considering how cruddy it sounds at the sampling rates it's intended for! Of course size is the tradeoff here, and it couldn't be used practically, but the nerd in me still can't resist being fascinated.
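That 1-bit scheme is essentially delta modulation: each sample is a single up/down bit that the decoder integrates with a fixed step. A toy round-trip sketch (parameter values are my own guesses, not NES-accurate) shows numerically why oversampling helps, since at a low rate the step size can't keep up with the waveform's slope:

```python
import math

def delta_mod_error(sample_rate: int, tone_hz: float = 440.0,
                    step: float = 0.08, duration: float = 0.02) -> float:
    """RMS error of a 1-bit delta-modulation round trip on a sine tone.

    Each input sample is encoded as one up/down bit; the decoder adds or
    subtracts a fixed step, as in the NES's delta-modulation sample channel.
    """
    n = int(sample_rate * duration)
    acc = 0.0       # decoder's running approximation of the signal
    sq_err = 0.0
    for i in range(n):
        x = math.sin(2 * math.pi * tone_hz * i / sample_rate)
        bit = 1 if x >= acc else -1   # the single bit per sample
        acc += bit * step
        sq_err += (x - acc) ** 2
    return (sq_err / n) ** 0.5

# Oversampling lets 1 bit track the waveform far more closely:
print(delta_mod_error(22_050) > delta_mod_error(176_400))  # True
```

At 22 K the encoder suffers slope overload on the steep parts of the sine; at 176 K only small granular noise remains, which matches the "1 bit sounds pretty good at 176 K" observation above.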

The reason I can see HD audio being nonsense is if you're not really into sound design or some other form of high-level creation and just want to listen to music as it's intended. While I can understand why people prefer lossless compression, I don't know the rationale behind using anything higher than 24/48, or even 16/48, for music listening.

Where it gets really interesting is with psychoacoustic encoding. AAC, WMA, and Vorbis can all do 96 K or beyond, though they end up lowpassing and doing weird things in the highs, which is sometimes audible if you lower the pitch. Why a psychoacoustic codec would offer sampling rates above 48 K I don't really know, since I can barely hear a 20 K sine wave and don't know anyone who can hear higher; a rate of 44.1 K is quite sufficient to reproduce that.

A similar point can be made about dynamics, if you have an encoder that supports 24-bit input and a decoder that can decode in 24 bit. I only tested this with Vorbis a long time ago, but quality snowballs out of control below a certain level, and I think by around -110 dB there's hardly anything to be heard at all (of course I had to normalize the volume to hear this). I do recall reading threads, though, where various lossy formats were criticized for producing artifacts in quiet but still audible parts, so where the line between saving and wasting space falls is something I don't know but have always found interesting. I guess there's some subjectivity to it, but there's hard science behind it too.

BTW I realize I'm making a lot of statements without proof and that is against the rules here. I am prepared to back up most if not all my statements, but I really am not certain about the extent of proof needed or what I will need to prove, so I will leave that open for discussion and be open to direction.

Going off that topic: the other thing I've been thinking about lately is the whole analog vs. digital debate. How well-founded are criticisms that digital audio sounds different from analog? There are times when I believe I can perceive a warmth in analog that I don't in digital, but I highly doubt I could identify which was which. I was never a huge fan of working with analog gear, because I'm not old enough to have truly appreciated it, and I'm more than happy to digitize any analog media I have made or might obtain in the future. Since I prefer getting digital copies directly, I think I'm safest saying I just don't perceive a difference, or at the very least prefer digital for various reasons. I wonder whether what most people claim to hear as a difference between the two media is just different masters, versions, and/or equipment. That's been my theory ever since I heard about the controversy, but I've always wondered if there was more to it, and whether any controlled tests were ever done to put some closure on whether there's a point in using analog over digital, or HD music instead of normal.

Again, sorry for the long-winded post, but both of these things have been on my mind a lot lately.
Audio Hardware / Universal and Custom IEM information sources?
Last post by lélé -

After almost twelve years of service, I will replace my Westone UM2 IEMs.

I am hesitating between universal and custom IEMs and have found contradictory information about comfort (understandably), isolation, and sound quality.

Moreover, when it comes to choosing a specific model, I struggle to find reliable information among audiophile digressions.
I read several posts on the Chin Roi blog, but unfortunately he left the place three years ago, and the specific models I am looking for information about were not reviewed.

Do you have any reliable information sources about UIEM/CIEM to suggest?


3rd Party Plugins - (fb2k) / Re: Linear Phase Subwoofer
Last post by hatrix -
The new update is great. Definitely the best plugin for subwoofer management, and the delay is working fantastically.
General Audio / Parametric EQ and Crossfeed for Android?
Last post by lélé -

I use my Android smartphone and IEMs to listen to music from streaming platforms (Qobuz, Spotify, and I may give Tidal a try soon).

I am looking for an Android application with:
  • Parametric EQ
  • Crossfeed
that I could use while listening from the streaming platforms mentioned above.

Do you have any suggestions?

Thanks!