
Recent Posts

Support - (fb2k) / Re: Hanging at close
Last post by Case -
I recommend using Process Explorer to check thread activity. Double-click on foobar2000.exe in the main view to open its properties, then navigate to the Threads tab. The top-most thread there should point to the reason for the hang. Take a screenshot of what it shows, and also check the stack of the top thread.
3rd Party Plugins - (fb2k) / Re: Columns UI
Last post by always.beta -
Could the taskbar previews be changed to show the cover art instead of the interface thumbnail? Or could a toggle option be added?
Support - (fb2k) / Hanging at close
Last post by devinthedude57 -
Foobar2000 (v1.3.15) recently began freezing when I try to close the program. Windows just tells me it's "not responding", and I have to wait 5-10 minutes for it to force the shutdown. This just began happening a couple of days ago, and I have tried to eliminate possible culprits, but so far to no avail. I'm starting to get annoyed with this, so I'll provide any information you guys need to help me find and fix this problem.
FLAC / what is the maximum value of MSB in the Residual?
Last post by yytang -
Residual values
Encoding individual residual values with Rice coding requires only the Rice parameter and the values themselves. First, one must convert any negative value to positive by multiplying it by -1, subtracting 1, and prepending a 1 bit. If the value is already non-negative, prepend a 0 bit instead. Next, we split our new value into most significant bits (MSB) and least significant bits (LSB), where the length of the LSB is equal to the Rice parameter and the MSB contains the remaining bits. The MSB value is written unary encoded, whereas the LSB is written directly in binary.

So what is the maximum value of the MSB in the residual?
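The mapping and split described above can be sketched in a few lines of Python (this is an illustrative sketch, not the FLAC reference code; it assumes the common unary convention of zero bits terminated by a one bit):

```python
def rice_encode(value: int, rice_param: int) -> str:
    """Rice-encode one residual value, returning the bit string."""
    # Fold the signed residual to a non-negative integer, as described:
    # negative v -> (-v - 1) with a 1 bit prepended, i.e. -2v - 1 (odd);
    # non-negative v -> v with a 0 bit prepended, i.e. 2v (even).
    folded = (-value) * 2 - 1 if value < 0 else value * 2
    msb = folded >> rice_param                # quotient, written unary
    lsb = folded & ((1 << rice_param) - 1)    # remainder, written in binary
    unary = "0" * msb + "1"                   # msb zero bits, then a stop bit
    binary = format(lsb, f"0{rice_param}b") if rice_param else ""
    return unary + binary
```

Note that the MSB quotient is simply `folded >> rice_param`, so its maximum is bounded by the bit width of the folded residual: for an n-bit folded value and Rice parameter k, the quotient can be as large as 2^(n-k) - 1.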
Interesting possible improvement though not a statistically significant result.

I've wondered, with the 1.2 push towards high bandwidth at lower bitrates and higher fidelity in the HF, whether some of these developments may be overfitting a model to data from a few helpful HA listeners.

The tests we see encode samples of professionally mastered studio recorded audio and have them rated by "golden ears" listeners using high-quality setups in ideal silent listening conditions. But won't high frequency content and stereo separation matter considerably less to listeners when the original recording was noisy, their equipment and their hearing are average rather than stellar, or their listening environment has background noise? Won't the low frequencies typically be less masked? And aren't those use cases especially important for WebRTC etc?

A couple years ago I subjected a few laypeople I know to some blind listening tests, seeing what they thought of different encodings of academic conference presentations recorded with handheld recorders, mostly to decide on bitrates for MP3 and Opus. I was surprised to find they generally preferred the LP7.5 .wav to the originals! Of course that's unthinkable for the material used in most codec tests, but for these, even after intelligent postprocessing, removing HF noise more than made up for what HF signal was removed.
My previous post wasn't any kind of explanation for why it doesn't work for you. It was a simple statement of fact saying that the component works in exactly the same way when a new track begins.

Also, it works fine for me. This GIF is quite large, though (12 MB).
Is it in error when the PCM sample rate is 88.2 kHz?
What do you mean?

What happens with the old and new algorithms when you take the samplerate up so that a good portion of the noise shaping is still present?
I ran the scan with the original 2Bdecided algorithm and the new ITU one on a few different sample rates. The results are rather strange:

Sample rate   ITU        Original
352.8 kHz     -3.99 dB   -1.98 dB
192 kHz       -0.96 dB   -3.74 dB
48 kHz        -1.47 dB   -1.98 dB
44.1 kHz      -1.47 dB   -2.07 dB

Especially curious how the 192 kHz version gets treated so differently.

Edit: a wrong sample rate had slipped into the table.
The code inside both components is identical for on_playback_new_track.
Embarrassed :-[ , and forgive my poor English. What I mean is: even after modifying the code as you said, the cover still does not refresh automatically when I play a new track. I have to refresh it manually, or wait about 20 seconds before the new track's cover appears.
Is it in error when the PCM sample rate is 88.2 kHz?

What happens with the old and new algorithms when you take the samplerate up so that a good portion of the noise shaping is still present?

Where might I find that DSD file to try my procedure on?
I happened to find it with SACD and DSD Google searches when I was looking for DSD test files to play with. Its legality might be an issue.

I used poor wording in my original post. I did see how you performed your test, but I used a different method that matched the OP's ABX trial.

Converting the previous Pink Floyd DSD file to 96 kHz PCM leaves the majority of the noise-shaping noise out. The RMS power for this version of the file is -22.68 dB. High-passed at 22 kHz, the ultrasonics measure -61.99 dB. A 22 kHz lowpass gives the same RMS power as the full file, -22.68 dB. And for Greynol's information, the ITU 1770 loudness scanner gives the same loudness value for the full 96 kHz track and the lowpassed one.
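For anyone wanting to reproduce this kind of measurement, RMS power in dB is just 20·log10 of the RMS amplitude relative to full scale. A minimal pure-Python sketch (not the tool used for the figures above, which came from an actual audio analyzer):

```python
import math

def rms_db(samples):
    """RMS level of a sample block, in dB relative to full scale (1.0)."""
    if not samples:
        raise ValueError("empty block")
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms) if rms > 0 else float("-inf")

# A full-scale 1 kHz sine at 96 kHz: RMS = 1/sqrt(2), about -3.01 dBFS.
sine = [math.sin(2 * math.pi * 1000 * n / 96000) for n in range(96000)]
```

High-passing or low-passing before calling `rms_db` would then give the band-limited power figures quoted above.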