
Recent Posts
1
Support - (fb2k) / Re: Resampler dbpoweramp/SSRC and RetroArch do not null, why?
Last post by .halverhahn -
Update:

This happens when I resample the source (192 kHz) 20 times, alternating between 48 kHz and 96 kHz (48 kHz, 96 kHz, 48 kHz, 96 kHz, ...).

I then null the first resample against the 20th resample (same resampler for both).

dbpoweramp/SSRC looks awful.

Used resamplers:
foobar2000 2.1.5 Resampler dbpoweramp/SSRC
foo_dsp_resampler 0.8.7+ (SoX)

Please see screenshots
2
Listening Tests / Re: Personal blind sound quality comparison of xHE-AAC, Ogg Vorbis, and TSAC
Last post by 2012 -
but they don't scale to transparency... at least not in 2024.

They don't target transparency, which is why I think it would be more useful if:

* Samples used in the tests were not handpicked for being challenging, but picked to be representative of average everyday use (talk shows, pop music, etc.).

* Classical codecs were tested at a much lower bitrate than in this test, maybe something in the range of 32-48 kbps.
4
3rd Party Plugins - (fb2k) / Re: External Tags
Last post by Case -
The tags are binary and not meant to be edited outside the component, but of course if you know what you are doing, that's fine. Use a hex editor; a regular text editor can mutilate the binary data. If the old name and the new name have the same length, the edit is simple. If the length changes, the four bytes before the name must also be updated to specify the new name length. The strings are not null-terminated; the bytes immediately after the name specify the subsong number.
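As an illustration only, here is a minimal Python sketch of patching such a length-prefixed name. The exact layout is an assumption (a 4-byte little-endian length field immediately before the raw, non-null-terminated name); the component's real format may differ, so always work on a backup copy:

```python
# Hypothetical sketch: patch a length-prefixed name inside a binary tag file.
# ASSUMPTIONS (not confirmed for this component): the name is stored as raw
# bytes preceded by a 4-byte LITTLE-ENDIAN length field, with no null
# terminator. Verify the real layout in a hex editor first.
import struct

def rename_entry(data: bytes, old: bytes, new: bytes) -> bytes:
    """Return a copy of `data` with `old` replaced by `new`,
    updating the assumed 4-byte length prefix before the name."""
    needle = struct.pack("<I", len(old)) + old   # assumed length prefix + name
    pos = data.find(needle)
    if pos < 0:
        raise ValueError("length-prefixed name not found")
    patched = struct.pack("<I", len(new)) + new  # new length + new name
    return data[:pos] + patched + data[pos + len(needle):]

# Toy blob: length 7, name "old.mp3", then (assumed) subsong-number bytes.
blob = b"\x07\x00\x00\x00old.mp3\x01\x00\x00\x00"
print(rename_entry(blob, b"old.mp3", b"renamed.mp3"))
```

If the names are the same length, the length field is unchanged and a plain in-place overwrite in the hex editor is enough, exactly as described above.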
5
Support - (fb2k) / Resampler dbpoweramp/SSRC and RetroArch do not null, why?
Last post by .halverhahn -
I've tested various resamplers, and something strange happened: when I null the resampled file against the original, dbpoweramp/SSRC and RetroArch do not null.

Why do dbpoweramp/SSRC and RetroArch not null (0-24 kHz) when all the other resamplers do?

Used Resamplers:

foobar2000 2.1.5 Resampler (dbpoweramp/SSRC, RetroArch)
foo_dsp_resampler 0.8.7+ (SoX)
foo_dsp_src_resampler 1.0.14 (SRC - Secret Rabbit Code)
r8brain 2.10 free
Audition 3.0
ffmpeg 5.1.2

Please see screenshots.
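For readers unfamiliar with the term, "nulling" means subtracting two sample-aligned decoded waveforms; if the residual is (near) zero, the files null. A minimal illustrative sketch of that arithmetic (real tests would decode both files to float PCM first):

```python
# Illustrative "null test": subtract two sample-aligned waveforms and
# report the peak absolute residual. Zero (or near zero) means the
# files null; anything larger means the resamplers differ audibly or not.

def null_residual(a, b):
    """Peak absolute difference between two equal-length sample lists."""
    if len(a) != len(b):
        raise ValueError("files must be sample-aligned and equal length")
    return max(abs(x - y) for x, y in zip(a, b))

original      = [0.0, 0.5, -0.5, 0.25]
resampled_ok  = [0.0, 0.5, -0.5, 0.25]    # identical output: nulls
resampled_off = [0.0, 0.5001, -0.5, 0.25] # slightly different: does not null

print(null_residual(original, resampled_ok))   # → 0.0
print(null_residual(original, resampled_off))  # small nonzero residual
```

Resamplers that do not null may still be correct; differing filter delays, phase responses, or dither will all produce a nonzero residual.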

7
Listening Tests / Re: Personal blind sound quality comparison of xHE-AAC, Ogg Vorbis, and TSAC
Last post by C.R.Helmrich -
Thanks a lot, Kamedo2, for your meticulous listening! I know how much work blind listening tests at these bitrates are. Some distilled observations and connections with previous discussions on HA:

  • In your earlier (09/2020) tests (https://hydrogenaud.io/index.php/topic,119861.0.html), exhale 1.0.6 reached a mean score of "only" 4.47 on the same 15 samples. It's nice to have some evidence now that exhale's higher-rate audio quality has improved further over the last 3-4 years.
  • Regarding my score estimator (https://hydrogenaud.io/index.php/topic,118888.msg989150.html#msg989150), your average results from yesterday are now above my estimate, similar to IgorC's 2020 results at 192 kbps. Maybe I can finally update my score estimator at the higher rates :)
  • AFAIR, this is the first blind test in which a machine learned end-to-end codec is compared against high-performance classical audio codecs. This is very valuable information since it puts all these "AI codecs outperform MP3" hyped claims into perspective - machine learned codecs deliver decent audio quality, but they don't scale to transparency... at least not in 2024.

Chris
10
Lossless / Other Codecs / Re: HALAC (High Availability Lossless Audio Compression)
Last post by Hakan Abbas -
HALAC's encode speed is slightly better than its decode speed (I mentioned V.0.2.7's small encode-speed improvement in my previous post). This is actually normal for HALAC: because the encode speed is extremely fast, the decode speed merely seems to lag behind; it is not actually slow.

The encode process takes large data and compresses it into something small, while the decode process takes compressed data and produces a larger output, which is a disadvantage. The other problem is dependency: decoding usually depends on previously decoded data, so one code cannot be processed until the previous one has been decoded. In other words, some operations cannot be parallelized, and a bottleneck can occur at this stage.
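The dependency problem above can be seen in a generic example (this is delta coding for illustration, not HALAC's actual format): each decoded sample needs the previous one, so the decode loop cannot be naively parallelized, while the encoder can read all its inputs independently once they are in memory.

```python
# Illustrative only (NOT HALAC's format): delta coding demonstrates the
# serial dependency that commonly limits decode-side parallelism.

def delta_encode(samples):
    prev, out = 0, []
    for s in samples:
        out.append(s - prev)  # each delta computable once inputs are known
        prev = s
    return out

def delta_decode(deltas):
    prev, out = 0, []
    for d in deltas:
        prev += d             # serial dependency: needs the previous sample
        out.append(prev)
    return out

samples = [10, 12, 11, 15]
print(delta_encode(samples))                         # → [10, 2, -1, 4]
print(delta_decode(delta_encode(samples)) == samples)  # → True
```

Codecs work around this by cutting the stream into independently decodable blocks, trading a little compression for decode-side parallelism.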

Some codecs (especially image codecs) try to relieve the decode stage by doing more work during encoding; in other words, they hand most things to the decoder ready-made, because decode speed matters more for them. This approach also helps increase the compression ratio, since more operations can be performed at the encode stage: more possibilities and situations can be evaluated, and approaches such as content modeling become feasible.

A similar approach could also be taken for HALAC; perhaps we will see something like this in the "-high" mode. But then, I suspect, it would be no different from other codecs. My goal is to concede as little speed as possible.