Recent Posts
CUETools / CRC w/o null
Last post by mathmat -
I was wondering if anyone knew how Cuetools knows which samples are null when it calculates a "CRC32 w/o null"?

Basically what I am trying to do is put a wav file in a hex editor, manually remove the null samples, and then do a CRC32 check to see if it matches the one given by Cuetools.

Is this possible at all?

Also, is a null sample always 4 bytes (left and right channel)? Or can it be just 2 bytes (left or right null while the other is not null)?
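For experimenting along those lines, here is a minimal sketch that assumes a "null sample" is a whole 4-byte stereo frame of zeros (that assumption is exactly what the question asks about, so treat it as a guess, not CUETools' confirmed behaviour):

```python
import zlib

def crc32_without_null(pcm: bytes, frame_size: int = 4) -> int:
    """CRC32 over 16-bit stereo PCM data, skipping all-zero frames.

    Guess: a "null sample" is a whole 4-byte stereo frame (left and
    right both zero). CUETools may define it differently.
    """
    crc = 0
    null_frame = b"\x00" * frame_size
    end = len(pcm) - len(pcm) % frame_size  # ignore a trailing partial frame
    for i in range(0, end, frame_size):
        frame = pcm[i:i + frame_size]
        if frame != null_frame:
            crc = zlib.crc32(frame, crc)  # feed running CRC frame by frame
    return crc & 0xFFFFFFFF

# Under this definition, digital silence does not affect the checksum:
audio = b"\x01\x00\xff\x7f" * 10
silence = b"\x00" * 400
assert crc32_without_null(silence + audio + silence) == crc32_without_null(audio)
```

With this definition, a 2-byte zero in one channel of an otherwise non-zero frame would still be hashed; if the result doesn't match CUETools' number, that per-frame assumption is the first thing to vary.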

This is an entirely academic exercise. I'm just very curious how it all works.

Thanks!

3rd Party Plugins - (fb2k) / Re: ReplayGain DSP - Alternative ReplayGain implementation by Case
Last post by Case -
Good to hear everything works now. And yeah, makes perfect sense to use the fast DSP option if you have DSPs that are so demanding to initialize.

@Defender : I just uploaded a new version of the ReplayGain DSP component. This component was always supposed to carry the previous gain value over exactly, to guard against, for example, radio stations suddenly blasting your ears off. But at some point, as new features were added, applying the previous gain stopped working whenever any auto-RG scanning option was in use. I think this behaviour alone is quite useful: you play, say, a metal track with -14 dB gain, switch to a metal radio station, and playback continues with -14 dB gain.

And I made another improvement. The pre-track quick RG scan will now start its timing after opening the source track, so the process won't be canceled while waiting for the connection to be established. It will also automatically allow an extra two seconds for radio streams to get a loudness estimate.

Perhaps these changes help you get more comfortable radio loudness levels.
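The carry-over behaviour described above can be sketched roughly like this (a simplified illustration, not the component's actual code; all names are made up):

```python
import dataclasses
from typing import Optional

@dataclasses.dataclass
class GainState:
    """Last gain actually applied, carried across track changes."""
    last_gain_db: float = 0.0

def choose_gain(state: GainState,
                tag_gain_db: Optional[float],
                scanned_gain_db: Optional[float]) -> float:
    """Pick the gain for the current item.

    Prefer an existing RG tag, then a finished quick-scan result;
    otherwise keep applying the previous track's gain so e.g. a
    -14 dB metal track hands -14 dB to the metal radio stream.
    """
    if tag_gain_db is not None:
        state.last_gain_db = tag_gain_db
    elif scanned_gain_db is not None:
        state.last_gain_db = scanned_gain_db
    # else: no info yet, keep the previous gain unchanged
    return state.last_gain_db

state = GainState(last_gain_db=-14.0)
assert choose_gain(state, None, None) == -14.0   # stream, scan pending
assert choose_gain(state, -9.5, None) == -9.5    # tagged track takes over
```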
CD Hardware/Software / Re: Figuring out a disc's offset?
Last post by korth -
Just checking.
Next question: CUE file.
Is there a reason SESSION 01 has to be BINARY? (perhaps this method preserves the exact start position of the data track, IDK)
SESSION 02 has to be BINARY, but why don't you store SESSION 01 as FLAC?
3rd Party Plugins - (fb2k) / Re: External Tags
Last post by Case -
I updated the True Peak Scanner preview version to detect streams and cleanly exit out after 10 seconds of running.

If you are going to tag hundreds of streams with this I suggest doing some preparations first, as we noticed that stream tagging may not always get handled by the fallback tag writing support.

To make sure you can tag the streams, do the following:
1. Hold Shift and right-click the items, then select 'Tagging' -> 'Create External Tags'.
2. Shift+right-click the items again and select 'Tagging' -> 'Wrap for external tags'. This changes the URLs to include a protocol that forces them to be handled by External Tags.
3. Now you can scan them with the above linked True Peak Scanner and tag writing won't error out.
If you don't want the wrapping to be "permanent", you can create a new temporary playlist of the items and do the operations there.
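The wrapping step presumably just prepends a scheme so foobar2000 routes tag writes for those items through External Tags. A toy illustration (the `exttag://` prefix is a made-up placeholder, not the component's real protocol name):

```python
def wrap_for_external_tags(url: str, prefix: str = "exttag://") -> str:
    """Prepend a routing scheme to a stream URL, idempotently.

    "exttag://" is a hypothetical placeholder; the real External Tags
    component uses its own protocol name.
    """
    if url.startswith(prefix):
        return url  # already wrapped, leave untouched
    return prefix + url

wrapped = wrap_for_external_tags("http://example.com/stream")
assert wrapped == "exttag://http://example.com/stream"
assert wrap_for_external_tags(wrapped) == wrapped  # wrapping twice is safe
```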
Lossless / Other Codecs / Re: Monkey's Audio 11.00 (MULTI-THREADED ENCODING / DECODING)
Last post by Porcus -
Disclaimer: I don't use Monkey's for anything but testing, so my knowledge might be a bit off.


> At least 95% of the total processing time in codecs is spent in stages such as prediction, filtering, hashing, entropy coding and encoding/decoding. Those who develop codecs know this very well.
I assume your experience is mostly with lossy (video) codecs, where that breakdown may well hold, but better data are available for lossless audio codecs. For example:
FLAC profiling: https://hydrogenaud.io/index.php/topic,127409.msg1059257.html#msg1059257
The impact of audio checksumming: https://hydrogenaud.io/index.php/topic,123786.0.html
... and the speed of fast verification, i.e. checksumming on the encoded stream: https://hydrogenaud.io/index.php/topic,125791.msg1042825.html (More formats do offer checksums on the encoded stream and could do this fast verification; FLAC has block-level checksums, for example, but since FLAC decodes so quickly, the added value is much smaller than for slower codecs like Monkey's and OptimFROG.)
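To make the fast-verification point concrete, here is a sketch of the two strategies (illustrative names, not any codec's real API):

```python
import hashlib

def verify_encoded_stream(encoded: bytes, stored_digest: str) -> bool:
    """Fast path: hash the compressed bytes directly; no decoding needed,
    so verification runs at I/O speed regardless of decoder speed."""
    return hashlib.md5(encoded).hexdigest() == stored_digest

def verify_decoded_audio(decode, encoded: bytes, stored_audio_digest: str) -> bool:
    """Slow path (FLAC-style audio MD5): every sample must be decoded
    first, so verification is bounded by decoder speed."""
    return hashlib.md5(decode(encoded)).hexdigest() == stored_audio_digest

enc = b"compressed stream bytes"
assert verify_encoded_stream(enc, hashlib.md5(enc).hexdigest())
```

This is why encoded-stream checksums matter more for slow-decoding formats: the fast path skips the decoder entirely.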

> These stages should already be present in HALAC. The remaining tiny time can be spent on other details. And a maximum of 5% speed loss required for these will not leave HALAC behind in any way. It tries to combine reasonable compression ratio with excellent speed balance. Maybe we will see more in the future. There hasn't been much movement or innovation in lossless audio for more than 20 years. At least we should thank HALAC for this.
I agree. But it targets a different niche. FLAC does this, Monkey's does that. FLAC suits my needs better, so I use FLAC. And a little WavPack when I need it. Not HALAC, as it isn't mature, but some performance figures do impress me.

> I also now made tests with lossy audio produced with lossyWav (my favorite). However, Monkey's gave much worse results than the others.
Of course it does, and it is very well known - see the "Hybrid/lossy" row of https://wiki.hydrogenaud.io/index.php?title=Lossless_comparison#Comparison_Table and https://wiki.hydrogenaud.io/index.php?title=LossyWAV#Codec_compatibility

But neither Monkey's nor FLAC were designed for lossy audio. Indeed, Josh Coalson listed "Lossy-compression" as one of the "Anti-goals" for FLAC.
Still, LossyWAV was (originally) designed for FLAC, out of the idea of exploiting a couple of properties of this format. Of course, that is one of the perks of FLOSS: once it is out in the open, you can exploit it even beyond what it was intended for.
Lossless / Other Codecs / Re: Monkey's Audio 11.00 (MULTI-THREADED ENCODING / DECODING)
Last post by genuine -
To be honest, I didn't like the truck and motorcycle analogy. As someone who has been working specifically with video codecs for years, I can say this with confidence: at least 95% of the total processing time in codecs is spent in stages such as prediction, filtering, hashing, entropy coding and encoding/decoding. Those who develop codecs know this very well.

These stages should already be present in HALAC. The remaining tiny time can be spent on other details. And a maximum of 5% speed loss required for these will not leave HALAC behind in any way. It tries to combine reasonable compression ratio with excellent speed balance. Maybe we will see more in the future. There hasn't been much movement or innovation in lossless audio for more than 20 years. At least we should thank HALAC for this.

I also now made tests with lossy audio produced with lossyWav (my favorite). However, Monkey's gave much worse results than the others. Even though I tried different parameters, the results did not change much.