
Recent Posts
General Audio / Tool to compare 24/96 WAVs?
Last post by efc78 -
Hello,
Could somebody please recommend a tool I can use to compare 24/96 WAVs?
Exact Audio Copy's "Compare WAVs" tool is exactly what I'm looking for, but it only works with 16-bit/44.1 kHz files. For example, if you rip a CD to the hard drive and compare the ripped file to the original file before it was burned, it will show you whether the samples are the same, let you know there are XXX samples missing at the start of the track, or tell you something like "different samples 1min5sec.345 to 1min11sec.234", etc.
I want to compare some 24-bit/96 kHz files ripped from an old DVD-Audio to the original files before they were burned, as I do not remember which parts I edited (EQ, volume changes, etc.) 10 years ago!
Thanks for any help.
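
For reference, here is a rough sketch of how such a sample-by-sample comparison might be scripted with Python, numpy and the soundfile library; the filenames are placeholders, and both files are assumed to share the same sample rate and channel layout:

```python
import numpy as np
import soundfile as sf

# Placeholder filenames: the original master and the file ripped back from DVD-A.
a, rate_a = sf.read("original.wav", dtype="int32")
b, rate_b = sf.read("rip.wav", dtype="int32")
assert rate_a == rate_b, "sample rates differ"

n = min(len(a), len(b))
if len(a) != len(b):
    print(f"length differs: {len(a)} vs {len(b)} samples")

# A sample counts as different if any channel differs.
mismatch = a[:n] != b[:n]
if mismatch.ndim > 1:
    mismatch = mismatch.any(axis=1)

idx = np.flatnonzero(mismatch)
if idx.size == 0:
    print("all compared samples are identical")
else:
    # Group differing samples into contiguous runs and report them as time ranges.
    breaks = np.flatnonzero(np.diff(idx) > 1)
    starts = np.concatenate(([idx[0]], idx[breaks + 1]))
    ends = np.concatenate((idx[breaks], [idx[-1]]))
    for s, e in zip(starts, ends):
        print(f"different samples {s / rate_a:.3f}s to {e / rate_a:.3f}s")
```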
Site Related Discussion / BIG Thank You to Peter Pawlowski
Last post by AH99 -
Dear Peter Pawlowski, here is my humble and extremely grateful tribute to you and your team for the great work you have done and are still carrying on with.

It was one of those nice evenings at home.
Every now and then, for whatever reason, my wife becomes fascinated by a piece. She plays it on the piano in its entirety or just parts of it; sometimes only fragments.
Last night it was Schubert: Impromptu No. 2 in A-flat major, Deutsch-Verzeichnis 935, Op. posth. 142
-   We started with Edwin Fischer's Abbey Road Studios recording from 1938 (Edwin Fischer is always a good place to start from)
-   … moved on to Andras Schiff (my personal all-time favorite)
-   … stayed for a while with Uchida's Philips recording
-   … had a try at Wilhelm Kempff
-   … and then rested at Radu Lupu's Decca recording
Always circling between the piano and f2k.

I turned 60 recently.
I began collecting music at the age of 20, starting analog and soon switching over to digital: first CDs, then rips on HD, NAS, and finally cloud storage.
The collection grew over time (my own rips from LPs and CDs/SACDs, bootlegs from concerts I attended, subscriptions to high-res download services …). Today, that’s in the vicinity of 20 TB.

Not using meta-tagging meant that I had to have an accurate file-path nomenclature to keep an overview.
Solely for the nomenclature of music, I formed an online workgroup of three people: a son of a famous Japanese conductor, a leading figure at today's dominant streaming service, and, last but not least, myself.
The development and, especially, the readjustment of the nomenclature initially led to a lot of rework. My first “larger” digitization effort was the Bach 2000 collection, edited by N. Harnoncourt, which I reworked at least 10 times. My family life and relationships suffered during that time.

Approximately 10 years ago I upgraded my hi-fi (now just an amp (Accuphase), speakers (Dynaudio), and a DAC (Accuphase)) to be able to handle high-res audio appropriately. That is still my setup today.

The f2k features that have impacted me most are:

-   F2k is a long-term solution
-   Your strict design principles, like the separation of the player (f2k) from its music library
-   The versatility and high degree of customizability of f2k
-   Not influencing the “sound” in any way
-   The ability to handle a wide and growing variety of formats
-   The ability to handle very large libraries
-   The continuous growth of the user base
-   Acceptance in high-end hi-fi circles

and many more

Peter, I want to expressly thank you for your long-term dedication and commitment (may I call it “obsession”?) to f2k.
Be aware that your work has influenced the lives of many people; it has certainly influenced mine.

All the best, Andreas


Dr. Andreas Helget
Heusteg 3a
91056 Erlangen
Tel.: +49 162 1733394
Andreas.Helget@gmx.de
www.linkedin.com/in/ah99/
Audio Hardware / Re: Aluminum ~ The New Vinyl
Last post by Porcus -
Quote:
Isn't it natural that any technology moves forward instead of going backwards?

Is that a synthesizer in your pocket or are you just happy to see me?  :D

I'm all for LPs as merchandise. For painting by hand. Mechanical watches. Double basses with catgut rather than steel strings.
Just don't pretend that these are more than ... you know.

Also, I'm all for digital audio, giclée prints, and having the time at hand on my mobile phone - and electric bass guitars, with or without a grunty fuzz box. Synth bass? Not that there's anything wrong with that!


Lossless / Other Codecs / Re: Test in progress: compressing near-quiet sine tones
Last post by Porcus -
Zero residual ... yeah, that is a point. 1 bit per sample - that's 1/16th - is pretty close to the minimum bitrate you see for FLAC in the chart above.
FLAC actually started out with room for more residual coding methods to be added later. I have a hunch that will never happen ...


The order alone is not enough to explain much. As for TAK, it uses a variable-order predictor, where the default -p2 can use an order of up to 32 if I have understood correctly, so it should then beat ALAC (it doesn't until the midrange) - however, that argument isn't foolproof, as TAK trades off order against speed and could miss cases where a higher order would work miracles.

Order 32 with FLAC on these files: adding "--lax -l 32" to flac -8pe changes all the files, but
* files up to and including 1768 Hz come out byte-for-byte the same size, and the same goes for 3536 Hz and for 10 kHz and up
* the rest become smaller - most dramatically, the 5 kHz file drops from 278 220 to 200 616 bytes.
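
For anyone who wants to poke at this themselves, a rough sketch of the procedure in Python plus the flac command-line tool; the sample rate, bit depth, duration, level and frequency list below are placeholders, not the exact files from this test:

```python
import os
import subprocess
import numpy as np
import soundfile as sf

RATE = 44100        # guesses, not the test's exact parameters
AMP = 256           # near-quiet: roughly 9 of 16 bits used

for freq in (1768, 3536, 5000, 10000, 20000):
    t = np.arange(RATE * 30) / RATE                       # 30 seconds of sine
    tone = (AMP * np.sin(2 * np.pi * freq * t)).astype(np.int16)
    wav = f"sine_{freq}.wav"
    sf.write(wav, tone, RATE, subtype="PCM_16")

    sizes = {}
    for label, extra in (("-8pe", []), ("-8pe --lax -l 32", ["--lax", "-l", "32"])):
        out = f"sine_{freq}_{label.replace(' ', '_')}.flac"
        # flac -8 -p -e is the same as -8pe; --lax -l 32 lifts the subset limit of order 12.
        subprocess.run(["flac", "-f", "-8", "-p", "-e", *extra, "-o", out, wav], check=True)
        sizes[label] = os.path.getsize(out)
    print(freq, "Hz:", sizes)
```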
Lossless / Other Codecs / Re: Test in progress: compressing near-quiet sine tones
Last post by Octocontrabass -
Quote:
This time the purpose was initially to test "wasted MSBs" - I mean, they don't have an "explicit facility for it", so how do they handle it?
Most of the data in a typical FLAC file is the residual. FLAC encodes the residual with Rice codes, which include a unary portion that indicates the most significant used bit in a depth-agnostic way, so FLAC is largely unaffected by changes in bit depth. ALAC works in much the same way, although ALAC uses adaptive Rice codes instead of the fixed per-partition Rice codes in FLAC.
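
As a toy illustration of that unary-plus-k-bits structure (the bit conventions here are simplified, not the exact FLAC or ALAC bitstream):

```python
def rice_encode(residual: int, k: int) -> str:
    """Encode one residual as a Rice code: unary quotient, then k remainder bits."""
    # Fold signed to unsigned first: 0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ...
    u = (residual << 1) if residual >= 0 else ((-residual << 1) - 1)
    quotient, remainder = u >> k, u & ((1 << k) - 1)
    rem_bits = format(remainder, f"0{k}b") if k > 0 else ""
    # The unary part grows with the folded value, so no fixed bit depth is assumed.
    return "1" * quotient + "0" + rem_bits

print(rice_encode(0, 2))   # "000"   - even a zero residual costs at least one bit
print(rice_encode(5, 2))   # "11010" - quotient 2 in unary, remainder 2 in two bits
print(rice_encode(-3, 2))  # "1001"  - folded to 5 first
```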

Quote:
And since it all turned out to be very sensitive to frequency, I got a totally new rabbit hole around my head.
FLAC and ALAC (and presumably TAK) use a linear predictor, which is a fancy way of saying each sample is predicted by adding up fixed multiples of the most recent samples. FLAC's predictor is constant throughout each block, whereas ALAC's predictor is adaptive and updates its coefficients throughout each block.
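
A minimal sketch of that idea, using FLAC's fixed order-2 predictor (2*x[n-1] - x[n-2]) as the coefficients; real LPC coefficients are fitted per block rather than hard-coded like this:

```python
import numpy as np

def residual(samples: np.ndarray, coeffs: np.ndarray) -> np.ndarray:
    """Predict each sample from the previous `order` samples and keep only the error."""
    order = len(coeffs)
    res = samples.copy()                 # warm-up samples are stored verbatim
    for n in range(order, len(samples)):
        # coeffs[0] weights the most recent sample, coeffs[1] the one before, etc.
        prediction = int(np.dot(coeffs, samples[n - order:n][::-1]))
        res[n] = samples[n] - prediction
    return res

x = np.array([0, 3, 6, 9, 12, 15, 14, 13], dtype=np.int64)
print(residual(x, np.array([2, -1])))    # mostly zeros while the signal is a straight line
```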

FLAC can use up to 32 of the most recent samples for prediction, but subset FLAC at 48kHz or lower is limited to the most recent 12 samples. ALAC can also use up to 32 of the most recent samples; I'm not sure if there are any situations where the encoder might be forced to use fewer.

While digging through FFMPEG to learn more about ALAC, I figured out why its files are so much smaller than FLAC despite being so similar: ALAC can use run-length encoding to compress long runs of zero residual to less than 1 bit per sample, and FLAC can't. (Rice codes are always at least one bit long.) With the 10kHz and 20kHz sines, the signal repeats often enough that the predictor is 100% accurate and the residual is entirely zero, so it's coded as a single long run. For lower frequency sines, there are long stretches of zero residual between each change in amplitude which are also compressed this way.
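
A back-of-the-envelope illustration of why that matters for long zero runs (the run-length escape here is made up for comparison, not ALAC's actual coding):

```python
def rice_cost_bits(residuals, k=0):
    # Every residual needs its unary terminator bit, so the cost is always >= len(residuals).
    total = 0
    for r in residuals:
        u = (r << 1) if r >= 0 else ((-r << 1) - 1)
        total += (u >> k) + 1 + k
    return total

run = [0] * 10_000                        # a long stretch of zero residual
print("per-sample Rice codes:", rice_cost_bits(run), "bits")   # 10000 bits even at k = 0
print("hypothetical run-length escape:", 8 + 32, "bits")       # marker + 32-bit run length (made-up sizes)
```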

I'm not sure how TAK encodes the residual. It must be doing something more clever than FLAC if it can achieve less than one bit per sample, but it's tough to guess what it could be...
Lossless / Other Codecs / Re: Test in progress: compressing near-quiet sine tones
Last post by kode54 -
The sensitivity to frequency may also be because there isn't really a great way to handle the LPC needing to run at a higher time resolution than the actual sample rate, to account for the aliasing (or anti-aliasing) that results from the signal not lining up perfectly with the samples.

There would therefore be an effective need for a resampler which is entirely in integer space, using a specific algorithm, and any tables would need to be pre-calculated using a consistent encoding process to produce consistent compressed data. This reproducible resampler would be used to calculate intersample values for the waveform, to determine if an intersample LPC could be used at an even multiple of the actual sample rate, to produce a more consistent waveform for the LPC to track, and thus produce a more compressible output file.

The resulting LPC may or may not need the same resampler to downsample its output by the same factor as the upsampling used to produce it, and would produce the signal that the final output residue corrects against. It should be determined whether skipping LPC output samples, or downsampling them with the same algorithm as the input, yields a finer residue that compresses better.

Naturally, this process would add extra complexity to the encode process, and should really only be used for highly tonal signals.
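
A rough sketch of the integer-only upsampling step such a scheme would need; the factor, tap values and fixed-point scale are arbitrary placeholders, the point being that everything stays in integer arithmetic so every encoder reproduces the same intersample values:

```python
import numpy as np

L = 2                                # upsampling factor (assumption)
SCALE = 1 << 12                      # fixed-point scale of the filter taps
# Placeholder half-band-style taps, pre-quantized to integers and summing to L * SCALE.
TAPS = np.array([-256, 0, 2304, 4096, 2304, 0, -256], dtype=np.int64)

def upsample_int(x: np.ndarray) -> np.ndarray:
    stuffed = np.zeros(len(x) * L, dtype=np.int64)
    stuffed[::L] = x                                   # insert L-1 zeros between samples
    y = np.convolve(stuffed, TAPS) // SCALE            # integer convolution, integer rescale
    return y[len(TAPS) // 2 : len(TAPS) // 2 + len(stuffed)]   # trim the filter delay

x = np.array([0, 100, 200, 100, 0, -100], dtype=np.int64)
print(upsample_int(x))               # original samples interleaved with interpolated ones
```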