
Recent Posts
1
CD Hardware/Software / No Extended track information or Year info, in MusicBrainz plugin, for EAC v.1.6
Last post by Krusty The Clown -
When I'm using the MusicBrainz plugin for EAC v1.6, I have never gotten "Extended track information" for any CDs.
In the beginning, I thought I was just unlucky, and that the CDs I was trying to rip simply hadn't had any "Extended track information" added. But now, after some years of using EAC and many popular CDs (AC/DC, Underworld etc.) with still no "Extended track information", I can only think of two things going on... 1. Either the MusicBrainz system only has the general artist and title (the year is btw also missing on every rip I try), plus track titles; or 2. I have set up EAC a bit wrong, and MusicBrainz would provide me with all the info (year + "Extended track information" etc.) if my EAC setup were corrected.
Thanks for any help.
2
MP3 / Re: LAME -V3 vs -V4?
Last post by shadowking -
Those settings usually sound very good or even perfect.
Personally, I see it like this: with today's storage there is no reason to go lower than V3 or V2.5 when
you want a smaller file size.

For me it takes -V1 ~ 224 kbps to get a robust all-round quality. Also -V2 --vbr-old ~ 190 kbps
works well for some reason.

Given this, it's not an issue to go -V1 as it is still a smaller file size than the standard 256 / 320.
At higher bitrates (200+) I am not sure the files are so 'disposable'. I think it can work even
as one's only archive.
4
MP3 / Re: LAME -V3 vs -V4?
Last post by Porcus -
There is a reason not to choose a low "quality" even if one cannot hear the difference in testing: you test ten files, you convert a thousand, there is quite some chance that one of the 990 is a bit different.

The solution is simple: keep the originals. Then you haven't once and for all "chosen" the low bitrate. Consider the lossy files disposable. If one of the 990 files sounds bad ... replace it. If enough of them do ... run the whole procedure at a higher bitrate.

Digital music stores can have all sorts of reasons to choose 256/320 kbit/s - high numbers look good and therefore they sound good to customers who hear with their eyes - but one very good reason to go overkill is that you don't want to pay a big listening test panel to find a transparency level for each file.
5
3rd Party Plugins - (fb2k) / Re: Eole, a SMP/ColumnUI theme
Last post by Reagan -
Whenever I press "Play randomly," I can only filter by tracks, artist, and genre. Whenever I click album, I get this crash screen from the Monkey Panel thing (I'm not very fluent with foobar).

This is the response I get:
Error: Spider Monkey Panel v1.6.1 (CoverPanel: CoverPanel v1.2.3b20 by Ottodix)
RunContextCommandWithMetadb failed:
handle argument is invalid

File: WSHcoverpanel.js
Line: 984, Column: 7
Stack trace:
  on_mouse_rbtn_up@WSHcoverpanel.js:984:7

Also I'm not completely sure if this is a problem with the theme or something else... I apologize if it's not.
6
Lossless / Other Codecs / Re: Lossless codec comparison - part 3: CDDA (May '22)
Last post by bryant -
@ktf  I would like to echo the others and shout out a huge thank you for these tests; it is great to have such a complete and up-to-date resource available! In particular I appreciate your taking seriously my suggestion to represent WavPack as essentially two separate codecs (it certainly does make the results easier to understand).

Quote
* In line with the previously posted results, WavPack -x4h and -x4hh decode faster (if only slightly so) than -h and -hh. Less data to wvunpack?
Considering WavPack's -x4 modes decompressing faster, @bryant was surprised to see that too. Because of that, I won't dare give a hypothesis :)

I have now verified what’s going on with these. Without the extra mode, WavPack makes a fixed number of decorrelation passes over the audio, and this number increases with mode (e.g., "very high" is 16 passes). The -x4 mode uses a recursive search method on each frame of audio to generate an optimum sequence of filters up to the maximum number of passes for that mode. Usually each additional pass will improve the results, but in some cases we reach a plateau where an additional pass provides no improvement. In those cases we terminate the process and the resulting frame will decode with fewer decorrelation passes.

One obvious place this occurs is compressing silence (where any decorrelation is wasteful), and this explains why the effect is more pronounced in the previous multichannel test, where long runs of silence in some tracks are probably common. In the CDDA test the effect is less pronounced, but a quick check showed it was greater in classical and solo instrumental tracks, and particularly on the Jeroen van Veen solo piano album. I had something similar handy and tried compressing that with -hh and -hhx4, and sure enough got a 10% difference in decoding speed. To verify that this came from truncated decorrelator passes I examined the frames statistically and found that over 40% used fewer than the maximum 16 passes, and in fact almost 15% used 10 or fewer (which is the fixed count for "high" mode). So that explains that...   :D
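The early-termination idea described above can be sketched in a toy form (hypothetical names and a trivial first-order filter, not WavPack's actual decorrelators): apply passes up to the mode's maximum, but stop as soon as an extra pass no longer improves the result. Frames that terminate early then decode with fewer passes, hence faster.

```python
def cost(residual):
    """Stand-in cost metric: sum of absolute residual values."""
    return sum(abs(x) for x in residual)

def first_order_pass(samples):
    """Toy decorrelation pass: first-order difference."""
    return [samples[0]] + [samples[i] - samples[i - 1] for i in range(1, len(samples))]

def decorrelate(samples, max_passes):
    """Apply up to max_passes, stopping at a plateau (no further gain)."""
    best = samples
    passes_used = 0
    for _ in range(max_passes):
        candidate = first_order_pass(best)
        if cost(candidate) >= cost(best):   # plateau reached: terminate early
            break
        best = candidate
        passes_used += 1
    return best, passes_used

# Silence plateaus immediately; a ramp benefits from a couple of passes.
_, n_silence = decorrelate([0] * 16, max_passes=16)
_, n_ramp = decorrelate(list(range(16)), max_passes=16)
print(n_silence, n_ramp)  # prints: 0 2
```

On silence every pass is wasted, so the search bails out at zero passes, matching the observation that silent material decodes with fewer decorrelation passes.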
7
MP3 / Re: LAME -V3 vs -V4?
Last post by DVDdoug -
If you're not concerned about file size you can go with a "higher quality" setting, even V0 or 320 kbps CBR. But we can't really say "higher quality" if we can't hear a difference.

More compression (smaller files) is the only reason to choose a higher V-number.
8
General Audio / Re: Downsampling to 44.1 - integer vs non-integer ratio
Last post by kode54 -
What I meant was that the encoder is likely to produce the same output given the same input, regardless of the compiler used. Most integer operations are guaranteed a given precision.

No guarantees that a lossy encode will produce anything even close to the original, and any decoder you use could produce different output depending on the compiler used.

I was referring to a fixed precision encoder producing fairly consistent results (compressed file) from the same input, regardless of the machine architecture used to compile or run the encoder. A fixed precision decoder should also produce consistent PCM output from the same lossy input data, regardless of the machine running it. No guarantees about round trip accuracy, though.
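The fixed-precision point can be illustrated with a toy Q15 fixed-point multiply (illustrative code, not any particular codec's): because the operation is defined purely in terms of integer arithmetic, every correct implementation produces bit-identical results, whereas floating-point rounding can vary with compiler and flags.

```python
Q = 15  # fractional bits in Q15 fixed-point

def to_q15(x):
    """Convert a float to Q15 fixed-point (integer) representation."""
    return int(round(x * (1 << Q)))

def q15_mul(a, b):
    """Q15 multiply, defined entirely by integer operations:
    the result is the same on any machine and with any compiler."""
    return (a * b) >> Q

a = to_q15(0.5)    # 16384
b = to_q15(0.25)   # 8192
print(q15_mul(a, b))  # prints: 4096, i.e. 0.125 in Q15
```

Since every intermediate value is an exact integer, there is no rounding mode or precision setting for a build environment to change, which is what makes fixed-precision encoders and decoders reproducible across architectures.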
9
Lossless / Other Codecs / Re: Lossless codec comparison - part 3: CDDA (May '22)
Last post by TBeck -
Well, I just looked at the source code (I am a bit tired, errors likely), and it seems as if -7 translates to
-a
-b
-l (LSB check, wasted bits)
1023 (1024?) predictors
Frame size: 20480 samples (per channel)

What might be different is the option -g, which is 0 by default and set to 5 (the possible maximum) by -7.

It is called 'Block Switching Level'. I once knew what this means; not perfectly any more, but I seem to remember that it controls how hard the encoder evaluates whether it is advantageous to split the frame into 2 or more sub-frames with individual parameters, e.g. predictors. With a frame size of 20480 it is definitely necessary to do this to achieve good results, because the signal parameters are likely to change within 464 ms of audio data. I suppose -g0 means "turn it off": a bad choice that could perfectly explain the comparatively bad performance of the second-highest setting.
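The block-switching idea described above can be sketched roughly (hypothetical names and a stand-in cost function, not ALS's actual code): recursively check whether splitting a frame into two sub-frames, each coded with its own parameters, is cheaper than coding the frame whole, down to a maximum split depth.

```python
def encoded_size(frame):
    """Stand-in cost: rough bytes for a first-order residual, plus header overhead."""
    residual = [frame[0]] + [frame[i] - frame[i - 1] for i in range(1, len(frame))]
    return sum(max(1, abs(r).bit_length()) for r in residual) / 8 + 4  # +4: per-block header

def best_split(frame, level):
    """Recursively choose whole-frame coding or a split, whichever is smaller.
    level is the maximum remaining split depth (level 0 = no splitting allowed)."""
    whole = encoded_size(frame)
    if level == 0 or len(frame) < 2:
        return whole
    mid = len(frame) // 2
    split = best_split(frame[:mid], level - 1) + best_split(frame[mid:], level - 1)
    return min(whole, split)

# A frame whose statistics change halfway benefits from being split:
frame = [0] * 512 + list(range(0, 5120, 10))
print(best_split(frame, level=5) <= best_split(frame, level=0))  # prints: True
```

With the depth forced to 0 (the -g0 case) the encoder must code the whole long frame with one parameter set, which is exactly why a large frame size combined with no block switching would hurt on non-stationary signals.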
10
Lossless / Other Codecs / Re: Lossless codec comparison - part 3: CDDA (May '22)
Last post by Porcus -
Mp4Als, because it is so similar to TAK, but can use up to 1024 predictors (if I remember this right) compared to TAK's 160. I found 16 files where Mp4Als is at least 0.5 percent stronger than TAK. In any case there is also a clear advantage of TAK's preset 4 over 3, which use 160 vs. 80 predictors. Imho strong evidence that the advantage of Mp4Als is based upon the higher predictor count.

Yes, but TAK -p3 beats ALS' -a -b -o1023, where -o1023 is the predictor order ... or the max predictor order. That is the second-to-highest ALS setting used in the test.
So either the high prediction order is not the explanation, or, in case -o1023 only sets the maximum, it could be that ALS selects lower orders. The "-a" switch allows for adaptive predictor order, but I don't know how that matters.

It seems that the max frame size is 65536. Maybe that, in combination with the predictor order, is what makes the "-7" performance, both for good and bad?