
Topic: Choosing the lame competitor (Read 44239 times)

Choosing the lame competitor

I propose a pre-test of the prospective competitors for the lame mp3 slot in Roberto's upcoming test.  So far, I think the competitors are:

3.90.3 --alt-preset 128 --scale 1
3.96 --preset 128 --scale 1
3.96 -V5 --ath-sensitivity 1

It would be nice to get a handful more people to listen to all 12 samples that tigre, [proxima] and myself have used so far.  The last setting has not been tested enough to declare that it's definitely better than the others, although there's a good chance that it is.

I know Gabriel is trying to tune a version to fix the high-frequency ringing problem that --ath-sensitivity 1 helps with, but my purpose is to fill the slot in Roberto's test with the best current version and setting.

If there is enough interest expressed in this thread, I can set up the configuration files and other binaries needed to make the testing convenient.

ff123

Choosing the lame competitor

Reply #1
If enough people are interested, I will postpone the test another week.

I agree with ff123; it's a worthy pre-test, considering we should be testing MP3 at its best. The vorbis people already did a good job guaranteeing the best tuning gets tested, and I think the same should be done for MP3.

Best regards;

Roberto.

Choosing the lame competitor

Reply #2
Code: [Select]
3.96 -V5 --ath-sensitivity 1

I would prefer not having this one in the test.

First, it is a custom command line, and I think that the test should be representative of typical candidate usage. I do not think that custom command lines should be "typical usage".

Second, I'd like to increase the -V5 bitrate soon, because right now it is too close to V6 and too far from V4.

I would really prefer a plain --abr 128, with perhaps the addition of --scale 1 to match loudness of other competitors.

Choosing the lame competitor

Reply #3
Gabriel:
Quote
I'd like to increase the -V5 bitrate soon, because right now it is too close to V6 and too far from V4.

In case the result is that -V 5 --athaa-sensitivity 1 is the best ~128 kbps ABR/VBR setting, why don't you completely ditch -V 6, remap -V 5 --athaa-sensitivity 1 to "-V 6" and create the new -V 5 with increased bitrate as you're planning to do? This way the considerable amount of testing put into 3.96 at ~128 kbps ABR/VBR would stay useful.

I see your point though. Using/recommending long-ish custom commandlines is a step back and should be avoided. But IMO the better way is to map the best available commandline (quality wise) to a simple one in the next version/release.

Of course, if there's another (better?) promising way than changing --athaa-sensitivity to improve the ringing problems with plain -V 5, please tell us (or release some test compiles...).

Quote
I would really prefer a plain --abr 128, with perhaps the addition of --scale 1 to match loudness of other competitors.

Do you mean to help the move from --preset xxx to the (straightforward but identical) --abr xxx?
Let's suppose that rain washes out a picnic. Who is feeling negative? The rain? Or YOU? What's causing the negative feeling? The rain or your reaction? - Anthony De Mello

Choosing the lame competitor

Reply #4
Quote
Code: [Select]
3.96 -V5 --ath-sensitivity 1

I would prefer not having this one in the test.

First, it is a custom command line, and I think that the test should be representative of typical candidate usage. I do not think that custom command lines should be "typical usage".

Second, I'd like to increase the -V5 bitrate soon, because right now it is too close to V6 and too far from V4.

I would really prefer a plain --abr 128, with perhaps the addition of --scale 1 to match loudness of other competitors.

First, let me correct --ath-sensitivity 1 to --athaa-sensitivity 1

I discovered my error when I actually tried the command line.  The effect it has on frequencies 13 kHz and up in ATrain.wav is visible in spectral view (I cannot hear above 12 kHz in normal music), so [proxima]'s description of ringing artifacts which are reduced with this switch makes sense.

I understand the desire to stay away from custom command lines.  We might be inviting a legion of tweakers to step in and try their own custom lines.  But why not, as long as they are backed up by listening comparisons on multiple samples, preferably by multiple people?

In any case, I think it is quite likely that if a group of people were to test the modified vbr command line over a group of samples, they would prefer it to the abr settings.  Already, just averaging tigre's, [proxima]'s and my results using the plain-vanilla -V5 setting, we were on the verge (not quite 95% confidence) of declaring it the outright winner over the abr command lines.  Reducing the high-frequency ringing probably would have pushed it over the edge.  I think it's fair to say even with the limited testing that in terms of pre-echo, it's obvious that vbr is better.

And will Roberto's test really be representative of typical usage?  How many people use AoTuV?  I would argue to find the current best and use that.

ff123

Choosing the lame competitor

Reply #5
Quote
The effect it has on frequencies 13 kHz and up in ATrain.wav is visible in spectral view (I cannot hear above 12 kHz in normal music), so [proxima]'s description of ringing artifacts which are reduced with this switch makes sense.

Thanks for verifying this. I don't need to confirm the results myself, because I can hear the problems for sure, but I can understand how a spectral analysis is sometimes a good way to trust a listener. I really hate to do the listening test again because the ringing problems are not present in all 12 samples; I think that finding some more cases where ringing is reduced is a much more fruitful way of testing. In my ratings, some samples are really good for pre-echo but others are penalized for ringing. What do you think?
Quote
I understand the desire to stay away from custom command lines.  We might be inviting a legion of tweakers to step in and try their own custom lines.  But why not, as long as they are backed up by listening comparisons of multiple samples and preferably multiple people.

Good point. I'm the first person to stay away from "custom" command lines. But if I can hear problems, I think it's legitimate to use a tweaked line that can (at least) reduce some of them and point tweaking in the right direction.  When I reported the use of "--athaa-sensitivity 1" I included some ABC/HR logs, so I'm supporting my subjective perception with facts. I see no reason to stay away from such a command line; we need more testing, indeed.
Quote
Reducing the high-frequency ringing probably would have pushed it over the edge. I think it's fair to say even with the limited testing that in terms of pre-echo, it's obvious that vbr is better.

I agree 100% with you. On the other hand, I've already reported ringing problems even with -V4 (again reduced with --athaa-sensitivity 1), so the target should not be only 128 kbps or Roberto's test, but the VBR mid-low presets. I want to clarify that --athaa-sensitivity 1 does not resolve ringing entirely but considerably reduces it; only with a value of "6" is ringing inaudible for me, but then the bitrate grows too much. This magic switch is not definitive; my report should be taken as a (good) starting point, and I think this community will be proud to help Gabriel with further LAME tuning.
Quote
I know Gabriel is trying to tune a version to fix the high-frequency ringing problem that --ath-sensitivity 1 helps with

I really appreciate his effort; I'm completely available for testing if necessary.
WavPack 4.3 -mfx5
LAME 3.97 -V5 --vbr-new --athaa-sensitivity 1

Choosing the lame competitor

Reply #6
Quote
First, it is a custom command line, and I think that the test should be representative of typical candidate usage. I do not think that custom command lines should be "typical usage".

That would only be useful if all competitors were official releases with standard settings. If you look at the vorbis thread, you can see that this is not the case.

I personally would keep the V5 setting at its current size and rather remap V4 and V6. 128 kbps is a magical border, and it would be fine to have a VBR setting that averages around this value.


Just let them test that out, Gabriel. Maybe you can be convinced, and -V5 --athaa-sensitivity 1 will be the base for future presets.

EDIT: typo

Choosing the lame competitor

Reply #7
I want to test -V5, but if it is going to be altered in the near future I don't think there is any point.

Definitely a 3.96 setting, as it seems to perform at least as well as 3.90.3 in the recent test and for all the samples I have tested.


Choosing the lame competitor

Reply #8
Hopefully there are potential testers out there

Attached is a zipped file which contains the lame 3.90.3 and 3.96 executables, flac 1.1.0, the batch files to encode/decode, the abchr application, and the config files that abchr uses.

3.90.3 --alt-preset 128 --scale 1
3.96 --preset 128 --scale 1
3.96 -V5 --athaa-sensitivity 1

Unzip the attached file into a test directory and place the 12 sample flac files in that same directory.  Here are links to the sample files:

ATrain.flac
BachS1007.flac
BeautySlept.flac
Blackwater.flac
FloorEssence.flac
Layla.flac
LifeShatters.flac
LisztBMinor.flac
MidnightVoyage.flac
thear1.flac
TheSource.flac
Waiting.flac

After you've downloaded the flac files, go to the Bin directory and execute each of the encode/decode batch files.

Then fire up abchr and load the config files to start testing.
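For anyone who prefers scripting the steps above instead of the batch files, here is a minimal sketch of the decode/encode loop. The executable names (lame3903, lame396) and the output file naming are assumptions for illustration; the actual batch files in the zip may differ.

```python
SAMPLES = ["ATrain", "BachS1007", "BeautySlept", "Blackwater",
           "FloorEssence", "Layla", "LifeShatters", "LisztBMinor",
           "MidnightVoyage", "thear1", "TheSource", "Waiting"]

# The three competitors under test: (encoder executable, option list).
# Executable names are hypothetical; use whatever the zip provides.
SETTINGS = {
    "3.90.3_ap128": ("lame3903", ["--alt-preset", "128", "--scale", "1"]),
    "3.96_p128":    ("lame396",  ["--preset", "128", "--scale", "1"]),
    "3.96_V5as1":   ("lame396",  ["-V5", "--athaa-sensitivity", "1"]),
}

def build_commands(sample):
    """Return the decode + encode command lines for one sample:
    first decode the flac to wav, then encode with each setting."""
    wav = f"{sample}.wav"
    cmds = [["flac", "-d", f"{sample}.flac", "-o", wav]]
    for tag, (encoder, opts) in SETTINGS.items():
        cmds.append([encoder, *opts, wav, f"{sample}_{tag}.mp3"])
    return cmds
```

Each command list can then be passed to `subprocess.run` per sample, or simply printed to regenerate batch files.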

ff123

I suggest summarizing results for all 12 files in the following format for easier processing:

Code: [Select]
                3.90.3_ap128   3.96_p128     3.96_V5as1
ATrain             4.3            4.3            4.3
BachS1007          4.3            4.3            4.3
BeautySlept        4.3            4.3            4.3
Blackwater         4.3            4.3            4.3
FloorEssence       4.3            4.3            4.3
Layla              4.3            4.3            4.3
LifeShatters       4.3            4.3            4.3
LisztBMinor        4.3            4.3            4.3
MidnightVoyage     4.3            4.3            4.3
thear1             4.3            4.3            4.3
TheSource          4.3            4.3            4.3
Waiting            4.3            4.3            4.3


Note that the results files don't present the results in the same order every time.
For the sake of lame development, zip and attach the results files.

ff123

Choosing the lame competitor

Reply #9
Quote
Of course if there's another (better?) promising way than changing --athaa-sensitivity to improve the ringing problems with plain -V 5, tell it, please (or release some test compiles...).

This is telling me that there is a problem related to the ath level. Adjusting athaa-sensitivity might work around the problem, but it is only a workaround.

It is an interesting hint for me, but I do not think (yet) that it will be directly used to modify V5 in the next releases. I think that a 32 dB margin for ath adjustment is overkill for a 128 kbps encoding. So there is a problem related to this point (the ath level).

This is helping me, but once again I do not think that it is a good thing to use it as a basis for Roberto's test.

Quote
I want to test -V5 but if it is going to be altered in the near future I dont think there is any point.

Quite the contrary in fact. It might be altered based on test results only.

Choosing the lame competitor

Reply #10
I too have to agree with Gabriel. Using custom commandlines in the test would imho be a bad decision, because:

1. Using custom commandlines will make the test somewhat "theoretical", instead of reflecting normal usage of the encoder.

2. With 1. in mind, we can expect the results of this test to get wide news coverage on the net, as well as to get linked to often from other sites. So, in other words, the test results will be read by lots of average users. If custom commandlines get used, then the test results will NOT reflect what the average user who uses lame in a reasonable way (no custom commandlines) will get in normal conditions (the results will be theoretical and not reflect normal usage). If lame doesn't perform 100% optimally with standard commandlines, then this is a "bug" (sorry, cannot think of a better word) in the encoder, and should be reflected in the test too.

3. With 2. in mind (test results getting wide coverage), it will promote using custom commandlines among the public, which may be much more harmful to HA's goals than sacrificing a possible slight quality improvement.

Of course, one could now argue that vorbis should then have been represented by an official xiph release too, instead of a custom fork. But I think the situation with vorbis is slightly different, because xiph, contrary to lame, isn't that "fast" in fixing serious bugs and flaws. The HF-noise issue has been around for how long now - two years? - with xiph doing nothing about it, so creating another fork was the only way to improve vorbis. That is not the case with lame, which gets updates and bugfixes MUCH more often and regularly.

- Lyx
I am arrogant and I can afford it because I deliver.

Choosing the lame competitor

Reply #11
Quote
Do you mean to help the move from --preset xxx to the (straightforward but identical) --abr xxx?

Yes.
If the abr preset is used in the test, I'd prefer to see "--abr xxx" mentioned instead of "--preset xxx".

Choosing the lame competitor

Reply #12
Thanks for your answers, Gabriel.

Quote
Quote
I want to test -V5 but if it is going to be altered in the near future I dont think there is any point.

Quite the contrary in fact. It might be altered based on test results only.

Good to know. I think it's very motivating for testers to see that concrete development takes place based on their test results (at least it's motivating for me). Give us more of that. 


About what to include in 128kbps test I still disagree:

A main goal of HA is to find out "the truth about audio quality" (wasn't that the slogan of "another" forum?). If we get statistically valid results showing that 3.96 -V 5 --athaa-sensitivity 1 gives the best ~128 kbps mp3 files available right now, this setting will make it into HA's recommended lame settings. (BTW: one of HA's recommended lame settings right now is this: --abr 128 -h --nspsytune --athtype 2 --lowpass 16 --ns-bass -8 --scale 0.93.)

IMO it's all a matter of communication. In case -V 5 --athaa-sensitivity 1 makes it into the recommended settings and into the multiformat test, this can be regarded as a sign that lame is under development (= not "dead") and that HA's recommendations and the choice of commandlines/encoders for rjamorim's test are based on testing. The test can link to HA's recommended lame settings, and we can put some explanation there like
Quote
The commandline was chosen because of <blah>. In the next lame versions this commandline will be replaced by a simpler one based on co-operation between developers and testers. Check this thread regularly (or subscribe to it) for updates.

This should also cover Lyx's points: good documentation/communication should address most of your concerns.
Quote
But I think the situation with vorbis is slightly different, because xiph, contrary to lame, isn't that "fast" in fixing serious bugs and flaws.

Hehe - LAME isn't "fast" enough to fix this flaw before rjamorim's test starts either. (But lame is certainly as fast as one can generally expect.) "Someone" could quickly compile a 3.96.1 HA version that remaps -V 5 --athaa-sensitivity 1 to -V 5, so that this "simple commandline" version could be used in the test. IMO this would be comparable to using some anti-hf-boost tuned Vorbis fork. (Only a theoretical idea, DON'T do it (yet), please - John33 or whoever likes the idea.)

For me there are 2 main points why rjamorim's multiformat test is valuable:
- To find out what codec to use for DVD -> CD rips if sound quality is somewhat important (like live concerts etc.)
- To find out what portable to buy (a small flash-based one for running etc.) - is mp3 good enough, or do I want WMA/Vorbis/AAC support?
In any case I want to know the performance of the best available encoder/setting, especially if there are chances that the problems the custom commandline is a workaround for will be solved in future versions. I'd regard using a commandline that still suffers from these problems as a bias against mp3 that would make the test not very useful.

After all, it's rjamorim's decision I guess.
Let's suppose that rain washes out a picnic. Who is feeling negative? The rain? Or YOU? What's causing the negative feeling? The rain or your reaction? - Anthony De Mello

Choosing the lame competitor

Reply #13
Quote
Attached is a zipped file which contains the lame 3.90.3 and 3.96 executables, flac 1.1.0, the batch files to encode/decode, the abchr application, and the config files that abchr uses.

The zip file contains only the abchr executable and config files. Can you check it, please?

edit: thanks jianxin yan, ignore this post.
WavPack 4.3 -mfx5
LAME 3.97 -V5 --vbr-new --athaa-sensitivity 1

Choosing the lame competitor

Reply #14
proxima: I just downloaded the zip file, and it included all the required files except the original *.flac files.

Try again, please!

Note: the *.flac files should be placed in the same directory as config*.txt.

PS: I only got an index.php file when using the FlashGet download tool; I got the zip file by right-clicking the link and choosing "Save As".

Choosing the lame competitor

Reply #15
Quote
After all, it's rjamorim's decision I guess.



You guys want to kill me...

Choosing the lame competitor

Reply #16
My results so far (I will edit this post as I complete samples):

Code: [Select]
                3.90.3_ap128   3.96_p128     3.96_V5as1
ATrain             4.5            3.9            4.7
BachS1007          5.0            5.0            5.0
BeautySlept        4.4            4.2            4.7
Blackwater         4.4            4.8            4.4
FloorEssence       4.2            4.5            4.8
Layla              4.5            4.7            5.0
LifeShatters       5.0            5.0            5.0
LisztBMinor        5.0            5.0            5.0
MidnightVoyage     4.0            4.6            5.0
thear1             4.5            4.5            5.0
TheSource          5.0            5.0            5.0
Waiting            3.5            4.0            4.5


Code: [Select]
3.96_V5a 3.96_p12 3.90.3_a
 4.84     4.60     4.50  

---------------------------- p-value Matrix ---------------------------

        3.96_p12 3.90.3_a
3.96_V5a 0.024*   0.002*  
3.96_p12          0.328    
-----------------------------------------------------------------------

3.96_V5as1 is better than 3.96_p128, 3.90.3_ap128


ff123

3.96 -V5 wins for me.  The two abr settings are tied.


Found a minor bug in abchr.  When a new configuration file is opened after completing a test, the general comments aren't cleared.
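For comparison, a much cruder two-sided sign test can be run on the per-sample outcomes in the table above (this is not the analysis the tool above performs, which uses the rating magnitudes and therefore reports different p-values): 3.96_V5as1 beats 3.90.3_ap128 on 7 samples, loses on none, and ties on 5. A sketch:

```python
from math import comb

def sign_test(wins, losses):
    """Two-sided sign test: ties are dropped, and under the null
    hypothesis each non-tied sample is a fair coin flip."""
    n = wins + losses
    k = max(wins, losses)
    # P(X >= k) for X ~ Binomial(n, 0.5), doubled for the two-sided test
    tail = sum(comb(n, i) for i in range(k, n + 1)) * 0.5 ** n
    return min(1.0, 2 * tail)

# 3.96_V5as1 vs 3.90.3_ap128 in the table above: 7 wins, 0 losses, 5 ties
p = sign_test(7, 0)  # 0.015625, significant at the 5% level
```

Even this blunt test agrees with the tool's conclusion that the difference is significant.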

 

Choosing the lame competitor

Reply #17
My partial results:
Code: [Select]
               3.90.3_ap128   3.96_p128     3.96_V5as1
ATrain             2.5            1.7            3.5
BachS1007          5.0            2.5            3.7
BeautySlept        2.3            3.0            2.7
Blackwater         3.0            2.0            2.9
FloorEssence       2.5            1.5            3.0
Layla              2.8            1.5            3.5
LifeShatters       4.5            3.8            3.8
LisztBMinor        3.3            2.2            3.0
MidnightVoyage     2.3            2.5            3.5
thear1             3.3            3.0            4.0
TheSource          3.5            3.2            4.5
Waiting            2.5            2.0            3.0

Code: [Select]
3.96_V5a 3.90.3_a 3.96_p12 
30.50    26.00    15.50  

---------------------------- p-value Matrix ---------------------------

        3.90.3_a 3.96_p12
3.96_V5a 0.358    0.002*  
3.90.3_a          0.032*  
-----------------------------------------------------------------------

3.96_V5as1 is better than 3.96_p128
3.90.3_ap128 is better than 3.96_p128


3.96_V5as1 is usually better than 3.90.3, but sometimes it pays for slight residual ringing (especially with low-volume samples) and maybe for the stronger lowpass.

edit: test completed
WavPack 4.3 -mfx5
LAME 3.97 -V5 --vbr-new --athaa-sensitivity 1

Choosing the lame competitor

Reply #18
Partial results. I hope to do some more later or tomorrow.

Code: [Select]
                3.90.3_ap128   3.96_p128     3.96_V5as1
ATrain             2.0            2.5            3.0
BachS1007          5.0            5.0            5.0
BeautySlept        n/a            n/a            n/a
Blackwater         2.2            1.7            3.0
FloorEssence       1.5            2.2            3.0
Layla              2.5            1.8            2.8
LifeShatters       n/a            n/a            n/a
LisztBMinor        n/a            n/a            n/a
MidnightVoyage     2.2            1.0            3.0
thear1             n/a            n/a            n/a
TheSource          n/a            n/a            n/a
Waiting            n/a            n/a            n/a

Choosing the lame competitor

Reply #19
Album bitrate table:

Album bitrates are calculated on an album basis: the total album size in bytes is divided by the total album time in seconds, then multiplied by 0.008 to convert bytes per second to kilobits per second.

The average and standard deviations at the bottom of the bitrate table are calculated using the album bitrates (treating each album as an equal contributor).
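The album-bitrate calculation described above can be expressed as a small helper. This is a sketch; the example figures in the comment are illustrative, not taken from the thread.

```python
def album_bitrate_kbps(total_bytes, total_seconds):
    """Album bitrate: total album size in bytes divided by total
    playing time in seconds, times 0.008 (1 byte/s = 0.008 kbit/s)."""
    return total_bytes * 0.008 / total_seconds

# Illustrative numbers: a 45-minute album stored in ~43.9 MB
rate = album_bitrate_kbps(43_875_000, 45 * 60)  # ~130 kbps
```

The table's bottom rows are then just the mean and standard deviation of these per-album values, with each album weighted equally.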

Lame 3.96 -V5 --athaa-sensitivity 1
WMA9 standard, quality VBR 75
WMA9 standard, bitrate VBR 128
Musepack 1.14 beta --quality 4.15 --xlevel
AoTuV beta 2 -q 4.35

Code: [Select]
1.  Beatles, Abbey Road                       rock/pop
2.  Sarah McLachLan, Fumbling Into Ecstasy    pop
3.  Radiohead, OK Computer                    rock
4.  Green Velvet, Green Velvet                electronic
5.  Mel Torme, Great American Songbook        jazz/orchestra
6.  Moby, Play                                electronic
7.  Roger Waters, Amused to Death             rock
8.  Slipknot, Slipknot                        metal
9.  Cowboy Junkies, Trinity Session           country/blues
10. Symphonie Fantastique                     orchestra
11. Dire Straits, Brothers in Arms            rock
12. Mystikal, Let's Get Ready                 hip-hop
13. Portishead, Dummy                         trip-hop
14. Metalmeister, A Metal Blade Compilation   metal
15. Eagles, Hotel California                  rock
16. Jazz at the Pawnshop                      jazz
17. Donald Fagen, The Nightfly                pop/rock
18. Peter Kater, Gateway                      new age

                wma9   wma9   mpc    mpc    ogg    ogg
Album    lame   q75    128vbr 4      4.15   4      4.35

1.       146    127    130    125    131    120    128
2.       119    112    129    117    123    118    125
3.       135    120           128    134    127    135
4.       129    129    129    128    135    125    133
5.       107    116    129    111    116    117    124
6.       130    124           114    119    116    123
7.       134    118    129    117    122    118    126
8.       134    120           117    124    113    121
9.       137    150    129    140    146    127    135
10.      118    88            107    112    114    122
11.      127    124    130    121    126    120    128
12.      128    124           125    130    119    127
13.      101    81             86     90    103    108
14.      137    120           128    135    125    133
15.      137    119    129    126    132    124    132
16.      140    120           133    140    119    122
17.      133    137           136    143    125    133
18.      145                         149           144

Ave      130    121    129    121    128    120    128
Std dev   12     17      0     13     14      6      8

Choosing the lame competitor

Reply #20
Averaging [proxima]'s, gecko's partial results, harashin's, polpo's, and mine together:

Code: [Select]
3.96_V5  3.90.3   3.96     
 4.11     3.69     3.55  

---------------------------- p-value Matrix ---------------------------

        3.90.3   3.96    
3.96_V5  0.004*   0.000*  
3.90.3            0.278    
-----------------------------------------------------------------------

3.96_V5 is better than 3.90.3, 3.96


For the bitrate tests, I'm going to add mpc 1.14 beta at --quality 4 --xlevel using the same albums to make sure that I get the same average bitrates as previous album bitrate tests.

Edit:  Averaged in harashin's results
Edit1:  Averaged in polpo's results

Choosing the lame competitor

Reply #21
I've finished testing all 12 samples.
Code: [Select]
               3.90.3_ap128   3.96_p128     3.96_V5as1
ATrain             3.0            3.0            4.0
BachS1007          3.5            4.5            3.0
BeautySlept        1.5            1.8            3.8
Blackwater         4.3            5.0            4.0
FloorEssence       3.7            3.5            4.5
Layla              1.0            2.0            3.0
LifeShatters       5.0            4.0            4.0
LisztBMinor        5.0            4.5            4.0
MidnightVoyage     3.5            3.0            5.0
thear1             4.0            4.5            4.0
TheSource          3.0            3.0            4.0
Waiting            1.0            2.0            2.8

avg.               3.21           3.40           3.84

results
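As a sanity check, the per-codec averages in the table above can be reproduced with a simple column mean over the 12 samples (ratings transcribed from the post):

```python
from statistics import mean

# Ratings from the table above; columns are
# 3.90.3_ap128, 3.96_p128, 3.96_V5as1
ratings = {
    "ATrain":         (3.0, 3.0, 4.0),
    "BachS1007":      (3.5, 4.5, 3.0),
    "BeautySlept":    (1.5, 1.8, 3.8),
    "Blackwater":     (4.3, 5.0, 4.0),
    "FloorEssence":   (3.7, 3.5, 4.5),
    "Layla":          (1.0, 2.0, 3.0),
    "LifeShatters":   (5.0, 4.0, 4.0),
    "LisztBMinor":    (5.0, 4.5, 4.0),
    "MidnightVoyage": (3.5, 3.0, 5.0),
    "thear1":         (4.0, 4.5, 4.0),
    "TheSource":      (3.0, 3.0, 4.0),
    "Waiting":        (1.0, 2.0, 2.8),
}

# Column-wise means, rounded to two decimals as in the post
averages = [round(mean(col), 2) for col in zip(*ratings.values())]
# [3.21, 3.4, 3.84] - matching the avg. row above
```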

Choosing the lame competitor

Reply #22
I added mpc to the bitrate table.  Both mpc and wma9 seem to be encoding a bit on the low side; maybe my album list isn't quite as stressful as it could be.

ff123

Choosing the lame competitor

Reply #23
Just one thing, for reflection and not criticism.
I've encoded various tracks today with the VBR command line. The average bitrate is around 128 kbps (sometimes less, sometimes more). That's OK. I can't seriously test anything with the hardware setup at my disposal.
But I noticed, as expected, that the VBR command line produces very low-bitrate encodings on some quiet tracks. As an extreme example, I have a stereo piano recording encoded at 59 kbps! After MP3Gain processing, played on my portable player, the sound is crap.
On more common tracks, the same problem appears on some note decays, or at the beginning/end of a file.

It's perfectly understandable that lame VBR performs better on difficult tracks such as the samples tested by users here, and that's good news. But on less immediately difficult passages, VBR is a contestable choice.

Anyway, I'll test VBR vs ABR next time. The results posted here are really promising.




For example:

Code: [Select]
Bitrates:
----------------------------------------------------
32     |||||||||||                                     6.7%
40     ||||||||||||||||                                9.8%
48     |||||||||||||||||||||||||||||||                18.2%
56     ||||||||||||||||||||||||||||||||||||||||       23.3%
64     |||||||||||||||||||||||||||                    16.1%
80     |||||||||||||||||||||||||||||||||||||          21.9%
96     |||||                                           3.3%
112     |                                               0.6%
128                                                     0.1%
----------------------------------------------------

Type                : mpeg 1 layer III
Bitrate             : 59
Mode                : joint stereo
Frequency           : 44100 Hz
Frames              : 7170
ID3v2 Size          : 0
First Frame Pos     : 0
Length              : 00:03:07
Max. Reservoir      : 66
Av. Reservoir       : 25
Emphasis            : none
Scalefac            : 1.2%
Bad Last Frame      : no
Encoder             : Lame 3.96

Lame Header:

Quality                : 47
Version String         : Lame 3.96
Tag Revision           : 0
VBR Method             : vbr-old / vbr-rh
Lowpass Filter         : 17000
Psycho-acoustic Model  : nspsytune
Safe Joint Stereo      : no
nogap (continued)      : no
nogap (continuation)   : no
ATH Type               : 4
ABR Bitrate            : 32
Noise Shaping          : 1
Stereo Mode            : Joint Stereo
Unwise Settings Used   : no
Input Frequency        : 44.1kHz

--[ EncSpot 2.1 ]--[ http://www.guerillasoft.com ]--
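As a sanity check, the 59 kbps average reported by EncSpot is consistent with a weighted mean of the frame-bitrate histogram above:

```python
# Frame-bitrate distribution from the EncSpot output above (kbps: percent)
histogram = {
    32: 6.7, 40: 9.8, 48: 18.2, 56: 23.3, 64: 16.1,
    80: 21.9, 96: 3.3, 112: 0.6, 128: 0.1,
}

# Weighted mean over all frames (ignoring bit-reservoir effects)
avg = sum(kbps * pct for kbps, pct in histogram.items()) / sum(histogram.values())
# ~59.6 kbps, matching the reported average bitrate of 59
```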

Choosing the lame competitor

Reply #24
Quote
But I noticed, as expected, that the VBR command line produces very low-bitrate encodings on some quiet tracks. As an extreme example, I have a stereo piano recording encoded at 59 kbps! After MP3Gain processing, played on my portable player, the sound is crap.
On more common tracks, the same problem appears on some note decays, or at the beginning/end of a file.


I added AoTuV beta 2 -q 4 to the bitrate table.

From the bitrate table, wma quality VBR 75 might have similar issues.  Of the vbr codecs tested, it has the widest range of bitrates on an album basis.

It looks like mpc, wma, and ogg are pretty evenly matched on average, but lame -V5 looks like it will turn out to have an 8-9 kbit/s advantage.

ff123