LAME / CPU Usage

Hi all, hope you can help - quick question:

I'm using LAME 3.96.1 via Foobar2000 on Windows XP with the --alt-preset insane setting. The CPU is running at 100% constantly; is this good or bad? Does running at full CPU capacity increase the chance of encoding errors?

Many thanks in advance.

LAME / CPU Usage

Reply #1
100% CPU time for the lame process just means it is encoding as fast as it can, which decreases the encoding time. That's it.
.halverhahn


LAME / CPU Usage

Reply #3
Sorry to interfere!
But I was wondering: is there any special reason why you use Lame 3.96.1 instead of Lame 3.97?

Isn't Lame 3.97 the best Lame so far?

LAME / CPU Usage

Reply #4
Sorry to interfere!
But I was wondering: is there any special reason why you use Lame 3.96.1 instead of Lame 3.97?

Isn't Lame 3.97 the best Lame so far?


I use 3.96.1 for two reasons:

1. 3.97 was in beta for ages, and I was waiting for it to become an official release - it appears that it is now (I hadn't checked for a while).

2. 3.97 is only available as source code; I want to use LAME with other programs/frontends (e.g. Foobar, RazorLame, etc.) but I don't know how to utilise the source code properly. 3.96.1 came with a lovely little .exe that worked fine (and easily!)...

Now that 3.97 is an official release I would definitely prefer to use it, but source code is no use to me with my current (lack of) knowledge.

Any help from anyone would be greatly appreciated.

Thanks again
ed

LAME / CPU Usage

Reply #5
There are several variations of lame.exe available on Rarewares here.
Thunder Bolt

Strikes where least expected!

LAME / CPU Usage

Reply #6
Isn't Lame 3.97 the best Lame so far?

Not to everybody.
Lame 3.97 is fine in many respects, but it has a 'sandpaper noise' issue which is not extremely rare in practice.
Lame 3.98a11 has largely improved on this, and also on the quality of the VBR method.
Of course it's not final yet.
And looking at the old 3.96 or 3.90, there's no real proof that they're clearly outperformed.
After all, which version to use is a matter of taste and belief, and of which samples are in focus. 'Newer' does not necessarily mean 'better'.

Anyway, I put a lot of hope into the upcoming 3.98. It's not perfect yet, but it's absolutely on the right track.
At least at high bitrates, it is already one of the best Lame candidates for practical usage.
In fact, as the OP uses api (--alt-preset insane), with this setting I'd prefer 3.98a11 as the best Lame version (though with CBR 320, Lame version differences don't play an essential role).
lame3995o -Q1.7 --lowpass 17

LAME / CPU Usage

Reply #7

Isn't Lame 3.97 the best Lame so far?

Not to everybody.
Lame 3.97 is fine in many respects, but it has a 'sandpaper noise' issue which is not extremely rare in practice.
Lame 3.98a11 has largely improved on this, and also on the quality of the VBR method.
Of course it's not final yet.


Do the LAME 3.98 alphas use the new VBR routine (i.e. --vbr-new) by default for ALL -V presets? Or only from -V5 and above?

I'm especially interested in -V6

LAME / CPU Usage

Reply #8
...Do the LAME 3.98 alphas use the new VBR routine (i.e. --vbr-new) by default for ALL -V presets? Or only from -V5 and above?

I'm especially interested in -V6

AFAIK 3.98 defaults to --vbr-new in general.
Just to make sure, I tried -V6 and -V6 --vbr-new: the results are bit-identical.
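
If you want to reproduce the check, here is a rough Python sketch (it assumes a lame binary on your PATH and some local test.wav; the file names are just placeholders):

import filecmp
import subprocess

WAV = "test.wav"  # placeholder input file

# Encode the same source twice: plain -V6, and -V6 with --vbr-new forced.
subprocess.run(["lame", "-V6", WAV, "v6_default.mp3"], check=True)
subprocess.run(["lame", "-V6", "--vbr-new", WAV, "v6_vbrnew.mp3"], check=True)

# shallow=False forces a byte-by-byte comparison of the two encodes.
same = filecmp.cmp("v6_default.mp3", "v6_vbrnew.mp3", shallow=False)
print("bit-identical" if same else "different")
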
lame3995o -Q1.7 --lowpass 17

LAME / CPU Usage

Reply #9
Better to ask here than start a new topic.

Is there a Lame version that is optimized for Intel Core2Duo?

LAME / CPU Usage

Reply #10
Is there a Lame version that is optimized for Intel Core2Duo?

This would mean a multithreaded encoder.
There have been efforts to do this, but AFAIK these builds never made use of the bit reservoir (which lets a frame borrow spare bits from earlier frames, something that is hard to coordinate when frames are encoded in parallel).


LAME / CPU Usage

Reply #12

Better to ask here than start a new topic.

Is there a Lame version that is optimized for Intel Core2Duo?

There's the LAME MT project. Don't know how well it works though.

On speed: Really well
On quality: Really badly

It's much better, and easier, to run two lame instances in parallel. This can be done on Windows with EAC, or on Linux with a shell script.
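
A rough sketch of the same idea, in Python rather than a shell script (lame on the PATH and the file names are just assumptions):

import subprocess
from concurrent.futures import ThreadPoolExecutor

# Placeholder list of WAV files from a rip.
wavs = ["track01.wav", "track02.wav", "track03.wav", "track04.wav"]

def encode(wav):
    # One lame process per file, using --alt-preset insane as in the OP's setup.
    mp3 = wav.rsplit(".", 1)[0] + ".mp3"
    subprocess.run(["lame", "--alt-preset", "insane", wav, mp3], check=True)
    return mp3

# Two workers -> two lame processes running at once, one per core on a dual-core CPU.
with ThreadPoolExecutor(max_workers=2) as pool:
    for mp3 in pool.map(encode, wavs):
        print("done:", mp3)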

LAME / CPU Usage

Reply #13
cabbagerat, I just discovered that and was about to post, but you beat me to it.
It made me think of the 'simultaneous files' function that the Monkey's Audio program has.

I tried it with 2 instances of BonkEnc running.
It worked very well.

I would like to see this function in BonkEnc.

LAME / CPU Usage

Reply #14
It's much better, and easier, to run two lame instances in parallel. This can be done on Windows with EAC, or on Linux with a shell script.

But this can result in heavily fragmented files, because the encoder cannot reserve space for the whole file when it starts to write.

LAME / CPU Usage

Reply #15
It's much better, and easier, to run two lame instances in parallel. This can be done on Windows with EAC, or on Linux with a shell script.
But this can result in heavily fragmented files, because the encoder cannot reserve space for the whole file when it starts to write.


Actually, it's not impossible for an encoder to do this, though I believe most, if not all, do not. 

Yes, starting small and continually extending the file can result in heavily fragmented files, assuming the filesystem/filesystem-driver isn't designed to resist fragmentation.  So, with FAT32, yes.  With ext3, however, the filesystem is designed to avoid heavy fragmentation in this instance.

An encoder could take advantage of the pre-allocation feature of some filesystems, such as NTFS (using appropriate CreateFile flags, and assuming a filesystem that is not nearly full), by allocating enough space for a worst-case-scenario file size at the start of encoding, and then trimming the unused end of the file off at the end of encoding.  The worst-case scenario would depend on the encoder, however:  e.g. for a lossy encoder it would depend on the bit rate settings/strategy, but for a lossless encoder it would be the entire size of the WAV PCM data plus a bit more for the header and maximum metadata size.

In this scenario NTFS, for example, would look for a contiguous space to create the entire file even before you started writing to it.
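
To illustrate, a rough allocate-then-trim sketch in Python (encode_fn and the worst-case estimate here are made-up placeholders, not what any real encoder does):

import os

def encode_with_preallocation(wav_path, mp3_path, encode_fn):
    # encode_fn is a hypothetical callable: it writes the encoded data into the
    # open file object it is given and returns the number of bytes written.
    # Use the source WAV size as a generous worst-case bound for the output
    # (a real encoder would work this out from its bitrate settings).
    worst_case = os.path.getsize(wav_path)

    with open(mp3_path, "wb") as out:
        # Extend the file to the worst-case size before writing anything.
        # On NTFS this is when the filesystem looks for contiguous space;
        # on Linux, os.posix_fallocate(out.fileno(), 0, worst_case) is the
        # closer equivalent, since truncate alone just makes a sparse file.
        out.truncate(worst_case)
        out.seek(0)
        written = encode_fn(out)  # write the actual encoded data from the start
        out.truncate(written)     # trim the unused tail when encoding is done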

Whether or not this would result in any real-world improvement is left as an exercise to the reader.

I don't think there would be a direct performance benefit to doing so right out of the box, but the fragmentation avoidance would be useful, say, in a program designed for a ripping company.

-brendan