
specifying mono with Lame presets

I'm encoding a CD that has a mono signal. Since it's from a CD, it's two-channel, 16-bit, 44.1 kHz, but the two channels are bit-for-bit identical (it was recorded from a single input).

If I encode using LAME 3.96.1 with -V4, the bitrate is about 20% higher than if I specify -V4 -m m to indicate that the input signal is mono. (With -V2 there's some difference, but very little, because of the 128 kbps floor.) I find this bitrate difference between mono and the default j-stereo a little curious, and perhaps indicative of an inefficiency in LAME's joint-stereo algorithm, but that's not my main question.

I think I may as well let LAME know that the input is mono, but what I'm wondering is this: by specifying mono, am I messing up the preset at all? Or should there be no problem with this? Thanks much.
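
For anyone who wants to reproduce the comparison, here is a minimal sketch, assuming lame is on your PATH and using a hypothetical input.wav as a stand-in for the ripped track:

```python
# Hedged sketch: run the two LAME invocations discussed above and
# compare output sizes. "input.wav" and the output names are
# hypothetical; "lame" is assumed to be on PATH.
import os
import subprocess

subprocess.run(["lame", "-V4", "input.wav", "jstereo.mp3"], check=True)
subprocess.run(["lame", "-V4", "-m", "m", "input.wav", "mono.mp3"], check=True)

js, mo = os.path.getsize("jstereo.mp3"), os.path.getsize("mono.mp3")
print(f"joint stereo: {js} bytes, mono: {mo} bytes ({js / mo - 1:.0%} larger)")
```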
God kills a kitten every time you encode with CBR 320

specifying mono with Lame presets

Reply #1
It's not a mono input though, unless you merge the two channels into one. Although both channels are the same, it's still a stereo source.

specifying mono with Lame presets

Reply #2
Quote
It's not a mono input though, unless you merge the two channels into one. Although both channels are the same, it's still a stereo source.

Tgoose is correct. If you've got 2 channels, it's not mono.

specifying mono with Lame presets

Reply #3
Or you could extract either the left or right channel to a real mono WAV file using Audacity or any audio editor, and then encode it using LAME.
(More work, but what else is there?)
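
For anyone scripting that extraction step instead of using an editor, a minimal sketch in Python, assuming a 16-bit stereo source and hypothetical filenames:

```python
# Hedged sketch: copy the left channel of a 16-bit stereo WAV into a
# true mono WAV, as an alternative to doing it by hand in Audacity.
import wave

with wave.open("stereo.wav", "rb") as src:   # hypothetical input name
    assert src.getnchannels() == 2 and src.getsampwidth() == 2
    framerate = src.getframerate()
    frames = src.readframes(src.getnframes())

# Each stereo frame is 4 bytes: 2 bytes left sample, 2 bytes right sample.
left = b"".join(frames[i:i + 2] for i in range(0, len(frames), 4))

with wave.open("mono.wav", "wb") as dst:     # hypothetical output name
    dst.setnchannels(1)
    dst.setsampwidth(2)
    dst.setframerate(framerate)
    dst.writeframes(left)
```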

specifying mono with Lame presets

Reply #4
Can you not keep it stereo, then get LAME to output mono? I don't know how, since I've never needed to, but I'm sure there must be some simple switch.

specifying mono with Lame presets

Reply #5
Right. Technically it's not a mono input. However, if LAME produces a 20% lower bitrate when I specify -m m, then the default joint-stereo algorithm is rather inefficient and is wasting a lot of bits to encode a signal in which there are no differences between the right and left channels.

My inclination is to encode specifying mono because of the bits it saves. But I'm still wondering if this will mess with the quality of the preset (-V2 or -V4 or whatever). I know the presets are ideally tuned to work without any extra switches. However, it seems like specifying mono wouldn't actually mess with the encoding algorithm itself, just the stereo handling, which I presume occurs somewhat independently of the encoding algorithm... Any thoughts on this?
God kills a kitten every time you encode with CBR 320

specifying mono with Lame presets

Reply #6
I'm a tad skeptical that the input signal is pure mono. Have you actually done a bit-for-bit comparison? Maybe you could post a sample.

EDIT: OK, never mind. I did a test and I see that there is a size difference. But the extra bits aren't being wasted; they are increasing the quality of the MP3, provided you used joint stereo and not regular stereo.
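
For reference, the bit-for-bit comparison being asked about can be scripted. A minimal sketch in Python, assuming a 16-bit stereo WAV ("rip.wav" is a hypothetical filename):

```python
# Hedged sketch: check whether every left sample equals the matching
# right sample in a 16-bit stereo WAV.
import wave

with wave.open("rip.wav", "rb") as w:
    assert w.getnchannels() == 2 and w.getsampwidth() == 2
    frames = w.readframes(w.getnframes())

identical = all(
    frames[i:i + 2] == frames[i + 2:i + 4]   # left 2 bytes vs right 2 bytes
    for i in range(0, len(frames), 4)
)
print("channels are bit-for-bit identical" if identical else "channels differ")
```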

specifying mono with Lame presets

Reply #7
Quote
However, it seems like specifying mono wouldn't actually mess with the encoding algorithm itself, just the stereo algorithm which I presume occurs somewhat independently of the encoding algorithm... Any thoughts on this?


[edit warning: this is all incorrect.....]

I don't think it should significantly harm the preset, but there is a certain tendency for the available maximum frame size to be more limited for a single channel than for one of two channels, because -m j and -m s share the maximum 320 kbps frame size between the channels. But being able to give more than 160 kbps to a channel in stereo mode isn't something the encoder could count on anyway; it's just that sometimes it could, if it helped.
It's interesting because when encoding mono material in -m j mode, the separation is always low/zero, so it should maybe require just 8 kbps to represent as silence, leaving the mid channel up to 312 kbps available to accommodate complexity. In -m m mode it will have only up to 160 kbps, so there is a case that -m m limits complexity more than -m j.
-at least I think that is how the frame-size distribution works...

Note that the encoder's main task is to reduce complexity anyway, so having very large frame sizes available should very rarely be advantageous to actually use.

Certainly -m s mode would be mad for encoding mono, but -m j could be preferable to mono. As usual, we can only guess without test results. I'm tempted to think that if there were a problem with using mono, it would have been documented by now.
no conscience > no custom

specifying mono with Lame presets

Reply #8
Quote
I'm a tad skeptical that the input signal is pure mono. Have you actually done a bit-for-bit comparison? Maybe you could post a sample.

EDIT: OK, never mind. I did a test and I see that there is a size difference. But the extra bits aren't being wasted; they are increasing the quality of the MP3, provided you used joint stereo and not regular stereo.


Deep_Elem: it is bit-for-bit the same left and right. But I don't see how increased bits mean a higher-quality MP3 if they're going to joint stereo over mono, given that there isn't any stereo data here.
Unless it's because the encoding algorithm is optimized for j-stereo, as ChiGung hypothesized.
God kills a kitten every time you encode with CBR 320

specifying mono with Lame presets

Reply #9
As I understand it, if the L and R channels are identical, j-stereo will result in the mid channel being equal to the L and R channels, i.e. M = L = R, and the side channel will be 0. Where R = L, this computes as follows:

M = (L+R)/2 = (L+L)/2 = 2L/2 = L
S = (L-R)/2 = (L-L)/2 = 0

Therefore, the encoder would use an extremely small number of bits (practically but not quite zero) to encode the S channel. Hence all the remaining bits would be used to encode the M channel. Since your j-stereo file is bigger than your mono file, it follows that it is of higher quality, since (almost) all the bits in the j-stereo file are encoding the M channel, while in the mono file all bits are encoding the L (or R) channel, which is the same as the M channel. I.e., more bits encoding the same channel results in higher quality, all other settings being equal. I can't see how this could be done 'inefficiently' (aside from an actual bug), as it is pretty straightforward.
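
A toy numeric check of those mid/side identities, using arbitrary sample values (nothing here is LAME-specific):

```python
# When L == R, the side signal is identically zero and the mid signal
# reproduces the input, matching the algebra above.
L = [100, -250, 3000, 7]                  # arbitrary sample values
R = list(L)                               # bit-identical right channel

M = [(l + r) / 2 for l, r in zip(L, R)]   # mid
S = [(l - r) / 2 for l, r in zip(L, R)]   # side

assert M == [float(x) for x in L]
assert all(s == 0 for s in S)
```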

I can't give an explanation as good as ChiGung's for why the mono encode is at a lower bitrate. Logically, it MUST be because the encoding algorithm is applying lower quality to the single channel wave file on encoding. ChiGung's explanation of why this would be the case makes sense to me. If there is an inefficiency anywhere, based on CG's explanation, it would be in the way the encoder handles single channel input, not in the way it applies j-stereo to identical two-channel input.

Btw, does anyone know the precise difference between -m m and -a?

specifying mono with Lame presets

Reply #10
Btw, here are some findings:
1. I've verified that the WAV file is bit-for-bit identical between right and left channels.
2. When I use Audacity to form a mono WAV file (so, half the size of the stereo) and then encode it using -V4, the output MP3 is bit-for-bit equivalent to the MP3 produced by encoding the (false) stereo WAV using -V4 -m m (see the sketch below).

Hence I can conclude that
* the -V4 algorithm is unaffected by specifying -m m
* LAME's joint-stereo algorithm is inefficient and uses extra bits on files that should be mono but really aren't (again, the bitrate is 20% higher, which is certainly above "practically but not quite zero")
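
A minimal sketch of the check behind finding #2, with hypothetical filenames for the two MP3s:

```python
# Hedged sketch: byte-for-byte comparison of the MP3 produced from the
# downmixed mono WAV vs. the MP3 from the stereo WAV encoded with -m m.
import filecmp

same = filecmp.cmp("mono_wav_V4.mp3", "stereo_wav_V4_mm.mp3", shallow=False)
print("bit-for-bit identical" if same else "files differ")
```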
God kills a kitten every time you encode with CBR 320

specifying mono with Lame presets

Reply #11
Quote
Btw, here are some findings:
1. I've verified that the WAV file is bit-for-bit identical between right and left channels.
2. When I use Audacity to form a mono WAV file (so, half the size of the stereo) and then encode it using -V4, the output MP3 is bit-for-bit equivalent to the MP3 produced by encoding the (false) stereo WAV using -V4 -m m.

-m m in that case tells LAME to produce a single-channel MP3; otherwise, if it receives a 2-channel WAV it will produce a 2-channel MP3.

-a is used to downmix raw PCM, because in the case of raw PCM input you need to tell LAME how many channels the input has using -m m or -m s (it's a little mixed up there).
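
To make the two switches concrete, here is a minimal sketch of each invocation as described above (hypothetical filenames; lame assumed on PATH; whether -a is strictly needed for non-raw input is left to the LAME docs):

```python
# Hedged sketch of the two switches contrasted above.
import subprocess

# -m m on a stereo WAV: produce a single-channel MP3.
subprocess.run(["lame", "-V4", "-m", "m", "stereo.wav", "out_mm.mp3"], check=True)

# -a: downmix to mono; per the post above, mainly relevant for raw PCM
# input, where LAME cannot detect the channel count from a header.
subprocess.run(["lame", "-V4", "-a", "stereo.wav", "out_a.mp3"], check=True)
```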

Quote
Hence I can conclude that
* the -V4 algorithm is unaffected by specifying -m m

I think -m m does what it should.
Quote
* LAME's joint-stereo algorithm is inefficient and uses extra bits on files that should be mono but really aren't (again, the bitrate is 20% higher, which is certainly above "practically but not quite zero")

The increased bitrate may or may not be due to:
-redundant packaging of the extra channel
-increased encoding accuracy that the extra channel facilitates
-or some encoding inefficiency

Sure could be enlightening to examine the actual bitstream produced - check the sizes of the side-channel encodes. There's a graphical analyser thingy in the source code, I think; never bothered to work it myself...
no conscience > no custom

specifying mono with Lame presets

Reply #12
tim, I simply don't agree that your recent conclusions follow logically from the tests you have run. All you have shown by encoding a one-channel WAV file is that the -m m switch tells the codec to treat the input as a single channel, i.e. it probably just reads one channel and ignores the other. No conclusions can be drawn about j-stereo from this, since j-stereo isn't used in either case.

LAME's joint stereo algorithm has been tested over and over again. LAME is optimized for encoding two-channel input using joint stereo. It is not optimized for encoding single-channel input. Hence, until I hear from a LAME developer about this, I stand by what I posted before.

specifying mono with Lame presets

Reply #13
*Mono is not limited to 160 kbps.

*What you are encountering is a safeguard in the joint-stereo algorithm: we do not allow the side channel to be totally removed. If your input is twice the same channel, then it will be inefficient, but that is not a big problem, as this is quite rare compared to real stereo content.

specifying mono with Lame presets

Reply #14
Quote
*Mono is not limited to 160 kbps.

no conscience > no custom

specifying mono with Lame presets

Reply #15
Thanks for wading in Gabriel.

That seems like a rather large inefficiency for a safeguard, but so be it. I'll remember to use the -m m switch if I'm dealing with two identical channels from now on.

specifying mono with Lame presets

Reply #16
Out of curiosity, if you encode with joint stereo, then decode back to WAV, are the L and R channels STILL bit-identical?
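
That round trip is easy to script. A minimal sketch using lame --decode, with hypothetical filenames:

```python
# Hedged sketch: decode the joint-stereo MP3 back to WAV, then compare
# the two channels sample by sample (assumes 16-bit stereo output).
import subprocess
import wave

subprocess.run(["lame", "--decode", "jstereo_V4.mp3", "roundtrip.wav"], check=True)

with wave.open("roundtrip.wav", "rb") as w:
    frames = w.readframes(w.getnframes())

identical = all(
    frames[i:i + 2] == frames[i + 2:i + 4]
    for i in range(0, len(frames), 4)
)
print("still bit-identical" if identical else "channels now differ")
```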

specifying mono with Lame presets

Reply #17
Quote
-m m in that case tells LAME to produce a single-channel MP3; otherwise, if it receives a 2-channel WAV it will produce a 2-channel MP3.


Sorry, but that's just plain wrong.
If you use -m m on a stereo input file, the encoder sums the two channels together and outputs a (summed) single-channel MP3.

------
Edit

Hang on, ChiGung... maybe I misread your post, and I may have replied in haste. My apologies. The important word there was 'otherwise'.
I'll just STFU.
Cheers,
Bruce.
www.audio2u.com
The home of quality podcasts
(including Sine Language, a weekly discussion on all things audio)

specifying mono with Lame presets

Reply #18
No worries, I can't be too offended, having put my own foot in it earlier.
no conscience > no custom


specifying mono with Lame presets

Reply #19
Thanks for clarifying, Gabriel.
I agree with Deep_Elem that 20% seems a large inefficiency. (But note that the sound quality was pretty poor in this encode, so it's an increase from 80 kbps to 97 kbps, stuff like that. If it were normal-quality/complexity music, it's possible that the percentage increase would have been much less.)

For the record, here's my logic on the inefficiency of the joint-stereo algorithm *in this case*:
* When encoding a two-channel WAV in which the L and R channels are identical, the joint-stereo algorithm should put nearly all the bits in the M channel, which I'd expect to be nearly identical to mono, with a little extra bit usage.
* When I encode the two-channel WAV file, the bitrate is 20% higher than when I first turn it into a mono WAV file and then encode, with the same settings (-V4, in this case); a sketch of the bitrate measurement follows below.
* I presume that the "padding" needed to run the j-stereo algorithm is minimal - certainly less than 20%.
* Hence, I conclude (in this case) some inefficiency in the j-stereo algorithm's bit usage.
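
The bitrate measurement in the second bullet can be reproduced with a short sketch (hypothetical filenames; duration taken from the source WAV):

```python
# Hedged sketch: average bitrate = file size in bits / duration in
# seconds. Filenames are hypothetical stand-ins for the two encodes.
import os
import wave

with wave.open("stereo.wav", "rb") as w:
    duration = w.getnframes() / w.getframerate()   # seconds of audio

for path in ("jstereo_V4.mp3", "mono_V4.mp3"):
    kbps = os.path.getsize(path) * 8 / duration / 1000
    print(f"{path}: {kbps:.1f} kbps average")

# Sanity check on the figures quoted above: 97/80 - 1 = 0.2125, i.e.
# roughly the 20% gap under discussion.
```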


Finally, I don't know whether the "default"-encoded MP3 decodes to output that is bit-for-bit identical between the R and L channels.
God kills a kitten every time you encode with CBR 320