Quote from: paracent on 14 August, 2013, 07:05:38 PM
"looks"

This is the problem, full stop. We listen with our ears, not our eyes. How the signal "looks" is irrelevant. Brush up on TOS #8 before posting on this topic again.
Are you really using a 16.5 kHz low-pass for all your music needs? That would explain...
When using a lossy codec, you probably want to use a codec, setting, profile, or feature that is >>less<< lossy than the alternative.
Quote from: paracent on 15 August, 2013, 09:51:33 AM
When using a lossy codec, you probably want to use a codec, setting, profile, feature, whatever, that is >>less<< lossy than the other one

How is this being judged? You found that a full-scale sinusoid gives problems with a low-pass setting that is 1% above it. Despite what David said, I don't see this as a problem with LAME; rather, I see it as another example of how to break a codec. Besides your eyes, what has given you the idea that -V0 without a user-configured low-pass doesn't provide enough cow bell?
"Humor me" - This would be off-topic. Find your amusement somewhere else.
On the technical side of the problem:

I believe this might not get fixed, not because it is a special case, but because of the way it is implemented. This is not an IIR or FIR lowpass filter working in the time domain; instead, the filter is applied over the calculated band energies in the MDCT tool. In a sense, it can be considered a filter acting in the frequency domain, and the mirroring in question is similar to the artifact of a resampler with a soft filter (with the difference that in this case it is mirrored above the cutoff, not below).

I would like to mention that the problem occurs across the whole frequency range. For example, a 15.5 kHz sine sampled at 48 kHz and encoded with lame -V0 --lowpass 16 (which reports "Using polyphase lowpass filter, transition band: 15677 Hz - 16258 Hz") also shows the mirroring, in this case at 16 kHz.

Usually, we should expect signals at these frequencies to be at -30 dBFS or lower, and the aliasing intensity to fall off quickly the further you get from the cutoff frequency.
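For anyone who wants to reproduce this, here is a minimal sketch (my own, not from the thread) that generates the full-scale 15.5 kHz test tone at 48 kHz as a 16-bit WAV using only the Python standard library; the lame invocation in the comment is the one described above:

```python
# Sketch: generate a full-scale 15.5 kHz sine, 16-bit mono, 48 kHz,
# so the mirroring can be reproduced with e.g.:
#   lame -V0 --lowpass 16 tone.wav tone.mp3
# File name and constants are my choices, not from the thread.
import math
import struct
import wave

SAMPLE_RATE = 48000   # Hz
FREQ = 15500.0        # Hz, just below the 16 kHz lowpass transition band
DURATION = 2.0        # seconds
PEAK = 32767          # full-scale 16-bit amplitude

def make_tone(path):
    """Write a full-scale sine to `path` and return the sample list."""
    n = int(SAMPLE_RATE * DURATION)
    samples = [
        int(PEAK * math.sin(2 * math.pi * FREQ * i / SAMPLE_RATE))
        for i in range(n)
    ]
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)            # 16-bit samples
        w.setframerate(SAMPLE_RATE)
        w.writeframes(struct.pack("<%dh" % n, *samples))
    return samples

if __name__ == "__main__":
    make_tone("tone.wav")
```

Encoding the resulting file and inspecting the decoded spectrum should show the mirrored component near 16 kHz described above.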
I'm not here to humor anybody,
besides my ears, I also need proof
The point the OP still seems to have missed, and it's been said several times in this thread, is that because lossy codecs intentionally change the waveform (though hopefully not the sound), you need to keep the signal away from 0 dBFS to avoid clipping. Usually the clipping is caused by the adaptive lossiness of the encoder rather than its "fixed" filter, and usually the clipping is inaudible, but it's still good practice and not news.
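The headroom advice can be sketched in a few lines (my example, not anyone's actual workflow in this thread): attenuate by a few dB before lossy encoding, so that encoder-induced overshoot on decode is less likely to clip a fixed-point output. The function names and the -3 dB figure are my assumptions:

```python
# Sketch: leave headroom before lossy encoding by attenuating integer PCM.
# Helper names and the amount of attenuation are illustrative assumptions.
import math

def peak_dbfs(samples, full_scale=32767):
    """Peak level of an integer PCM buffer, in dBFS."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return float("-inf")
    return 20.0 * math.log10(peak / full_scale)

def attenuate(samples, db):
    """Apply a gain of `db` decibels (negative values attenuate)."""
    gain = 10.0 ** (db / 20.0)
    return [int(round(s * gain)) for s in samples]

if __name__ == "__main__":
    buf = [32767, -32767, 16000]        # full-scale test buffer
    quieter = attenuate(buf, -3.0)      # roughly 3 dB of headroom
    print(round(peak_dbfs(quieter), 2)) # about -3.0
```

The same effect is more commonly achieved with ReplayGain-style tagging or a decoder-side float pipeline, but the principle is simply keeping the pre-encode peak below full scale.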
FWIW, I don't think this results from clipping. Feel free to generate the samples and have a look for yourself.
Regarding the importance of eliminating clipping, I'm still waiting for another example of a published title of music that causes an audible problem when a lossy-encoded version is decompressed to fixed point.
This is a non-issue.
Clipping in the mastering stage, which necessarily means prior to lossy compression, is completely irrelevant here.

The challenge was to provide a track that is properly mastered and therefore does not clip, but that is forced to exhibit audible clipping by subsequent lossy compression.