Topic: 'Normalization' of PCM audio - subjectively benign?

'Normalization' of PCM audio - subjectively benign?

Reply #25
And the point here is that, IMHO, LP transfers don't need much DSP at all. The vast majority of the editing I do concerns the removal of impulse noise, which involves edits to small, isolated sections of waveform. The changes made to the waveform at those locations swamp any changes to the quantisation noise that may result.

The only "global DSP" I do to LP recordings is normalisation (nearly always), some modest EQ (very rarely), and broadband noise reduction (sometimes, and only in moderation). Apart from normalisation, these other operations again make a much bigger change to the audible nature of the music than the minor change in quantisation noise they cause (which I still maintain will remain beneath the vinyl noise floor).


I have to disagree - even the DSPs you list should be done at higher than 16-bit.

If one is simply recording to PCM and perhaps removing (!) a few clicks, then going straight to 16-bit is fine.

But even normalization involves multiplying each sample by an arbitrary factor without regard to their values relative to each other, and rounding errors will certainly affect the resulting waveform and its sound - I've heard this definitively. These tiny changes in the relative values of the samples might seem inconsequential in theory, but the ear/brain is a very sensitive 'device'.

Further, it's the very low level (ambience and such like) signals which are brought into prominence, and which in the original waveform are quantized with too few bits to sound realistic.

It's worth bearing in mind that when 16/44.1 PCM was 'invented', there was no such thing as DSP at all (even if it was envisaged); it didn't appear until well into the late 80's. The universal adoption of 24-bit resolution (and much higher sampling rates) was mainly to facilitate it.

'Normalization' of PCM audio - subjectively benign?

Reply #26
But even normalization involves multiplying each sample by an arbitrary factor without regard to their values relative to each other, and rounding errors will certainly affect the resulting waveform and its sound - I've heard this definitively. These tiny changes in the relative values of the samples might seem inconsequential in theory, but the ear/brain is a very sensitive 'device'.


It depends what you mean by "working at more than 16-bits".

If you multiply two numbers together, you will normally generate more bits (more decimal places, if you like). Programs like Cool Edit Pro don't just throw these bits away or round them off - they dither back down to 16-bits by default.

You can convert to floating point, perform the operation, and dither back to 16-bits if you want - but that's pretty much what Cool Edit Pro is doing anyway when "working at 16-bits".

Further, it's the very low level (ambience and such like) signals which are brought into prominence, and which in the original waveform are quantized with too few bits to sound realistic.

It's got nothing to do with "sounding realistic".

With correct dither, it's about noise - pure and simple.

It can be just four bits - it will sound perfectly "realistic", but with bucket loads of noise on top!

Cheers,
David.

'Normalization' of PCM audio - subjectively benign?

Reply #27
It depends what you mean by "working at more than 16-bits".

You can convert to floating point, perform the operation, and dither back to 16-bits if you want - but that's pretty much what Cool Edit Pro is doing anyway when "working at 16-bits".

Converting to 24-bit or 32-bit float might help; I'm not sure. I would definitely rather work with a 'native' 24-bit recording (which is what I'm doing).
It's got nothing to do with "sounding realistic".

With correct dither, it's about noise - pure and simple.

It can be just four bits - it will sound perfectly "realistic", but with bucket loads of noise on top!


It has everything to do with 'realism'. 16/44 typically introduces several percent quantization distortion below -70dB (played back through a good, linear DAC it typically reaches 6-10% by -80dB, with more distortion than signal by -100dB).

Quantization (randomly distributed) distortion is completely unlike 'harmonic' distortion produced in the analogue domain, which can reach 10% or more and still sound like 'colouration' - i.e. benign.

Were one to inflict these levels of quantization distortion (say, 5%) across the entire dynamic range of a music recording, the result wouldn't just sound bad, it would be quite literally unlistenable.

R.

'Normalization' of PCM audio - subjectively benign?

Reply #28
Just to put some hard numbers down on the signal-to-noise ratio of vinyl vs CD.

I took a recording of the Hi Fi News test record, with one of its +15dB, 300Hz tracks. (It was one of Andy's, actually.) I made a large (5 million point) FFT amplitude spectrum plot, and compared the 300Hz peak amplitude with an eyeballed average noise amplitude around the peak. It comes out to be 80-84dB. Now, you could probably eke another 3dB out of this if your cart were able to track a +18dB tone. Very few can. And this SNR number is compromised quite a bit by the speed variation of the table, which spreads the power of the test tone out by quite a bit. You might get 10-20dB back in your SNR if you manage to factor that back in. Oh, and there's more background noise at 300Hz due to rumble and hiss and whatnot, and the observed background noise at around 10kHz is lower by about 16dB, so let's just pretend that we're looking at the 10kHz noise floor in this recording, rather than the 300Hz noise floor. So for the sake of argument, let's call the measured vinyl dynamic range 123dB. (84dB measured + 3dB from not testing the complete headroom of vinyl + 20dB expected from eliminating speed variation issues + 16dB for a more ideal noise floor measurement)
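For anyone who wants to repeat this kind of measurement, a minimal numpy sketch of the peak-versus-noise-floor comparison described above might look like this (the array `x` and sample rate `fs` are hypothetical stand-ins for your own capture, and the bin ranges are only illustrative):

```python
import numpy as np

def tone_vs_noise_db(x, fs, tone_hz=300.0, fft_size=2**22):
    """Level of the strongest bin near tone_hz versus an 'eyeballed'
    (median) noise-bin level nearby, in dB."""
    seg = x[:fft_size]
    spec = np.abs(np.fft.rfft(seg * np.hanning(len(seg)), n=fft_size))
    freqs = np.fft.rfftfreq(fft_size, d=1.0 / fs)

    peak = spec[(freqs > 0.9 * tone_hz) & (freqs < 1.1 * tone_hz)].max()

    # "Eyeball" the noise floor: median bin level between the 2nd and 3rd
    # harmonics, away from the tone's skirt.
    floor = np.median(spec[(freqs > 2.2 * tone_hz) & (freqs < 2.8 * tone_hz)])
    return 20 * np.log10(peak / floor)
```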

Note that most people would call that a very, very generous figure.

In comparison: I made a WAV file in Audacity composed of a 300Hz tone of amplitude 1.0. I exported it to a 16-bit 44.1kHz WAV with noise-shaped dither. I did another large FFT of the result. The peak-to-average ratio came out to... drumroll... 151dB. No, that is not a typo. A 16-bit WAV file is perfectly capable of 151dB of dynamic range.

Conclusion: 16-bit recording has a noise density that is at least 28dB lower than vinyl.
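The 16-bit half of the comparison is easy to reproduce. A rough sketch, using plain TPDF dither rather than the noise-shaped dither described above (so the per-bin figure will land somewhat below 151dB):

```python
import numpy as np

fs, f0, n = 44100, 300.0, 2**22
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f0 * t)                      # full-scale 300Hz tone

# Quantise to 16 bits with TPDF dither (sum of two uniform random values)
lsb = 1.0 / 32767
d = (np.random.uniform(-0.5, 0.5, n) + np.random.uniform(-0.5, 0.5, n)) * lsb
q = np.round((x + d) * 32767) / 32767

spec = 20 * np.log10(np.abs(np.fft.rfft(q * np.hanning(n))) + 1e-30)
noise_floor = np.median(spec[len(spec) // 2:])      # bins far above the tone
print("peak-to-average ratio: %.0f dB" % (spec.max() - noise_floor))
```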

'Normalization' of PCM audio - subjectively benign?

Reply #29
But even normalization involves multiplying each sample by an arbitrary integer without regard to their values relative to each other, and rounding errors will certainly effect the resulting waveform and it's sound - I've heard this definitively. These tiny changes in the relative values of the samples might seem inconsequential in theory, but the ear/brain is a very sensitive 'device'.

So, just to make sure I understand your position on this....

Suppose we have a 24 bit recording of a vinyl LP. Consider two possible normalisation methods:
1. Normalise at 24 bit resolution, then dither down to 16 bit for playback.
2. Dither down to 16 bit, then normalise at 16 bit resolution.
I take it you maintain there will be an audible difference between the two. Have I understood your position correctly?
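For concreteness, the two methods could be sketched like this, assuming the capture is a float array `x24` with full scale at ±1.0 and `gain` is the normalisation factor (both hypothetical names), with plain TPDF dither standing in for whatever the editor would apply:

```python
import numpy as np

def dither_to_16bit(x):
    """Quantise float samples (full scale +/-1.0) to 16 bits with TPDF dither."""
    lsb = 1.0 / 32767
    d = (np.random.uniform(-0.5, 0.5, x.shape) +
         np.random.uniform(-0.5, 0.5, x.shape)) * lsb
    return np.round((x + d) * 32767) / 32767

def method_1(x24, gain):
    """Normalise at high resolution, then dither down to 16 bits."""
    return dither_to_16bit(x24 * gain)

def method_2(x24, gain):
    """Dither down to 16 bits first, then normalise at 16-bit resolution."""
    return dither_to_16bit(dither_to_16bit(x24) * gain)
```

The only difference between the two outputs is that method 2 carries the first pass's dither and quantisation noise through the gain stage, so its noise floor sits roughly `gain` times higher; whether that is audible underneath vinyl surface noise is exactly what is being asked here.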

'Normalization' of PCM audio - subjectively benign?

Reply #30
So, just to make sure I understand your position on this....

Suppose we have a 24 bit recording of a vinyl LP. Consider two possible normalisation methods:
1. Normalise at 24 bit resolution, then dither down to 16 bit for playback.
2. Dither down to 16 bit, then normalise at 16 bit resolution.
I take it you maintain there will be an audible difference between the two. Have I understood your position correctly?


Yes. To be exact, the normalization would be done at 32-bit float.

There will definitely be a clear mathematical difference between files created with the two different methods, and yes, an audible one. To what degree depends on how much normalization is applied. With my particular setup this can be +16dB or more.

Converting first needlessly discards useful resolution, and normalizing at 16 bits is far less precise than at 24.

So, carry out the more complex operation first (sample multiplication) in high resolution (much bigger numbers for the machine to work with), then the simpler process (truncation + dither).

Whatever, I will not apply any DSPs in 16-bit if it can possibly be avoided.

'Normalization' of PCM audio - subjectively benign?

Reply #31
There will definitely be a clear mathematical difference between files created with the two different methods, and yes, an audible one.

OK, ENOUGH! 

You're violating TOS #8.


'Normalization' of PCM audio - subjectively benign?

Reply #33
Quote
2. Dither down to 16 bit, then normalise at 16 bit resolution

please, don't do that - read post #208 here: http://www.hydrogenaudio.org/forums/index....134&st=200#

Ok, before we start listening with our eyes, there is the "Dither Transform Results (increases dynamic range)" setting.  Was/is it checked?

'Normalization' of PCM audio - subjectively benign?

Reply #34
There will definitely be a clear mathematical difference between files created with the two different methods, and yes, an audible one.

OK, ENOUGH! 

You're violating TOS #8.


oooOOOoo!

edit >> well, having read the TOS in question, I don't know what you mean.

Or you don't.

'Normalization' of PCM audio - subjectively benign?

Reply #35
Enough is right! This is getting too pedantic and argumentative.

If RockFan is right, then the quantization noise from the operations would be audible - with a blind test. And I'll give the benefit of the doubt here, because it is, perhaps, quite reasonable to listen to classical music that is highly amplified - 60dB or more - to catch the end of some ambience or instrument resonance as it fades to the background noise. I've done it before. guru used a similar use case to poke holes in MPC's transparency. It's unlikely, but not completely outside the realm of possibility, to catch the quantization error in such a situation in an ABX test.

What I'd say is to take a very quiet music selection, amplify it to full scale at both 24-bit and 16-bit resolution, then quantize the 24-bit one to 16-bit. I would also strongly suggest highpassing at 40-50Hz, at 24-bit resolution, to knock out the rumble in order to get higher gain during the normalization. (Or just crank your volume up really loud.) Then ABX the 24-bit-processed stuff to the 16-bit-processed stuff for 32 trials.

I would offer to prepare the samples myself from my own classical LPs but I'm going on vacation for a week starting this afternoon.
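A minimal sketch of that preparation, assuming scipy and soundfile are available (the file names are hypothetical, and plain TPDF dither stands in for whatever dither your editor uses):

```python
import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfilt

x, fs = sf.read("quiet_lp_excerpt_24bit.wav")        # hypothetical 24-bit capture

# High-pass around 40-50Hz at full resolution to remove rumble and allow more gain
sos = butter(4, 45.0, btype="highpass", fs=fs, output="sos")
x = sosfilt(sos, x, axis=0)

gain = 0.99 / np.max(np.abs(x))                      # normalise to just below full scale

def dither16(y):
    lsb = 1.0 / 32767
    d = np.random.uniform(-0.5, 0.5, y.shape) + np.random.uniform(-0.5, 0.5, y.shape)
    return np.round((y + d * lsb) * 32767) / 32767

sf.write("processed_at_24bit.wav", dither16(x * gain), fs, subtype="PCM_16")
sf.write("processed_at_16bit.wav", dither16(dither16(x) * gain), fs, subtype="PCM_16")
```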

'Normalization' of PCM audio - subjectively benign?

Reply #36
Enough is right! This is getting too pedantic and argumentative.

If RockFan is right, then the quantization noise from the operations would be audible - with a blind test. And I'll give the benefit of the doubt here, because it is, perhaps, quite reasonable to listen to classical music that is highly amplified - 60dB or more - to catch the end of some ambience or instrument resonance as it fades to the background noise. I've done it before. guru used a similar use case to poke holes in MPC's transparency. It's unlikely, but not completely outside the realm of possibility, to catch the quantization error in such a situation in an ABX test.

What I'd say is to take a very quiet music selection, amplify it to full scale at both 24-bit and 16-bit resolution, then quantize the 24-bit one to 16-bit. I would also strongly suggest highpassing at 40-50Hz, at 24-bit resolution, to knock out the rumble in order to get higher gain during the normalization. (Or just crank your volume up really loud.) Then ABX the 24-bit-processed stuff to the 16-bit-processed stuff for 32 trials.

I would offer to prepare the samples myself from my own classical LPs but I'm going on vacation for a week starting this afternoon.

A perfectly reasonable proposal, if one really feels that these issues need to be 'proved' via ABX.

But I don't really understand anyone taking issue with my stating that:

1) if substantial normalization is needed with an 'under-recorded' 24-bit file, carrying it out at 24 bits will result in better use of the resolution of the eventual 16-bit file.

and;

2) that any and all DSPs (including normalisation) are better applied while in 24 bits.

'Normalization' of PCM audio - subjectively benign?

Reply #37
Quote
Ok, before we start listening with our eyes
use your eyes again (to read, now) and look at this detail from my post:
Quote
...I can host the 44.1k/16-bit sample used for the test, or you can use some other one extracted from a CD - trust me, you will hear the noise.


Quote
there is the "Dither Transform Results (increases dynamic range)" setting. Was/is it checked?
...yes... want a screenshot too?
Do you get better results if you uncheck it?

regards


 

'Normalization' of PCM audio - subjectively benign?

Reply #39
I'll leave you audiophiles alone now.


sorry, poor English here - I didn't understand.
But I'm sure you're kidding!

'Normalization' of PCM audio - subjectively benign?

Reply #40
A perfectly reasonable proposal, if one really feels that these issues need to be 'proved' via ABX.

I was going to propose just such an experiment once RockFan had confirmed that I had correctly understood his position, but Axon beat me to it. Does anyone want to go to the trouble of an ABX test? It's really not that important in the grand scheme of things. And as RockFan says....

But I don't really understand anyone taking issue with my stating that:
1) if substantial normalization is needed with an 'under-recorded' 24-bit file, carrying it out at 24 bits will result in better use of the resolution of the eventual 16-bit file.
and;
2) that any and all DSPs (including normalisation) are better applied while in 24 bits.

I certainly don't take issue with that. Obviously doing it at 24 bit cannot be worse than 16 bit, so you've nothing to lose.

All I ever questioned was whether there is an audible difference in the context of doing LP transfers. You believe there is, I don't. We can both be happy with our positions. And when you're talking about LP transfers, all this stuff about quantisation noise is like arguing about angels on the head of a pin compared to the *really* important stuff, like getting the LP properly clean, using a good turntable and phono preamp, etc.

'Normalization' of PCM audio - subjectively benign?

Reply #41
All I ever questioned was whether there is an audible difference in the context of doing LP transfers. You believe there is, I don't. We can both be happy with our positions. And when you're talking about LP transfers, all this stuff about quantisation noise is like arguing about angels on the head of a pin compared to the *really* important stuff, like getting the LP properly clean, using a good turntable and phono preamp, etc.


I don't think we're really disagreeing fundamentally here, except perhaps on the overall 'wisdom' of using DSPs on 16/44 PCM.

Possibly what's been lost sight of is that I need to use *substantial* amounts of digital gain on some recordings due to the level-matching issues I described. There is simply no reason for me to throw out all those bits before I do it.

If I feel the inclination, I might try a few DSPs such as HF boost on a recording sometime, at 24 and 16 bits, and see if the final 16/44 files (or CDRs) can be ABX'd.

Generally speaking, my motto is "if it's worth doing, it's worth overdoing".

'Normalization' of PCM audio - subjectively benign?

Reply #42
Possibly what's been lost sight of is that I need to use *substantial* amounts of digital gain on some recordings due to the level-matching issues I described. There is simply no reason for me to throw out all those bits before I do it.

No argument with that. Indeed, if you're suffering from very low recording levels, then it's a good thing you *are* recording at 24 bit. For my part, since I am able to achieve decent recording levels, and since my so-called 24 bit soundcard (an M-Audio AP2496) has a noise floor that makes it effectively an 18/19 bit card, I am happy to continue working at 16 bit.

If I feel the inclination, I might try a few DSPs such as HF boost on a recording sometime, at 24 and 16 bits, and see if the final 16/44 files (or CDRs) can be ABX'd.

FWIW, I did just try an experiment. Recorded at a deliberately low peak level (-18dB) at 24 bit resolution (although of course the bottom 5 or 6 bits on the AP2496 are pretty much random). Then prepared two files:
1. Normalised at 24 bit, then dithered to 16.
2. Dithered to 16, then normalised.
Loaded them up into Foobar2000 and ABX'd (or rather, failed to ABX) them.

'Normalization' of PCM audio - subjectively benign?

Reply #43
I have to say, as something of a vinyl die-hard, I've always viewed the 'dynamic range' attributed to CD to be very generous.

Typical distortion through a top-notch DAC is something like this:

-60dB - 0.22%
-70dB - 3%
-80dB - 8%
-90dB - 30+%
-100dB - distortion = signal

The nature of this distortion is little discussed, but it needs to be understood that it is not like the evenly distributed harmonic distortion (typically mostly 2nd, some 3rd, a little 4th and so on) that predominates in the analogue domain - it is randomly distributed 'quantization noise' and is extremely obnoxious at levels over a fraction of a percent.

It passes 'unnoticed' because it is only affecting low level stuff like ambience and reverb.

If the entire signal were afflicted with this distortion at approaching 1%, it would be rather unpleasant to listen to. At anything over 3% it would be unlistenable.

I am compelled to wonder, then, how it came to be that signals distorted to the extent they are below -70dB are included in CD's 'dynamic range' numbers. More properly, that figure is actually its 'signal to noise'.

Conversely, vinyl disc is damned for its 'signal to noise' of <70dB, when in truth it has a 'dynamic range' considerably better than this.

Which is the better music carrier? You decide!

edit >> we could also discuss CD's notional 'bandwidth' or 'time-domain resolution'. Perhaps not.

'Normalization' of PCM audio - subjectively benign?

Reply #44
Since the forum has apparently been offline much of the day, some of what I wrote this morning may be a bit outdated, but I think not too much, so here it is. As far as the latest posts go, where do these amazingly high distortion numbers come from? They seem exceedingly unlikely and very inconsistent with measured data.

I want to point out a few aspects of the frequency analysis graph vs RMS measurements, just in case they are not clear to everyone. The graphs are misleading only if one doesn't know what they mean. The frequency spectrum is divided into some number of ranges, the frequency windows. The total noise is the sum of the noise (or audio signal, depending on what one is measuring) in all those windows. As one changes the FFT size for the graph, the graph trace moves up or down accordingly. Increase the FFT size, and thus the number of divisions being considered, and the dB reading at any given frequency is now lower; decrease the FFT size and the reading is higher. If one adds up all those window values, one arrives at the same numbers, giving the same results, as the RMS calculations.
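A quick way to see this effect, with white noise standing in for a recording (a hedged sketch, not the actual measurement being discussed):

```python
import numpy as np

noise = np.random.normal(0, 0.01, 2**20)             # stand-in for a recording's noise
print("total RMS level: %.1f dB" % (20 * np.log10(np.sqrt(np.mean(noise ** 2)))))

for fft_size in (1024, 4096, 16384):
    segs = noise[: (len(noise) // fft_size) * fft_size].reshape(-1, fft_size)
    bins = np.abs(np.fft.rfft(segs, axis=1)).mean(axis=0) / (fft_size / 2)
    print("FFT size %5d: average per-bin level %.1f dB"
          % (fft_size, 20 * np.log10(bins[1:].mean())))
```

The RMS figure stays put, while the average per-bin reading drops by about 6dB for every quadrupling of the FFT size - which is exactly the graph behaviour described above.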

***
I can't know what any particular individual hears, but I do know what has been demonstrated many times under controlled conditions: people often hear what they believe they will hear, in spite of the fact that what they believe they hear isn't even present.

Many people prefer to go on believing their illusions, but illusions are what they often are. I've gone through the experience myself enough times to know this isn't simply nay saying about human abilities. Much the same applies to vision and probably to the other senses.

In this thread
http://www.audiomastersforum.net/amforum/v...opic.php?t=5455
I made arguments that dithering was often irrelevant, even when operating at 16 bits. Dither really only affects very low level signals (in any way that can be perceived). I got a lot of strongly worded counter-arguments saying I was wrong, that not dithering transforms produced audibly bad results.

To verify my belief to myself I extracted a reasonably dynamic DDD track from a commercial CD. It contains a fade to a very low level. Its peak value is a little under -4dB. I applied the following three transforms in CoolEdit (at 16 bit), once with dithering transforms engaged, then again with no dithering.

(1) normalize to 100% -- should affect all samples
(2) hard limit to -3dB, Boost Input by 0dB -- should affect only a minority of samples
(3) normalize to my usual 97% -- should affect all samples

The audible difference between the two seemed subtle but present. Hardly what I would expect from other people's arguments, but still there -- or so it seemed. Loading the samples I was comparing into WinABX showed me that I could not in fact distinguish between the two treatments.

I expected this for most of the track, but I understand the theoretical benefits of dithering at very low levels. I thought I might be able to hear dither vs no dither on the fade-out, if nowhere else. More 'objective' comparisons showed otherwise.

I received some half-hearted excuses as to why that might be. I issued a challenge for anyone to point out any music samples on which I could repeat the experiment and detect a difference in the results. No one seemed to be able to come up with anything.

So I'll issue the same challenge here. If you think there is a difference that can stand up to testing, show me. I'm reasonably sure my equipment is capable. Maybe I'm not. Produce results that show you are.

Making level adjustments in the analogue domain, to match a pre-normalized track with a normalized one, is confounding and difficult. Therefore I propose the following alternative. Normalize at 16 bits. You say it makes an audible difference, I say it does not make a difference relative to using an analogue volume control (i.e. no difference that can be heard, only a difference that can be calculated).

Now amplify by the necessary negative amount to bring the level back to the original value. You thus have a track that can be legitimately compared to the original in WinABX, PCABX, or any other ABX testing program. You have done two transforms at 16 bits, thus increasing the possibility of detecting the deterioration you believe you can hear. You can dither the transforms or not, whichever you think increases your probability of success.
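In code, the proposed round trip might look like this (the file name is hypothetical, and plain TPDF dither stands in for whatever the editor applies):

```python
import numpy as np
import soundfile as sf

def dither16(y):
    lsb = 1.0 / 32767
    d = np.random.uniform(-0.5, 0.5, y.shape) + np.random.uniform(-0.5, 0.5, y.shape)
    return np.round((y + d * lsb) * 32767) / 32767

x, fs = sf.read("original_track.wav")        # the unprocessed 16-bit track, as floats
gain = 0.97 / np.max(np.abs(x))              # normalise to 97%, as in the CoolEdit test

y = dither16(x * gain)                       # first 16-bit transform: normalise
z = dither16(y / gain)                       # second: bring the level back down

sf.write("round_trip.wav", z, fs, subtype="PCM_16")   # ABX this against the original
```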

I know results have been demonstrated with pure tones near the 16 bit lower limit. Here we are considering real recorded music.

'Normalization' of PCM audio - subjectively benign?

Reply #45

It's got nothing to do with "sounding realistic".

With correct dither, it's about noise - pure and simple.

It can be just four bits - it will sound perfectly "realistic", but with bucket loads of noise on top!


It has everything to do with 'realism'. 16/44 typically introduces several percent quantization distortion below -70dB (played back through a good, linear DAC it typically reaches 6-10% by -80dB, with more distortion than signal by -100dB).


I have to say, as something of a vinyl die-hard, I've always viewed the 'dynamic range' attributed to CD to be very generous.

Typical distortion through a top-notch DAC is something like this:

-60dB - 0.22%
-70dB - 3%
-80dB - 8%
-90dB - 30+%
-100dB - distortion = signal



It doesn't matter how many times you say this, or how many times some idiot audiofools say it on other boards, it doesn't make it true!

The truth is very simple...

Correct dither prevents quantisation distortion and replaces it with benign "uncorrelated" noise, below which the original signal is infinitely resolvable (insofar as the noise can be cancelled / averaged away / "heard through").

From your quotes, I don't think you understand dither. There are some fairly bad explanations on the web, but the principle is simple: add "correct" random noise before the rounding or truncation stage. This makes the truncation a stochastic, rather than deterministic, process. This means that, rather than always being rounded to the nearest 16-bit value (or truncated to the one below), a given sample value could be rounded up or down depending on the amplitude of the noise at that instant - and with "correct" noise, the probability of being rounded up versus being rounded down is determined by where the original sample value sits between the two nearest possible 16-bit sample values.

To put it simply in decimal, an original value of 2.9 has nine times more chance of being rounded to 3.0 than to 2.0, but it can go either way. Without dither, it would always end up as 3.0, and at the limit a sine wave always looks like a square wave. With dither, a sine wave looks like a weird, noisy, square-ish wave, but sounds like a sine wave plus "uncorrelated" noise. That is because, to have any appreciation of "frequency" (never mind the actual psychoacoustics of how human ears work) you have to look across time. Looking at the dithered output across time, it is a sine wave plus noise. You started with a sine wave, you added noise, and this gave you a sine wave plus noise. Then you truncated, but you still have a sine wave plus noise!

(To the really smart people: I know I'm just explaining rectangular dither, while triangular is optimal. To the really, really smart people: I know the noise isn't genuinely uncorrelated, but it's uncorrelated up to its second moment with triangular dither, which Lipshitz and Vanderkooy seem to regard as sufficient - who am I to argue?)
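The decimal example is easy to check with a few lines of numpy - a sketch of the rectangular-dither case described above:

```python
import numpy as np

value, trials = 2.9, 100_000
rounded = np.round(value + np.random.uniform(-0.5, 0.5, trials))   # RPDF dither, 1 LSB wide

print((rounded == 3.0).mean())   # ~0.9: rounded up nine times out of ten
print(rounded.mean())            # ~2.9: on average the original value is preserved
```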

Quote
The nature of this distortion is little discussed, but it needs to be understood that it is not like the evenly distributed harmonic distortion (typically mostly 2nd, some 3rd a little 4th and so on) that predominates in the analogue domain - it is randomly distributed 'quantization noise' and is extremely obnoxious at levels over a fraction of a percent.


Without dither, quantisation noise is typically largely harmonic, with the caveat that, being generated in the sampled domain, harmonics that fall above fs/2 will alias back into the band.

Here are some pictures from Cool Edit Pro.

[attachment=2493:attachment]

Without dither, the -90dB FS sine wave has 50% harmonic distortion at 16-bits.

With dither, the harmonic distortion is absent, and there is broadband noise instead.

With noise shaped dither, the level of this noise in the most sensitive region of hearing is lowered, thus increasing the perceived signal to noise ratio.
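The gist of those screenshots can be reproduced with a few lines of numpy - a sketch, not the exact Cool Edit settings, and using plain TPDF dither rather than a noise-shaped one:

```python
import numpy as np

fs, n = 44100, 2**18
t = np.arange(n) / fs
x = 10 ** (-90 / 20) * np.sin(2 * np.pi * 1000.0 * t)       # -90dB FS, 1kHz sine
lsb = 1.0 / 32767

undithered = np.round(x / lsb) * lsb
tpdf = np.random.uniform(-0.5, 0.5, n) + np.random.uniform(-0.5, 0.5, n)
dithered = np.round(x / lsb + tpdf) * lsb

freqs = np.fft.rfftfreq(n, 1 / fs)
i0 = np.argmin(np.abs(freqs - 1000.0))                      # bin of the fundamental

for name, y in (("no dither", undithered), ("TPDF dither", dithered)):
    mag = np.abs(np.fft.rfft(y * np.hanning(n)))
    fund = mag[i0 - 2:i0 + 3].max()
    others = np.delete(mag, np.arange(i0 - 50, i0 + 51))    # everything but the tone
    print("%-12s strongest residual vs fundamental: %.0f dB"
          % (name, 20 * np.log10(others.max() / fund)))
```

Without dither the strongest residual is a distortion harmonic not far below the (already tiny) fundamental; with dither it is just another noise bin, much further down.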


My claim about 4-bit being distortion-free wasn't an idle boast. I've tried it.

I've even put a 6-bit example on line here...

http://mp3decoders.mp3-tech.org/24bit2.html

Scroll down to "To dither, or not to dither?" and have a listen.

What do you think?

(the noise doesn't sound quite right because I mp3 encoded the result to post it to the web - you can try the experiment yourself and listen to the pure linear PCM output using Cool Edit Pro/Audition)


Quote
edit >> we could also discuss CD's notional 'bandwidth' or 'time-domain resolution'. Perhaps not.


We have done so many times before. The threads are in the FAQ.

I won't say any more (like scream TOS 8) because I'm trying to be constructive, but I'm surprised we haven't had a moderator in here!

I hope this post is helpful.

Cheers,
David.

'Normalization' of PCM audio - subjectively benign?

Reply #46
Quote
What do you think?

http://www.hydrogenaudio.org/forums/index....ost&id=2493

that dither is a "spray of white noise" over the whole of the audio and is audible, even if it gives a generally better result in the audio.
This benefit is good for people who can't hear this noise, because we
Quote
can't know what any particular individual hears

The link posted by AndyH-ha, from "some idiot audiofools say it on other boards" (as you wrote), shows that clearly.

Does anyone still have doubts?


regards!

'Normalization' of PCM audio - subjectively benign?

Reply #47

Typical distortion through a top-notch DAC is something like this:

-60dB - 0.22%
-70dB - 3%
-80dB - 8%
-90dB - 30+%
-100dB - distortion = signal


It doesn't matter how many times you say this, or how many times some idiot audiofools say it on other boards, it doesn't make it true!


What? Those numbers are typical of measured results from a good DAC. You do understand that these are measurements at the output of a DAC, not digital-domain analysis?

You seem to be one of these people who take exception to any suggestion that there are shortcomings in digital audio (and CD in particular) at all, or that it's anything but flawless (just like Philips said it was back in '83), and to any advocacy of analogue and vinyl disc, to the point of ready and childish name-calling like "audiofool".

If you're happy with CD, I'm not going to take it away from you (although Sony/Philips might), unlike what befell those who wished to continue buying LPs 15 or so years ago.

The problem is, you see, that a not inconsiderable number of people would actually like to be able to choose which format they buy their music on, including vinyl disc, and the obduracy of views like yours doesn't help very much.

Analogue does have particular virtues (along with its intractable flaws, of course) in its reproduction of music, and CD is far from "perfect", however often and loudly militant digiphiles insist otherwise.

And BTW matey, speaking of TOS, you'll probably find the use of insults, even if you think you're being clever and using them obliquely, is also a breach of them.

'Normalization' of PCM audio - subjectively benign?

Reply #48
And BTW matey, you'll probably find the use of insults, even if you think you're being clever and using them obliquely, is in breach of the TOS.

Then I guess you two are even.

'Normalization' of PCM audio - subjectively benign?

Reply #49
Quote
I'll leave you audiophiles alone now.

you broke your promise.