
Standard specs inadequate?

I've seen claims that the standard metrics of THD and IMD are irrelevant. Obviously, measuring THD at 1 kHz at a nominal volume is insufficient because distortion often increases at low frequencies and high output levels, especially with gear that uses tubes and transformers. And masking doesn't hide higher-order harmonics farther away from the source frequencies, so the specific makeup of the distortion affects its audibility. But the arguments I've seen that dismiss normal distortion tests don't seem compelling to me, because they use contrived examples that don't occur naturally in audio circuits. Am I missing something?

--Ethan
I believe in Truth, Justice, and the Scientific Method

Standard specs inadequate?

Reply #1
One can always plot THD vs. frequency. Especially with speakers, you normally see a substantial increase in THD with decreasing frequency (and of course with rising SPL). Similarly, you could argue that the FR measured at a given SPL is not enough, because if you measure the FR at different levels you can probably see the effects of mechanical limiting (bass roll-off) at high SPL.
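
For anyone who wants to try this at home, here's a rough sketch of how a THD figure can be pulled from an FFT of a captured sine; repeat it at several frequencies and levels and you have the THD-vs-frequency plot. The numpy setup and the synthetic test tone are just stand-ins for a real capture, and the bin-picking is deliberately crude:
Code:
import numpy as np

def thd_percent(x, fs, f0, n_harmonics=10):
    """Estimate THD of a captured sine at fundamental f0 (rough FFT-bin method)."""
    win = np.hanning(len(x))
    spec = np.abs(np.fft.rfft(x * win))
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)

    def peak(f):
        # take the strongest bin within +/- 2 bins of the target frequency
        i = np.argmin(np.abs(freqs - f))
        return spec[max(i - 2, 0):i + 3].max()

    fund = peak(f0)
    harms = [peak(k * f0) for k in range(2, n_harmonics + 1) if k * f0 < fs / 2]
    return 100.0 * np.sqrt(np.sum(np.square(harms))) / fund

# Synthetic example: 100 Hz sine with a 1% 3rd harmonic added
fs = 48000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 100 * t) + 0.01 * np.sin(2 * np.pi * 300 * t)
print(thd_percent(x, fs, 100))   # prints roughly 1.0 (percent)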

I guess what these people are arguing about is that a single number is not enough to "encode" the distortion profile. Some amp, for example, may have a higher 3rd harmonic than 2nd.
Some magazines judge these patterns/spectra of harmonics - seemingly ignoring the (inaudible) total amount.
What I mean is: What's the point in paying more for an amp with a "perfect" harmonic spectrum when the total distortion is too low to be audible in the first place... So, imho it only makes sense to look at the spectrum if the distortion is borderline audible (in other words: the amp is crap).
"I hear it when I see it."

Standard specs inadequate?

Reply #2
I've seen claims that the standard metrics of THD and IMD are irrelevant.

Not if you state them the right way.
Quote
Obviously, measuring THD at 1 kHz at a nominal volume is insufficient because distortion often increases at low frequencies and high output levels.
Yes, that would be the wrong way. The proper way would be "This amplifier can drive each channel, simultaneously and continuously, into an 8 ohm load with up to X amount of power, from 20 Hz-20 kHz, while maintaining < 0.1% THD + Noise."

Standard specs inadequate?

Reply #3
Also see sites like http://www.rane.com/note145.html

Example:
Quote
Correct: THD (5th-order) less than 0.01%, +4 dBu, 20-20 kHz, unity gain

Wrong: THD less than 0.01%


(Wrong, because without the conditions nobody can know what "less than 0.01%" actually means.)
"I hear it when I see it."

Standard specs inadequate?

Reply #4
I've seen claims that the standard metrics of THD and IMD are irrelevant. Obviously, measuring THD at 1 kHz at a nominal volume is insufficient because distortion often increases at low frequencies and high output levels, especially with gear that uses tubes and transformers. And masking doesn't hide higher-order harmonics farther away from the source frequencies, so the specific makeup of the distortion affects its audibility. But the arguments I've seen that dismiss normal distortion tests don't seem compelling to me, because they use contrived examples that don't occur naturally in audio circuits. Am I missing something?


You mentioned tubes and transformers, and a lot of our thinking about amps still seems to be shaped by them. Tube amps usually had power that rolled off somewhat above 10 kHz and more sharply below 50 Hz, which showed up in tests as increased THD at 20 Hz and 20 kHz. The tubes had to work harder to overcome the greater losses in the transformers.

SS amps are somewhat different. There is no problem building a power amp module that is flat from as low a frequency as you care about up to 10 kHz. The power supply may not be able to back it up continuously, but that is a different question. Above 10 kHz most SS amps have an output inductor for stability and reliability, which amounts to about 0.5 dB loss into 8 ohm resistive loads, 1 dB into 4 ohms, 2 dB into 2 ohms, and so on. It takes about 3 dB of loss at 20 kHz, along with 1 dB at 10 kHz, to be barely audible to a young, critical listener.
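
Just to illustrate the mechanism rather than the exact numbers for any particular amp: the loss from a series output inductor into a resistive load is a plain voltage divider. A quick sketch with a made-up 10 uH inductor (real parts, and real speaker loads, will give different figures):
Code:
import numpy as np

def inductor_loss_db(L_henry, R_load, f_hz):
    # Voltage divider: load resistance against the inductor's reactance at f
    z_l = 2j * np.pi * f_hz * L_henry
    return 20 * np.log10(abs(R_load / (R_load + z_l)))

L = 10e-6  # hypothetical output inductor value; actual values vary from amp to amp
for R in (8, 4, 2):
    print("%d ohms: %.2f dB at 10 kHz, %.2f dB at 20 kHz"
          % (R, inductor_loss_db(L, R, 10e3), inductor_loss_db(L, R, 20e3)))

The trend is the point: every time you halve the load impedance, the high-frequency loss from the same inductor grows sharply.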

And as I've pointed out before, most SS amps have about 0.01-0.02% THD below clipping, and all those specs we see quoting 0.08% or 0.2% or 10% THD are based on an amp that is clipping at least a little.

The bottom line is that the "power for x% THD at 1 kHz" number, while it should probably not be taken by reviewers on faith, is actually a pretty useful spec in maybe 98% or more of all cases. All these bogeymen that people talk about might be there, but they usually aren't, especially if the x in x% is something like 0.02.

Standard specs inadequate?

Reply #5
Thanks guys, this confirms what I've thought.

I guess what these people are arguing about is that a single number is not enough to "encode" the distortion profile.

No, it's more than that. I've seen people claim that THD is totally irrelevant, though I've never heard a compelling explanation. One guy posted a link to this clip that he insists added only 0.1 percent distortion:

http://web.archive.org/web/20051013110043/...oads/phan09.wav

--Ethan
I believe in Truth, Justice, and the Scientific Method

Standard specs inadequate?

Reply #6
No, it's more than that. I've seen people claim that THD is totally irrelevant, though I've never heard a compelling explanation.

You mean totally irrelevant below some threshold or are we talking about nutcases that dismiss measurements altogether?

Those who dismiss them altogether usually have no clue how these measurements are defined or what 0.01%, 0.1%, 1%, or 10% means; some don't even understand the common logarithm, don't know when THD is high enough to become clearly audible (which of course partly depends on the speakers/headphones), or generally have an anti-science, closed-minded attitude (but use computers to post on web forums...).
Either that or they have just invested a considerable sum in equipment which measured "badly".


Quote
One guy posted a link to this clip that he insists added only 0.1 percent distortion

Sounds more like 20% and/or some very, very broken algorithm...

Do you have a link to the original sample as well? edit: found it:
http://web.archive.org/web/20041204084340/...ads/phantom.wav


edit 2:


Red is original minus "0.1% THD processed". Whatever they did to the file, they didn't add harmonic distortion but bursts of broadband noise.
The noise is clearly audible, and on some occasions it is only about 5 dB below the actual signal (10^(-5/20) ≈ 0.56, so well over 50% if you expressed it as a THD-style percentage); at very low and high frequencies the noise is even much higher than the signal.
"I hear it when I see it."

Standard specs inadequate?

Reply #7
One guy posted a link to this clip that he insists added only 0.1 percent distortion:

http://web.archive.org/web/20051013110043/...oads/phan09.wav

Could be it is indeed only ".1% distortion" with regard to a normal-level signal for that device. What they are failing to mention is that we are listening to a signal that was recorded with the peaks hitting only, say, -90 dB [in a region where that device has lots of problems], and the signal was then greatly amplified so we could actually hear what those problems are, since in normal use we'd never notice them.

Standard specs inadequate?

Reply #8
I've seen claims that the standard metrics of THD and IMD are irrelevant. Obviously, measuring THD at 1 kHz at a nominal volume is insufficient because distortion often increases at low frequencies and high output levels, especially with gear that uses tubes and transformers. And masking doesn't hide higher-order harmonics farther away from the source frequencies, so the specific makeup of the distortion affects its audibility. But the arguments I've seen that dismiss normal distortion tests don't seem compelling to me, because they use contrived examples that don't occur naturally in audio circuits. Am I missing something?

--Ethan


THD is more of an S/N issue than actual distortion. You'll find that on good quality gear, THD and S/N rarely differ by more than 2:1 after you convert dB to percent or vice versa. If you look at the definition of I.M. distortion, it's a much more difficult test. It used to be measured as 60 Hz and 7 kHz mixed 4:1; filter out the 60 Hz component and measure the amplitude modulation of the 7 kHz. In a perfect amp the value would be 0. Basically you're using the modulation of the 7 kHz tone as a linearity test, which is very revealing. I contend that the phase modulation of the 7 kHz should be measured as well. In video, I.M. distortion is 'differential gain' and the phase shift is 'differential phase'.
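
If anyone wants to play with that, here's a rough numpy/scipy sketch of the classic SMPTE-style version of the test. The device under test is just a hypothetical soft-clipping stand-in; a real measurement would loop the stimulus through actual hardware:
Code:
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

fs = 96000
t = np.arange(fs) / fs

# SMPTE-style two-tone stimulus: 60 Hz and 7 kHz mixed 4:1 in amplitude
stimulus = 0.8 * np.sin(2 * np.pi * 60 * t) + 0.2 * np.sin(2 * np.pi * 7000 * t)

# Hypothetical device under test: a mildly nonlinear transfer function
dut_output = np.tanh(1.5 * stimulus) / 1.5

# Filter out the 60 Hz component, keep the region around 7 kHz
sos = butter(4, [2000, 12000], btype="bandpass", fs=fs, output="sos")
carrier = sosfilt(sos, dut_output)

# Measure the amplitude modulation of the 7 kHz tone via its envelope
env = np.abs(hilbert(carrier[fs // 10:]))   # skip the filter transient
imd_percent = 100 * (env.max() - env.min()) / (env.max() + env.min())
print("SMPTE IMD (AM of the 7 kHz tone): about %.2f %%" % imd_percent)

A perfect amp would leave the 7 kHz envelope flat, so the modulation depth would come out near zero.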


 

Standard specs inadequate?

Reply #9
THD is more of an S/N issue than actual distortion. You'll find that on good quality gear, THD and S/N rarely differ by more than 2:1 after you convert dB to percent or vice versa. If you look at the definition of I.M. distortion, it's a much more difficult test. It used to be measured as 60 Hz and 7 kHz mixed 4:1; filter out the 60 Hz component and measure the amplitude modulation of the 7 kHz. In a perfect amp the value would be 0. Basically you're using the modulation of the 7 kHz tone as a linearity test, which is very revealing. I contend that the phase modulation of the 7 kHz should be measured as well. In video, I.M. distortion is 'differential gain' and the phase shift is 'differential phase'.


Nonlinear distortion is based on a model of adding errors to the signal by processing it with a polynomial: output = input + a2*input^2 + a3*input^3 + a4*input^4 + ...

For a given total amount of distortion at a given amplitude (referenced to some full-scale value), there are infinitely many sets of coefficients that give rise to the same amount of total error at a point in time.

It is well known that the ear has varying sensitivity to a signal being squared, cubed, raised to the fourth power, and so on. The usual relationship is that higher powers are more audible, which makes immediate sense because higher-order terms produce error products farther away in frequency from the input, and are thus masked less.

I believe that this is the issue that most well-informed people have when they say that THD is a meaningless number.  One total can represent many different sums of different orders of distortion that have different degrees of audibility.
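
To make that concrete, here is a little illustrative sketch of my own (a toy example, not anybody's published metric): two made-up nonlinearities tuned to land on essentially the same THD figure, one putting the error into the 2nd harmonic and the other spreading it across the 3rd, 5th and 7th, where masking helps far less:
Code:
import numpy as np

fs = 48000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 1000 * t)      # 1 kHz test tone

def thd_percent(y, f0=1000, n=10):
    spec = np.abs(np.fft.rfft(y * np.hanning(len(y))))
    bins = [k * f0 * len(y) // fs for k in range(1, n + 1)]
    return 100 * np.sqrt(np.sum(spec[bins[1:]] ** 2)) / spec[bins[0]]

low_order  = x + 0.020 * x ** 2       # error lands in the 2nd harmonic (plus DC)
high_order = x + 0.029 * x ** 7       # error lands in the 3rd, 5th and 7th harmonics

print(thd_percent(low_order), thd_percent(high_order))   # both come out near 1% THD

Both print roughly 1%, yet per the masking argument above the high-order version is far more likely to be audible.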

Read more here: http://www.gedlee.com/distortion_perception.htm

BTW the author has a PhD in an audio-related topic, and if you read his book he covers this very well and correctly. I made sure of that! ;-) But that's no big deal, because this stuff is well known - I can cite a dozen books that cover it the same basic way.

Standard specs inadequate?

Reply #10
Whatever they did to the file, they didn't add harmonic distortion but bursts of broadband noise.

Exactly. I agree that Earl Geddes knows his stuff, and perhaps that's why the Phantom Wave file is no longer on his web site.

--Ethan
I believe in Truth, Justice, and the Scientific Method

Standard specs inadequate?

Reply #11
Whatever they did to the file, they didn't add harmonic distortion but bursts of broadband noise.
To me it sounds and looks like a 5-bit non-dithered recording. At some spots the sound almost or completely mutes at the LSB level. That's quite some distortion.
If a 20 min. LP has a scratch where 9 ticks of 3 ms duration are audible, how much distortion would that be?
Averaging is probably not a good solution in this case.
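
A back-of-the-envelope calculation with the numbers from that example shows why time-averaging buries a defect like this:
Code:
# 9 audible ticks of 3 ms each in a 20 minute LP side
tick_time = 9 * 0.003            # 0.027 s of defect
total_time = 20 * 60             # 1200 s of program
duty = tick_time / total_time    # fraction of time affected
print("fraction of time affected: %.5f%%" % (100 * duty))          # ~0.00225%

# Even if each tick were a full-scale error, its contribution to a
# time-averaged RMS distortion figure would only be sqrt(duty) of full scale:
print("time-averaged RMS contribution: %.2f%% of full scale" % (100 * duty ** 0.5))   # ~0.47%

So something everybody can hear comes out at well under half a percent in an averaged figure.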

Standard specs inadequate?

Reply #12
Is it practically possible to characterize (measure) the behaviour of an amplifier in such a way that the model (parameters) contains all perceptually relevant characteristics of the device (with some confidence)?

If, say, a Volterra filter of some order could describe e.g. 98% of the amplifiers out there with sufficient precision that people could not distinguish the real thing from a simulation in an ABX test, then the problem is only 1) how much effort it takes to train the model (parameters), and 2) how we present the critical aspects of the model in a comprehensible fashion.
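
As a toy illustration of what I mean (a memoryless polynomial is just the degenerate, zero-memory special case of a Volterra series, so this ignores everything frequency-dependent), fitting such a model to measured input/output data is an ordinary least-squares problem:
Code:
import numpy as np

# Hypothetical "amplifier": mild soft clipping (stands in for measured I/O data)
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 20000)                               # test stimulus
y = np.tanh(1.2 * x) + 1e-4 * rng.standard_normal(len(x))   # measured output + noise

# Fit a 5th-order memoryless polynomial model: y ~ a1*x + a2*x^2 + ... + a5*x^5
order = 5
basis = np.vstack([x ** k for k in range(1, order + 1)]).T
coeffs, *_ = np.linalg.lstsq(basis, y, rcond=None)
print("fitted coefficients:", np.round(coeffs, 4))

# How far below the output signal is the model's residual error?
residual = y - basis @ coeffs
err_db = 20 * np.log10(np.std(residual) / np.std(y))
print("model error relative to output: %.1f dB" % err_db)

Whether the parameters of a full Volterra model (with its frequency-dependent kernels) can then be presented in a way a buyer can actually read is of course the second half of my question.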

-h

Standard specs inadequate?

Reply #13
Is it practically possible to characterize (measure) the behaviour of an amplifier in such a way that the model (parameters) contains all perceptually relevant characteristics of the device (with some confidence)?

If, say, a Volterra filter of some order could describe e.g. 98% of the amplifiers out there with sufficient precision that people could not distinguish the real thing from a simulation in an ABX test, then the problem is only 1) how much effort it takes to train the model (parameters), and 2) how we present the critical aspects of the model in a comprehensible fashion.


A single channel signal can only have linear distortion, nonlinear distortion, and noise added to it.

In this context noise can either be random, pseudo random, or some unrelated signal such as hum or a local radio station.  Noise in this context is any interfering signal that is uncorrelated with the signal.

Linear distortion may be an unfamiliar term to some, but it is just frequency response and phase response variation. Linear distortion is generally time-invariant, though in some equipment it can vary over time.

Nonlinear distortion can be either AM distortion or FM (or phase) distortion.

The above contains a full description or model of all extant and therefore all perceptually relevant distortion and noise.

We have various traditional schemes for measuring these things. Our measurement techniques resolve distortion and noise into parameters in the amplitude and frequency domains that can be correlated with reliable perception. While these parameters and values are not necessarily simply related to human perception, they can be related to it.

The relationship can be expressed simply if we do not demand precision. For example, it is generally agreed that a component whose noises and distortions are all 100 dB down is guaranteed to be perceptually insignificant, i.e. sonically transparent. But this is an overkill number in all cases; the actual thresholds lie somewhere between 30 dB and 60-80 dB down. However, 100 dB can be achieved in fairly inexpensive equipment: the electronics, including the converters, in a mid-priced AVR or a $25 audio interface can reach -100 dB performance.

The executive summary answer is "yes": it can be and has been achieved without resorting to complex or unfamiliar mathematics. There is an AES paper by the late, great Gene Czerwinski and colleagues that attempts to apply Volterra-series methods to the problem of characterizing the nonlinear performance of audio gear, but it does not appear to have changed how people generally characterize audio equipment.

Gene Czerwinski, Alexander Voishvillo, Sergei Alexandrov, Alexander Terekhov, "Multitone testing of sound system components - Some results and conclusions, Part 1: History and theory", JAES, Vol. 49, No. 11, 2001 November, pp. 1011-1048.
"Part 2: Modeling and Application", JAES, Vol. 49, No. 12, 2001 December, pp. 1181-1192.


Standard specs inadequate?

Reply #14
A single channel signal can only have linear distortion, nonlinear distortion, and noise added to it.
...
The relationship can be expressed simply if we do not demand precision. For example, it is generally agreed that a component whose noises and distortions are all 100 dB down is guaranteed to be perceptually insignificant, i.e. sonically transparent. But this is an overkill number in all cases; the actual thresholds lie somewhere between 30 dB and 60-80 dB down. However, 100 dB can be achieved in fairly inexpensive equipment: the electronics, including the converters, in a mid-priced AVR or a $25 audio interface can reach -100 dB performance.

But not in loudspeakers, at any price. Or for lossy coding, I guess.

Thus establishing tight(er) limits on audible transparency (or degrees of minor annoyance) seems like a worthwhile goal (although quite possibly hard).

-k

Standard specs inadequate?

Reply #15
A single channel signal can only have linear distortion, nonlinear distortion, and noise added to it.
...
The relationship can be expressed simply if we do not demand precision. For example, it is generally agreed that a component whose noises and distortions are all 100 dB down is guaranteed to be perceptually insignificant, i.e. sonically transparent. But this is an overkill number in all cases; the actual thresholds lie somewhere between 30 dB and 60-80 dB down. However, 100 dB can be achieved in fairly inexpensive equipment: the electronics, including the converters, in a mid-priced AVR or a $25 audio interface can reach -100 dB performance.

But not in loudspeakers, at any price. Or for lossy coding, I guess.


I don't know what you mean. Loudspeakers can for sure be characterized this way; they aren't that different from modern electronics in terms of their processing. The big difference used to be delays, but now even a lot of modern electronics has delays. Also, there are no speakers that put all spurious responses 100 dB or more down; 60-70 dB is wonderful for them.

Lossy coders don't generally put all spurious responses 100 dB or more down either - something about 16-bit arithmetic bottoming out before that.

Also, please do not interpret these comments to mean that I think that conventional and traditional technical tests are a valid way to evaluate coders. Listening tests still seem to rule.

Quote
Thus establishing tight(er) limits on audible transparency (or degrees of minor annoyance) seems like a worthwhile goal (although quite possibly hard).


Tighter than 100 dB? No way. The problem with 100 dB is that it is usually at least 20+ dB of overkill, often even more than that.