
AES 2009 Audio Myths Workshop

Reply #175
btw, I'm still hoping for a discussion of the "four measurements" proposed by Ethan, carried out here at HA under HA rules.


I believe they can be found in this video at 21:34

They are:
  • Frequency Response
  • Distortion - THD, IMD, aliasing "birdies"
  • Noise - hiss, hum & buzz, vinyl crackles
  • Time-based Errors - wow, flutter, jitter


[EDIT] Corrected the time to be more accurate [/EDIT]
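To make the "distortion" bullet concrete, here is a minimal toy sketch (mine, not from the video; it assumes Python with numpy, and the "device" is a made-up soft clipper) of how THD is typically estimated - harmonic energy relative to the fundamental:

```python
# Hypothetical THD sketch: drive a stand-in "device" (a soft clipper)
# with a pure tone and compare harmonic energy to the fundamental.
import numpy as np

fs = 48000                     # sample rate, Hz
f0 = 1000                      # test tone, Hz
n = fs                         # one second -> 1 Hz bins, f0 lands on a bin
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f0 * t)

y = np.tanh(2 * x) / np.tanh(2)          # stand-in device under test

spec = np.abs(np.fft.rfft(y)) / (n / 2)  # bin index == frequency in Hz
fund = spec[f0]
harm = spec[[k * f0 for k in range(2, 6)]]
thd = np.sqrt(np.sum(harm ** 2)) / fund
print(f"THD = {100 * thd:.2f}%")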

AES 2009 Audio Myths Workshop

Reply #176
OK, I'll bite.

It says "the four audio parameters", not "measurements".

I think frequency response needs to explicitly include amplitude and phase.

I think it's probably true to say that any audio "fault" can be characterised as falling within one of these categories, or even more accurately, that any underlying fault will cause an effect that falls into one of those categories. Even so, I'm not sure where reverb would fall into this. It might show up on a frequency response measurement. We could argue about frequency response vs impulse response - but that's a pointless argument.
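(To illustrate why that argument is pointless - a toy sketch, assuming Python with numpy/scipy and a made-up second-order low-pass: the FFT of the impulse response is the complex frequency response, amplitude and phase included. The two are the same information.)

```python
# Sketch: the impulse response and the complex frequency response carry
# the same information; the FFT of one is the other, phase included.
import numpy as np
from scipy.signal import butter, lfilter

fs = 48000
b, a = butter(2, 2000 / (fs / 2))          # example system: 2 kHz low-pass

impulse = np.zeros(4096)
impulse[0] = 1.0
h = lfilter(b, a, impulse)                 # impulse response

H = np.fft.rfft(h)                         # complex frequency response
freqs = np.fft.rfftfreq(len(h), 1 / fs)
mag_db = 20 * np.log10(np.abs(H) + 1e-12)  # amplitude response
phase = np.unwrap(np.angle(H))             # phase response, radians
print(freqs[171], mag_db[171], phase[171]) # ~2 kHz bin: about -3 dB
```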

Far more important IMO is that this list implies an oversimplification that doesn't hold in the real world - just because the effect of any "fault" falls into one of these four categories doesn't mean there are four measurements that can catch any fault. Ethan doesn't say this of course - there are two specific measurements listed under the single category of distortion, for example.

My point is this: we generally use measurements tailored to the specific faults we expect to find - "tailored" both in terms of revealing them, and in terms of giving us data in a domain and form that makes some sense, or reveals something useful. I really wonder if we can define a set of measurements which would catch every possible fault - both now, and in the future. I doubt it, but I'm fairly sure that any comprehensive attempt to get close will need more than four.


Here's a practical example: put lossyWAV into a stand-alone box, including a slight delay which is itself very slightly varying in a random way (i.e. an inaudible amount of flutter). What measurements will characterise that black box properly?

If we can leave the "I must be right / you must be wrong" level of argument at the door, it would be much appreciated. This genuinely interests me, and it's more of a challenge than people like to admit - especially when they're arguing with audiofools who want to turn it all into black magic (which it isn't). But let's have a grown-up discussion please.

Cheers,
David.

AES 2009 Audio Myths Workshop

Reply #177
I'm still hoping for a discussion of the "four measurements" proposed by Ethan, carried out here at HA under HA rules.

Me too!
I believe in Truth, Justice, and the Scientific Method

AES 2009 Audio Myths Workshop

Reply #178
I think it's probably true to say that any audio "fault" can be characterised as falling within one of these categories, or even more accurately, that any underlying fault will cause an effect that falls into one of those categories.

Yes, that's a good way to put it.

Quote
I'm not sure where reverb would fall into this. It might show up on a frequency response measurement. We could argue about frequency response vs impulse response - but that's a pointless argument.

I don't consider reverb effects in my four parameters because, at heart, reverb is an "external effect" that happens acoustically in enclosed spaces. Yes, it can be emulated by hardware and software devices, so you can still assess frequency response and distortion.

Quote
put lossyWAV into a stand-alone box, including a slight delay which is itself very slightly varying in a random way (i.e. an inaudible amount of flutter). What measurements will characterise that black box properly?

I now realize I should have added a disclaimer in my video about lossy compression. My Audiophoolery and Audiophile beliefs articles, on which my video is based, mention excluding lossy compression:

Quote
tests have shown repeatedly that most modern gear has a frequency response that's acceptably flat (within a fraction of a dB) over the entire audible range, with noise, distortion, and all other artifacts well below the known threshold of audibility. (This excludes products based on lossy compression such as MP3 players and satellite radio receivers.)

I'll let others who are more expert than me explain what is "left out" in lossy files. (I'll guess it's frequency response that changes dynamically.) But clearly, a delay of any type will fall under time-based errors.

--Ethan
I believe in Truth, Justice, and the Scientific Method

AES 2009 Audio Myths Workshop

Reply #179
D) pretty much all modern equipment has published specs that are testably well-below audibility for distortion, euphonic or otherwise;
This is the falsest part of your logic.


I guess if you can't support your unsupported assertion with any relevant facts, then I don't have to take the time to debunk it.

AES 2009 Audio Myths Workshop

Reply #180
The first problem here is that the baseline for evaluating digital processing is not zero noise.


But you said there was zero noise, which is untrue, and then posted five paragraphs to try to dig yourself out.


I said that there was zero added noise.  I can't believe you're holding me responsible for noise that came in the input terminals.

 

AES 2009 Audio Myths Workshop

Reply #181
btw, I'm still hoping for a discussion of the "four measurements" proposed by Ethan, carried out here at HA under HA rules.


I believe they can be found in this video at 21:34

They are:
  • Frequency Response
  • Distortion - THD, IMD, aliasing "birdies"
  • Noise - hiss, hum & buzz, vinyl crackles
  • Time-based Errors - wow, flutter, jitter


[EDIT] Corrected the time to be more accurate [/EDIT]



Well Ethan, since your list of 4 is different from my list of 4, we could discuss *that*.

My opening shot would be that your list is incomplete since it doesn't explicitly mention static phase shift, and it splits up nonlinear distortion into two separate categories.

AES 2009 Audio Myths Workshop

Reply #182
I said that there was zero added noise.


That's also not true. Almost every modification of digital samples adds noise (volume changes by multiples of 2 being one exception). The amount is very low, but not zero. There was a thread recently about exactly how low.
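A toy sketch of the point (my own numbers, not from the thread; it assumes 16-bit-style integer samples with naive rounding and no dither): a gain that keeps samples on integer values stays exact, while an arbitrary gain leaves a tiny but nonzero rounding error.

```python
# Sketch: re-quantizing after a gain change adds a small error; a gain
# that keeps samples on integer values (e.g. 2.0) adds none at all.
import numpy as np

rng = np.random.default_rng(0)
x = rng.integers(-2**14, 2**14, size=1_000_000)   # integer samples

def added_noise(gain):
    ideal = x.astype(np.float64) * gain    # high-precision reference
    stored = np.round(ideal)               # back to integer samples
    err = np.mean((stored - ideal) ** 2)
    sig = np.mean(ideal ** 2)
    return "exact" if err == 0 else f"{10 * np.log10(err / sig):.1f} dB"

print("gain 2.0:", added_noise(2.0))   # doubling: still integers, exact
print("gain 1.1:", added_noise(1.1))   # rounding noise, roughly -90 dB
```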

AES 2009 Audio Myths Workshop

Reply #183
your list is incomplete since it doesn't explicitly mention static phase shift

Understand that my "list" is meant mainly as broad categories of the parameters that affect audio reproduction. Static phase shift would fall under time-based errors, since some frequencies exit the output terminals at a different time than other frequencies.
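To make that concrete, a toy sketch (my own construction, Python/scipy; the filter is hypothetical): group delay, tau(f) = -d(phase)/d(omega), expresses phase shift as a per-frequency transit time, which is why it can file under time-based errors.

```python
# Sketch: phase shift read as a time-based quantity via group delay.
# Frequencies with different tau exit the output at different times.
import numpy as np
from scipy.signal import butter, group_delay

fs = 48000
b, a = butter(2, 2000 / (fs / 2))       # example 2 kHz low-pass
w, gd = group_delay((b, a), fs=fs)      # gd is in samples
i = np.argmin(np.abs(w - 100))          # look near 100 Hz
print(f"group delay near 100 Hz: {gd[i] / fs * 1e3:.3f} ms")
```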

Quote
it splits up nonlinear distortion into two separate categories.

There are even more than that if we include aliasing and jitter and truncation distortion, in addition to IM and THD. And of course there's overlap, since wherever you have THD you also have IMD (except maybe in contrived cases). But all of these still fall broadly under "distortion" IMO, unless you can think of a better classification.

The next step is probably to devise a list of all the subsets of each parameter. I listed a lot in my video and in my Audiophoolery article, but there are certainly more. For example, I omitted crosstalk as a subset of noise in the original article, so I added that just yesterday. But I'm glad to discuss the broad category headings if you can think of any I missed. Or we can discuss other ways to think of this. If not four broad categories with subsets, what makes more sense? Again, my point is not to list all the specific measurements needed for audio gear, but just to define what parameters affect the sound.

--Ethan
I believe in Truth, Justice, and the Scientific Method

AES 2009 Audio Myths Workshop

Reply #184
I think it's probably true to say that any audio "fault" can be characterised as falling within one of these categories, or even more accurately, that any underlying fault will cause an effect that falls into one of those categories.


I agree with that in principle. We can call them categories of faults, or categories of errors.

One of the mistakes made by people who misunderstand Ethan's list is to equate categories of fault with measurements. The basic misapprehension these critics have is that there need only be one measurement to fully characterize a given kind of fault, which is not exactly true.

In fact you can measure common instances of all four kinds of faults with just one measurement (e.g. multitone, as sketched below).  The reverse is also true - it can take more than one measurement to characterize complex faults.
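Here is a toy version of the multitone idea (my own sketch, assuming Python with numpy; the "device" is a made-up soft clipper): the tone bins report frequency response, and anything that shows up off those bins is distortion or noise.

```python
# Toy multitone sketch: tones on known 1 Hz bins give the frequency
# response; energy on any other bin is distortion or noise.
import numpy as np

fs = 48000
n = fs                                        # 1 Hz bin spacing
bins = [97, 211, 499, 997, 2003, 4999, 9973]  # prime-numbered bins
t = np.arange(n) / fs
x = sum(np.sin(2 * np.pi * k * t + 0.1 * k) for k in bins) / len(bins)

y = np.tanh(1.5 * x)                          # stand-in device under test

spec = np.abs(np.fft.rfft(y)) / (n / 2)
response = spec[bins]                         # frequency response points
mask = np.ones(spec.size, dtype=bool)
mask[bins] = False
residual = np.sqrt(np.sum(spec[mask] ** 2))   # distortion + noise
print("tone levels:", np.round(response, 3))
print("off-bin residual:", f"{residual:.4f}")
```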

Quote
Even so, I'm not sure where reverb would fall into this.


If reverb is due to a linear process, and it usually is, then it is a form of linear distortion.  Reverb is usually the result of delaying the signal, possibly filtering it with a linear filter, and then linearly adding it back to itself. The delay is a special case of phase shift.

Quote
It might show up on a frequency response measurement. We could argue about frequency response vs impulse response - but that's a pointless argument.


Reverb does show up in an FR measurement, usually as some kind of comb-filtering effect.
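A toy illustration of that comb effect (made-up numbers, Python/numpy): one delayed copy added back at half level notches the magnitude response at regular intervals.

```python
# Sketch: direct sound plus one ~5.3 ms reflection at half level.
# The magnitude response combs, with notches at odd multiples of
# fs / (2 * delay) = 93.75 Hz here.
import numpy as np

fs = 48000
delay = 256                     # ~5.3 ms reflection, in samples
h = np.zeros(4096)
h[0] = 1.0                      # direct path
h[delay] = 0.5                  # delayed copy at half level

H = np.abs(np.fft.rfft(h))
freqs = np.fft.rfftfreq(len(h), 1 / fs)
notches = np.sort(freqs[np.argsort(H)[:4]])
print(notches)                  # [ 93.75  281.25  468.75  656.25]
```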

Quote
Far more important IMO is that this list implies an oversimplification that doesn't hold in the real world - just because the effect of any "fault" falls into one of these four categories doesn't mean there are four measurements that can catch any fault.



Quote
Ethan doesn't say this of course - there are two specific measurements listed under the single category of distortion, for example.


This is why I list 2 different kinds of distortion, linear and nonlinear.  I further gave examples of both kinds of distortion.  There are actually two kinds of linear distortion - amplitude modulation distortion and frequency modulation distortion. THD and IM measure amplitude modulation nonlinear distortion, while jitter, flutter and wow measure frequency modulation nonlinear distortion.
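A toy illustration of the frequency-modulation side of that split (my own numbers, Python/numpy): a 3 kHz tone whose timebase wobbles at 6 Hz - a caricature of flutter - grows sidebands spaced 6 Hz around the carrier.

```python
# Sketch: flutter as frequency modulation. A 3 kHz tone whose phase
# wobbles at 6 Hz grows sidebands at 3000 +/- 6 Hz.
import numpy as np

fs = 48000
n = fs                                   # 1 Hz bins
t = np.arange(n) / fs
carrier, wobble, beta = 3000, 6, 0.05    # small modulation index
x = np.sin(2 * np.pi * carrier * t + beta * np.sin(2 * np.pi * wobble * t))

spec = 20 * np.log10(np.abs(np.fft.rfft(x)) / (n / 2) + 1e-12)
for f in (carrier - wobble, carrier, carrier + wobble):
    print(f"{f} Hz: {spec[f]:6.1f} dBFS")  # sidebands ~32 dB below carrier
```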

Quote
My point is this: we generally use measurements tailored to the specific faults we expect to find - "tailored" both in terms of revealing them, and in terms of giving us data in a domain and form that makes some sense, or reveals something useful.


That is more habit and custom than necessity.  Our ability to analyze signals shot up rapidly when we started doing the analysis with computers.  If you study the more recent literature of audio measurements, there have been a number of papers discussing newer approaches. Papers by Gene Czerwinski and Richard Cabot come quickly to mind.

Quote
I really wonder if we can define a set of measurements which would catch every possible fault - both now, and in the future.


The answer is generally yes.  Old relics like THD and IM are artifacts of the days when only very simple equipment was available to generate test signals and analyze them.

A great deal can be determined with no specific test signal at all - there is readily available software that analyzes both linear and nonlinear distortion by automatically developing linear and nonlinear models of the system under test. Mathematically, this is called system identification.

Several programs measure linear transfer functions, including SMAART.

The essence of Klippel's speaker distortion measurement system is the mathematical process of parameter identification: comparing observations of the operation of the real system and of a model under various test signals.  The software just tunes the parameters of the model until it works like the real system.
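Here is a toy version of the identification principle (not Klippel's actual algorithm - just a least-squares FIR fit in Python/numpy to show the idea): recover a linear model of an unknown system from its input and output alone, with no dedicated test tone.

```python
# Toy system identification: recover an FIR model of an unknown linear
# system from its input and output alone - no dedicated test signal.
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(20000)              # any sufficiently rich input
true_h = np.array([0.9, 0.3, -0.2, 0.05])   # the "unknown" system
y = np.convolve(x, true_h)[:len(x)] + 1e-3 * rng.standard_normal(len(x))

taps = 4
# Each output sample is a dot product of recent inputs with the taps:
X = np.column_stack([np.roll(x, k) for k in range(taps)])[taps:]
h_est, *_ = np.linalg.lstsq(X, y[taps:], rcond=None)
print(np.round(h_est, 3))                   # ~ [0.9, 0.3, -0.2, 0.05]
```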

Quote
Here's a practical example: put lossyWAV into a stand alone box, including a slight delay which is itself very slightly varying in a random way (i.e. an inaudible amount of flutter). What measurements will characterise that black box properly?


The random delay can be measured by the usual means for measuring FM or phase distortion. I am unfamiliar with lossyWAV.  However, I reject this line of argumentation because it is an intellectual game that sheds little light on the problems we need to solve in the real world.

Quote
If we can leave the "I must be right / you must be wrong" level of argument at the door, it would be much appreciated.


Well, Doctor, cure yourself. You played that game a number of times in just this post. You made unfounded assertions.

Quote
This genuinely interests me, and it's more of a challenge than people like to admit - especially when they're arguing with audiofools who want to turn it all into black magic (which it isn't). But let's have a grown-up discussion please.


Well then leave the tricks, riddles, and unfounded assertions at the door.

AES 2009 Audio Myths Workshop

Reply #185
your list is incomplete since it doesn't explicitly mention static phase shift

Understand that my "list" is meant mainly as broad categories of the parameters that affect audio reproduction. Static phase shift would fall under time-based errors, since some frequencies exit the output terminals at a different time than other frequencies.


The obvious cleaving point for distortion is linear distortion versus nonlinear distortion.  Linearity is well-defined and understood by many. One way to look at the situation is to say that distortion is anything that changes the shape of a wave, as opposed to simply changing the size of the wave, which is called amplification and is not distortion. Linear distortion is any distortion that obeys the rules of linear functions. Nonlinear distortion is any distortion that does not obey the rules of linear functions. Short, sweet, and intuitive.  It turns out that frequency response curves describe what happens when there is linear distortion. There is always a change to the shape of a wave when it passes through something with a nonflat frequency response or anything that has phase shift. However, if something lacks nonlinear distortion, then if you put in twice as much, you get twice as much out.

Another way to divide up linear and nonlinear distortion is to observe that applying linear distortion to a signal never adds any new frequencies to the signal. Linear distortion only changes the amplitude and phase of the signals that are already there.  Conversely, nonlinear distortion always adds new frequencies - we often call them sidebands or sum and difference tones.
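A minimal sketch of that "new frequencies" test (Python with numpy/scipy; the filter and clipper are made up): the linear box leaves only the input tone's bin occupied, while the clipper sprouts harmonics that were never in the input.

```python
# Sketch: linear distortion adds no new frequencies; a nonlinearity does.
import numpy as np
from scipy.signal import butter, lfilter

fs = 48000
n = fs                                   # analyze one second: 1 Hz bins
t = np.arange(2 * n) / fs
x = np.sin(2 * np.pi * 1000 * t)

b, a = butter(4, 3000 / (fs / 2))
linear = lfilter(b, a, x)[n:]            # filtered, transient discarded
nonlinear = np.clip(x[n:], -0.7, 0.7)    # hard clipped

def loud_bins(sig, floor_db=-60):
    spec = 20 * np.log10(np.abs(np.fft.rfft(sig)) / (n / 2) + 1e-12)
    return np.flatnonzero(spec > floor_db)

print("linear:   ", loud_bins(linear))      # [1000] - no new tones
print("nonlinear:", loud_bins(nonlinear))   # [1000 3000 5000 ...] Hz
```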

Quote
Quote
it splits up nonlinear distortion into two separate catagories.

There are even more than that if we include aliasing and jitter and truncation distortion, in addition to IM and THD. And of course there's overlap, since wherever you have THD you also have IMD (except maybe in contrived cases). But all of these still fall broadly under "distortion" IMO, unless you can think of a better classification.


No, jitter is just a subset of nonlinear distortion.  It follows the rule of adding new frequencies to the source signal.


Quote
The next step is probably to devise a list of all the subsets of each parameter. I listed a lot in my video and in my Audiophoolery article, but there are certainly more. For example, I omitted crosstalk as a subset of noise in the original article, so I added that just yesterday.


From the standpoint of the signal that is actually contaminated by crosstalk, crosstalk is one of many interfering signals. It is something like noise, only it is deterministic.

Quote
But I'm glad to discuss the broad category headings if you can think of any I missed. Or we can discuss other ways to think of this. If not four broad categories with subsets, what makes more sense? Again, my point is not to list all the specific measurements needed for audio gear, but just to define what parameters affect the sound.


It's really a matter of picking the sets and subsets and giving them logical names. In fact this has been going on for decades. If we wanted to, we could just inform ourselves about the existing literature of audio and raise ourselves up on the shoulders of giants.

For example, the difference between linear and nonlinear distortion was described in detail in a paper by Preis, back in 1976.

D. Preis, "Linear Distortion," J. Audio Eng. Soc., vol. 24, pp. 346–367, June 1976.

In turn, this paper has a bibliography going back years and years.

And for a more modern discussion of the same general topic:

E. R. Geddes and L. W. Lee, "Audibility of Linear Distortion with Variations in Sound Pressure Level and Group Delay," AES Convention 121, October 2006.

AES 2009 Audio Myths Workshop

Reply #186
I said that there was zero added noise.


That's also not true.


It is true for a useful but not totally complete range of operations.  This includes some immensely valuable biggies such as storage and transmission of data.

Furthermore, you can do whatever you want and add as little noise as you want by simply increasing the width of the data path for the calculations. 

Quote
Almost every modification (ex volume changes by multiples of 2) of digital samples add noise. The amount is very low, but not zero. There was a thread recently about exactly how low.


Not true for a range of operations, including the types of editing that were all we had with magnetic tape.

Why obsess over 0.00001 dB increases in a noise floor?
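To put a number on the width argument (a toy sketch with assumed naive rounding and no dither; my numbers, not from the thread): the same 1.1x gain, re-quantized at wider and wider word sizes, adds roughly 6 dB less noise per extra bit of data path.

```python
# Sketch: the same gain change, quantized to wider and wider words.
# Each extra bit of data path pushes the added noise down ~6 dB.
import numpy as np

rng = np.random.default_rng(2)
x = rng.integers(-2**14, 2**14, size=100_000).astype(np.float64)

def added_noise_db(path_bits):
    scale = 2.0 ** (path_bits - 16)        # widen the fixed-point path
    ideal = x * 1.1 * scale
    err = (np.round(ideal) - ideal) / scale
    return 10 * np.log10(np.mean(err ** 2) / np.mean((x * 1.1) ** 2))

for bits in (16, 24, 32):
    print(f"{bits}-bit path: {added_noise_db(bits):6.1f} dB added noise")
```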


AES 2009 Audio Myths Workshop

Reply #187
I propose that we lump all 'distortions, noise, etc' under one roof, that of "phase uncompensated, frequency shaping uncompensated, mean-square-error".

And if that's good to 120dB, then I really don't give two (*&(&*.
-----
J. D. (jj) Johnston


AES 2009 Audio Myths Workshop

Reply #189
Furthermore, you can do whatever you want and add as little noise as you want by simply increasing the width of the data path for the calculations.


That's true, but it's not zero, and zero is what you said.

Not true for a range of operations including the types of editing that were all that we had with magnetic tape.


Simple volume changes and mixing already add more than zero noise.

Why obsess over 0.00001 dB increases in a noise floor?


Nobody is obsessed with negligible amounts of added noise. 2bdecided just made a good point: don't feed the trolls by making false generalizations. Small is not equal to zero, and there is no need to call it zero when you are trying to fight off other people for making false generalizations.

I know we both understand the issues very well indeed - you probably even better than me - but if you're going to say things that are simply untrue, and then write five paragraphs which don't include the words "I wrote the wrong thing" (because you're incapable of ever being wrong, even when you are), you're going to give these audiofools a field day. And probably turn HA into r.a.o in the process.

AES 2009 Audio Myths Workshop

Reply #190
dwoz, you seem very eager to take generalizations we are making and try to turn them into absolutes. Of course there's crappy hardware you can find where noise floors are going to be audible, or THD is audible, or fault X is present. The big problem is that a lot of the alleged differences between hardware evaporate when people actually test it. Furthermore, just because a difference is inaudible doesn't mean it's not measurably different either.



Thank you, Canar.  Whew, that was a lot of work!  I appreciate your patience with me.  This stuff can really get you wrapped around the axle if you're not careful!


Really, is this all you were waiting for? 

That modern hardware within certain functional classes needn't sound different, but might sound different, depending on quality of implementation? 

That it's best to determine such difference with controlled listening tests, rather than assume X and Y sound different?

That measured differences aren't always audible?






AES 2009 Audio Myths Workshop

Reply #191
You're making some dangerous assumptions here.

First, you're assuming that all components will always be operated strictly within their linear region, which in pro audio is not always true.


Let's take a specific example - op-amps in an analog mixer.  When they are operated out of their linear region, in clipping, the result is very bad sound.  Op-amps have very low distortion up to clipping because of high feedback.  But as soon as clipping occurs, all bets are off because the clipping is so abrupt.  So clipping an op-amp is a pretty terrible error, but may go unnoticed if it only occurs for a brief instant.

Second, you're assuming that all production examples of a specific part will always adhere strictly to their published spec. In reality this is NEVER the case - there is always a tolerance range. NO GIVEN PART EVER EXACTLY MATCHES THE SPEC SHEET.


Not sure where you got that one.  It wasn't from my post, as I said nothing even resembling that.  Spec sheets give a range of values, so there is literally no concept of "EXACTLY MATCHES THE SPEC SHEET". In the absence of a failed part, parts should be within the tolerances specified by the spec sheet, provided they are tested in the same way as the spec sheet specifies.

Not all analog mixers are op-amp based. Many of the better ones use discrete circuitry. In fact, one of the primary reasons for the popularity of much "vintage" gear is that it does NOT contain opamps. Any gear that operates in Class A does not, by definition, use opamps because there ain't no such thing as a "Class A opamp".

AES 2009 Audio Myths Workshop

Reply #192
So, basically, when Arnold says "linear distortion" then says "non-linear distortion", he means the same thing?

Are you being purposely obtuse or are you really not that smart?

This isn't exactly rocket science you know!

Linear distortion is any change in the signal that is not level dependent.

Non-linear distortion IS level dependent.


Wait a minute.

Distortion is, by definition, non-linearity.

So you've got "linear non-linearity" and "non-linear non-linearity"?

If the non-linearity does not change regardless of level it's linear? Isn't that an oxymoron?

Methinks that what Arnold said was probably not exactly what Arnold meant to say.

AES 2009 Audio Myths Workshop

Reply #193
I propose that we lump all 'distortions, noise, etc' under one roof, that of "phase uncompensated, frequency shaping uncompensated, mean-square-error".

And if that's good to 120dB, then I really don't give two (*&(&*.


Keeping frequency response variations 120 dB down is pretty much impossible in the analog domain.

AES 2009 Audio Myths Workshop

Reply #194
I propose that we lump all 'distortions, noise, etc' under one roof, that of "phase uncompensated, frequency shaping uncompensated, mean-square-error".

And if that's good to 120dB, then I really don't give two (*&(&*.


Keeping frequency response variations 120 dB down is pretty much impossible in the analog domain.



I was way more bothered by the way he ascribed a quasi-normal distribution to deterministic and periodic causation.  My leg is sore!  Let go!

AES 2009 Audio Myths Workshop

Reply #195
I'm so impressed by the words you are using. You must be very smart.

AES 2009 Audio Myths Workshop

Reply #196
First, you're assuming that all components will always be operated strictly within their linear region, which in pro audio is not always true.


Not really. The actual assumption is that if someone wants to operate all components in a useful signal chain in their linear region, that is generally practical and feasible.

Everybody who understands gain staging understands that merely operating all components in a useful signal chain in their linear region can take a little planning and skill.  The ability of unskilled or careless people to make big messes can't be overstated.

That's not what I was referring to. In professional audio production, certain types of gear are frequently run outside their linear operating area to produce certain effects. This is most common in, but by no means limited to, systems that contain electromagnetic components such as transformers and audio tape, which are frequently operated outside their linear area to produce saturation effects that include compression and euphonious harmonic distortion.

So, in fact, "The actual assumption is that if someone wants to operate all components in a useful signal chain in their linear region, that is generally practical and feasible" is erroneous in the context of real-world audio production - that may not give the desired effect.

Audio production is a very different matter than audio reproduction - it is best to bear the differences in mind.

AES 2009 Audio Myths Workshop

Reply #197
I'm so impressed by the words you are using. You must be very smart.



nah. not even close.


If I was smart, I'd be filthy rich, and you'd be flaming one of my personal assistants instead of me.

AES 2009 Audio Myths Workshop

Reply #198
I propose that we lump all 'distortions, noise, etc' under one roof, that of "phase uncompensated, frequency shaping uncompensated, mean-square-error".

And if that's good to 120dB, then I really don't give two (*&(&*.


Keeping frequency response variations 120 dB down is pretty much impossible in the analog domain.


True.
So, let's even give you ±0.1 dB from the original. How's that?
-----
J. D. (jj) Johnston

AES 2009 Audio Myths Workshop

Reply #199
As I've said before, I think it is a fundamental mistake to conflate the professional audio production market with the audiophile market...the needs and goals are entirely orthogonal to each other.

Your thoughts?

If by orthogonal you mean that professional audio equipment must work well and audiophile equipment is used to pick people's pockets, then I quite agree. Other than that, there should NOT be any great difference between them.


Not exactly.

Consumer audio equipment ("audiophile" or not) is intended for the accurate reproduction of a prerecorded work. Well, hopefully accurate, anyway.

Professional recording equipment is intended for the euphonious creation of an audio work of art, which is not at all the same thing and may, in fact, entail different things in different circumstances.

It's rather like the difference between a camera and slide projector and an artist's palette and set of paint brushes.