
Topic: AES 2009 Audio Myths Workshop

AES 2009 Audio Myths Workshop

Reply #200
But I'm wondering.  You seem to be making a very sweeping and absolute statement here.  The way I'm reading this, you aren't saying anything about audibility or "as far as is reasonably necessary", you're saying flat-out that they are linear, period.  Is that a correct reading of your intent?

Yes. Mathematicians are in it for the sweeping and absolute statements. The math doesn't lie (I've shown you the proof) and the processor does the math correctly. The only place audibility is involved is in choosing sample rate (bandwidth) and bit resolution (S/N ratio).

But errors can creep into the math because computational systems are not perfect, and additional errors can be introduced in the conversion process, which is also not perfect.

So while you may be correct in theory, reality may differ. This is an ongoing problem when trying to discuss audio with mathematicians...
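
As an aside, the two audibility-related choices mentioned above (sample rate and bit depth) follow well-known rules of thumb that are easy to check numerically. A minimal sketch, purely illustrative and not from either poster; the function names are made up:

Code:
# Sample rate sets the usable bandwidth (Nyquist); bit depth sets the ideal
# quantization S/N ratio of a full-scale sine (~6.02*N + 1.76 dB).

def nyquist_bandwidth_hz(sample_rate_hz):
    # An ideal sampler captures everything up to half the sample rate.
    return sample_rate_hz / 2.0

def ideal_quantization_snr_db(bits):
    # Full-scale sine vs. uniform quantization noise.
    return 6.02 * bits + 1.76

print(nyquist_bandwidth_hz(44100))       # 22050.0 Hz
print(ideal_quantization_snr_db(16))     # ~98.1 dB
print(ideal_quantization_snr_db(24))     # ~146.2 dB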

AES 2009 Audio Myths Workshop

Reply #201
Consumer audio equipment ("audiophile" or not) is intended for the accurate reproduction of a prerecorded work. Well, hopefully accurate, anyway.

Professional recording equipment is intended for the euphonious creation of an audio work of art, which is not at all the same thing and may, in fact, entail different things in different circumstances.
This is a wonderful, romantic perspective on recording equipment. If only it were true...

 

AES 2009 Audio Myths Workshop

Reply #202
...Consumer audio equipment ("audiophile" or not) is intended for the accurate reproduction of a prerecorded work. Well, hopefully accurate, anyway.

Professional recording equipment is intended for the euphonious creation of an audio work of art, which is not at all the same thing and may, in fact, entail different things in different circumstances.

It's rather like the difference between a camera and slide projector and an artist's palette and set of paint brushes.


It is more like the opposite of what you said.

AES 2009 Audio Myths Workshop

Reply #203
...Consumer audio equipment ("audiophile" or not) is intended for the accurate reproduction of a prerecorded work. Well, hopefully accurate, anyway.

Professional recording equipment is intended for the euphonious creation of an audio work of art, which is not at all the same thing and may, in fact, entail different things in different circumstances.

It's rather like the difference between a camera and slide projector and an artist's palette and set of paint brushes.


It is more like the opposite of what you said.

Really?

So: the creation of an audio work, assembled from a number of different sources - some generated by a microphone (which does not pick up sound the way the human auditory system does) capturing an acoustic source in ways that are drastically affected by microphone type and placement (no human listener listens with his ear one inch away from a spot on a speaker or drumhead, and the sound of stringed or wind instruments heard from a distance is greatly affected by where the mic sits relative to them), some perhaps purely electronically generated, all of them then subjected to various forms of electronic processing before being mixed together and re-recorded according to the aesthetic judgment of the engineer and producer - in what way does this have anything to do with the accurate reproduction of an original audio source? It's a TOTALLY ARTIFICIAL PERFORMANCE - there is no original source.

If you're recording a classical orchestra, yes, you have an acoustic original, which is why I said audio production entails different things in different circumstances. But even in that case, what you get recorded is not identical to the original performance and never will be; it is still an artistic representation. In this case, perhaps closer to a photograph than an oil painting, but a representation nonetheless.

AES 2009 Audio Myths Workshop

Reply #204
Consumer audio equipment ("audiophile" or not) is intended for the accurate reproduction of a prerecorded work. Well, hopefully accurate, anyway.

Professional recording equipment is intended for the euphonious creation of an audio work of art, which is not at all the same thing and may, in fact, entail different things in different circumstances.
This is a wonderful, romantic perspective on recording equipment. If only it were true...


Uh, that's what it is SUPPOSED to be, Canar.

Not all horses win the race, of course.
-----
J. D. (jj) Johnston

AES 2009 Audio Myths Workshop

Reply #205
Not all analog mixers are op-amp based. Many of the better ones use discrete circuitry. In fact, one of the primary reasons for the popularity of much "vintage" gear is that it does NOT contain opamps. Any gear that operates in Class A does not, by definition, use opamps because there ain't no such thing as a "Class A opamp".


An op-amp is a design topology and does not in any way imply an IC.  There are discrete op-amps as well as IC ones.  Solid-state feedback power amplifiers use the op-amp topology.  Nobody makes a class A IC op-amp because of power dissipation considerations.  For the discrete case, many class A op-amps exist.  And by putting a DC constant-current load at an IC op-amp output, one can easily make it class A up to a current limit.

AES 2009 Audio Myths Workshop

Reply #206
That's not what I was referring to. In professional audio production certain types of gear are frequently run outside their linear operating area to produce certain effects. This is most common, but by no means limited to, systems that contain electromagnetic components such as transformers and audio tape, which are frequently operated outside their linear area to produce saturation effects that include compression and euphonious harmonic distortion.


The discussion here is supposedly about the "stacking" portion of Ethan's video, in which he's referring to mixers only.  Clipping the summing portion of a mixer should not be a normal state of affairs.

AES 2009 Audio Myths Workshop

Reply #207
I don't understand what you're after, which kind of devices? Especially in the pro audio field, an all-digital data path is nothing extraordinary.

If by "pro audio" you mean professional recording of commercial releases, I would beg to differ with you - it is absolutely extraordinary. Virtually all productions of this category have at least some analog devices in the signal chain. The only ones that don't are semi-professional dance music productions and soundtracks for commercials that use exclusively virtual instruments and do not feature vocal performances.

In fact, in any case where part or all of the original performance is acoustical in nature (vocals, non-virtual instruments, etc.) an all-digital data path is an impossibility.

AES 2009 Audio Myths Workshop

Reply #208
Thus, while the idealized console/DAW itself is "perfect", there is NO perfect idealized model of one of the primary, key components, the A/D converter.
There are several. The big question is: which one? http://en.wikipedia.org/wiki/Analog-to-dig...#ADC_structures

Ok, I read over your Wiki layman's reference but I don't see anything there that would lead one to believe that any existing ADC is, in fact, perfect. In fact I saw things that would lead me to believe the opposite. ADCs incorporate matrices of resistors and/or capacitors to produce the conversion; they are also driven by some sort of physical clock. Given that no resistor, capacitor, or electronic clock known to man is in fact perfect, it would follow that devices incorporating these components are also not perfect. If this is not the case, please enlighten me as to why.

As far as I know, Nothing is perfect - and it's the only thing that is!

AES 2009 Audio Myths Workshop

Reply #209
Thus, while the idealized console/DAW itself is "perfect", there is NO perfect idealized model of one of the primary, key components, the A/D converter.
There are several. The big question is: which one? http://en.wikipedia.org/wiki/Analog-to-dig...#ADC_structures


Every contemporary high-performance audio converter chip that I know of is Sigma-Delta. Several variations on the basic theme exist.


As far as I know, DSD converters are not Sigma-Delta. Somebody correct me if I'm wrong........

AES 2009 Audio Myths Workshop

Reply #210
I think I may now understand the context that you are discussing this in.  It doesn't happen to match my personal context, but I think I understand the mapping a bit better!

Hopefully your personal context doesn't stray too far away from TOS #8, otherwise this forum requires that you keep it squarely to yourself.

...and yes, there are things in the Audio Myths Workshop video that do not fulfill TOS #8.


Which is why it is so utterly, infuriatingly frustrating to attempt to carry on any rational discussion of this topic on this site, and why the discussion is per se biased in Ethan's favor - you allow him to present HIS "illegal" arguments, but the opposition is not allowed to reply in kind. Not fair. No disrespect intended, but I'm tearing my hair out here!


AES 2009 Audio Myths Workshop

Reply #212
first...how can I tell what the snare "should" look like? Because I can monitor the snare both out in the room, and through console monitoring before it hits the converters. I can evaluate whether what came from the mic is usable and desirable, and gauge its general fidelity and/or acceptability (two different things!).


You did not answer how you know what the waveform should look like, other than by what you have heard. You have either overdriven your ADC, the ADC sucks badly, or you are one of a kind. Level-matched, double-blind comparisons of halfway decent ADC/DAC combos vs. straight wires usually yield only one result: inability to differentiate.

We are both just two random, anonymous guys on the internet. If I knew you in person I would bet you $1000 that you're not that one of a kind. I know you are probably sure that everything is decent and set up correctly. But I have been there, too - and I swallowed the pill. The brain is a master at changing actual perceptions by context.


Ah, the key word - USUALLY. If what you're attempting to claim were in fact true, the word would not be "usually", it would be "ALWAYS".

And it isn't.

Because some listeners can in fact differentiate some equipment, and it only takes one listener who can consistently differentiate to prove the point that there is an audible difference.

It's not a question of statistics.

AES 2009 Audio Myths Workshop

Reply #213



Linear distortion is any change in the signal that is not level dependent.




This disagrees with the formal definition of linear distortion.

Linear distortion is distortion that does not add any new frequencies to the signal. FM distortion is nonlinear distortion but its effects on the signal can be level-independent.

Quote
Non-linear distortion IS level dependent.


Again false, for the reason I just gave.

Quote
Distortion is, by definition, non-linearity.


False again.

As Wikipedia says: "A distortion is the alteration of the original shape (or other characteristic) of an object, image, sound, waveform or other form of information or representation."  This is as opposed to simply making it larger. I guess it is ironic or maybe a truism that most distortion is an undesirable by-product of changing the size of signals.

Thus there is properly such a thing as linear distortion. A linear distortion changes a signal's shape, but does not add any new frequency components to it.

The difference between linear signal processing and nonlinear signal processing is whether or not new frequencies are added to the signal.

Of course, to understand the implications of adding new frequencies to signals it greatly helps to understand that signals themselves are composed of one or more frequencies. I think many regulars here get this, but some of our visitors don't.
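
To make the "new frequencies" criterion concrete, here is a minimal sketch (purely illustrative, not from the thread) that passes a 1 kHz sine through a linear operation (a simple gain change) and a nonlinear one (hard clipping) and lists which frequencies come out:

Code:
import numpy as np

fs = 48000                            # sample rate in Hz (assumed)
t = np.arange(fs) / fs                # one second of time
x = np.sin(2 * np.pi * 1000 * t)      # 1 kHz test tone

linear_out = 0.5 * x                      # gain change only: linear
nonlinear_out = np.clip(x, -0.5, 0.5)     # hard clipping: nonlinear

def tones_present(sig, threshold_db=-60.0):
    # Frequencies whose level is within threshold_db of the strongest component.
    spectrum = np.abs(np.fft.rfft(sig * np.hanning(len(sig))))
    spectrum /= spectrum.max()
    freqs = np.fft.rfftfreq(len(sig), 1.0 / fs)
    return freqs[20 * np.log10(spectrum + 1e-12) > threshold_db]

print(tones_present(linear_out))      # energy only around 1 kHz
print(tones_present(nonlinear_out))   # odd harmonics appear: 3 kHz, 5 kHz, ...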


AES 2009 Audio Myths Workshop

Reply #214

Consumer audio equipment ("audiophile" or not) is intended for the accurate reproduction of a prerecorded work. Well, hopefully accurate, anyway.

Professional recording equipment is intended for the euphonious creation of an audio work of art, which is not at all the same thing and may, in fact, entail different things in different circumstances.
This is a wonderful, romantic perspective on recording equipment. If only it were true...


Uh, that's what it is SUPPOSED to be, Canar.

Not all horses win the race, of course.


I agree with the spirit of JE's comment, but things can get a little complex in the actual execution.

For example, a somewhat knowledgeable but still naive person might think that an accurate loudspeaker is one that is not only free of audible nonlinear distortion, but also has flat response, linear phase, and omnidirectional dispersion.

In fact, we know enough about speakers that approach this purported *ideal* - free of audible nonlinear distortion, with flat response, linear phase, and omnidirectional dispersion - to know that such a speaker is actually pretty nasty to listen to in virtually any real-world listening room.

We furthermore know that freedom from audible nonlinear distortion and linear phase are very hard to achieve, but are generally really good ideas. We also know that, within bounds, linear phase is of far lesser importance.

The observed nasty sound from the purported ideal speaker is due to the way that the flat response and omnidirectional dispersion interact with just about any room but an anechoic chamber.

In fact the easiest speakers to get along with have carefully shaped dispersion. Furthermore, as you move out into the reverberant field, flat response is also pretty nasty sounding. It is especially bad in very large rooms.

Earl Geddes says that we are going to be forced to listen to speakers with audible nonlinear distortion for quite some time, so that it is of the essence to learn how to manage it.

The bottom line is that simple definitions of accuracy may themselves be inaccurate.





AES 2009 Audio Myths Workshop

Reply #216
Thus, while the idealized console/DAW itself is "perfect", there is NO perfect idealized model of one of the primary, key components, the A/D converter.
There are several. The big question is: which one? http://en.wikipedia.org/wiki/Analog-to-dig...#ADC_structures
Ok, I read over your Wiki layman's reference but I don't see anything there that would lead one to believe that any existing ADC is, in fact, perfect.
A thing does not have to be perfect to form a perfect idealized model of it. Science is noisy and full of error. However, generally the error behaves according to some model. It is not difficult, for example, to measure the spectrum of the noise floor of a given ADC.
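
As a concrete illustration of that last sentence, the idle-channel noise spectrum of a converter can be estimated with a standard Welch PSD of a capture made with the input terminated. A minimal sketch; the file name is hypothetical and SciPy is assumed to be available:

Code:
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch

# "adc_terminated_input.wav": hypothetical capture with the ADC input terminated.
fs, samples = wavfile.read("adc_terminated_input.wav")
if samples.ndim > 1:
    samples = samples[:, 0]              # use one channel
if samples.dtype == np.int16:
    samples = samples / 32768.0          # normalize so digital full scale = 1.0

# Welch estimate of the noise power spectral density.
freqs, psd = welch(samples, fs=fs, nperseg=65536)
noise_db = 10.0 * np.log10(psd + 1e-30)  # dB (re full scale) per Hz

print("median noise density: %.1f dB/Hz re full scale" % np.median(noise_db))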

AES 2009 Audio Myths Workshop

Reply #217
Not all analog mixers are op-amp based.


Agreed. However some very highly respected mixers (e.g. classic Neve) have made very heavy use of op amps.

IME, aversion to op amps traces back to the little dust-up we had in the 70s about an obsolete concept called "slew rate distortion". The goal posts have moved since then, and we now throw and kick very different balls.

Quote
Many of the better ones use discrete circuitry.


Again, that's probably far more style than substance. The lowest-distortion op amps around are probably ICs. In fact purveyors of discrete op amp replacements are not always forthcoming about how their products perform vis-a-vis the best chips.

Quote
In fact, one of the primary reasons for the popularity of much "vintage" gear is that it does NOT contain opamps.


Again, there's no logical reason for the obsession with class A amplifiers with regard to signal handling. 

We still use discrete op amps for high power levels. Most if not all modern linear (as opposed to switchmode) power amps are basically just really big op amps.

Quote
Any gear that operates in Class A does not, by definition, use opamps because there ain't no such thing as a "Class A opamp".


Many op amps are class AB, which means that they are class A when driving high-impedance loads. Also, connecting a resistor from the output of an op amp to one of the power supply rails will force the op amp to run class A over a wider range of loads and signals.
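
As a rough back-of-the-envelope sketch of that resistor trick - all component values below are assumed for illustration, none come from the post:

Code:
# Resistor from the op amp output to the negative rail: the sourcing half of the
# output stage carries the resistor current at all times, and only shuts off
# (class B behaviour) if the load pushes more current back into the output than
# the resistor pulls out.
v_rail = 15.0       # V, magnitude of the negative supply rail (assumed)
r_pull = 1000.0     # ohm, resistor from output to the -15 V rail (assumed)
v_out_peak = 5.0    # V, assumed peak output swing
r_load = 10000.0    # ohm, assumed load impedance to ground

i_pull_min = (v_rail - v_out_peak) / r_pull   # pull current at the most negative swing (~10 mA)
i_load_peak = v_out_peak / r_load             # peak current the load pushes back (~0.5 mA)

print("stays class A" if i_load_peak < i_pull_min else "leaves class A")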

I attribute the fascination with class A amplifiers to a linguistic oddity - class A also means "of the highest caliber" in American English.

As an aside, for a few months this fall I was the unintentional owner of an essentially new (NOS) Pass SA4e, which is allegedly a class A power amp. I've listened to it and I've had it on my test bench. I compared it to a Behringer A500, which is in many ways a pretty close comparison. I kept the A500 and sold the SA4e.

AES 2009 Audio Myths Workshop

Reply #218
This disagrees with the formal definition of linear distortion.
We've done this one before, haven't we?

Creating new frequencies is an effect of non-linear processing - it's not the definition.

The definition is really simple: a linear system is one that can be described by a linear equation.

A non-linear system is one that isn't linear.

http://en.wikipedia.org/wiki/Nonlinear_system

In audio, we're conventionally generous in the definition - the addition of uncorrelated noise doesn't count as non-linear.

We're also conventionally rather non-rigorous - there are many things you can do which are tricky or virtually impossible to write as an equation - we don't demand that someone actually writes the equation - we just accept that, whatever it is, it won't be a linear equation, so therefore the system won't be linear.
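
For the operationally minded, the definition can also be checked numerically via superposition - a system is linear if scaling and summing inputs scales and sums the outputs. A minimal sketch with made-up example systems:

Code:
import numpy as np

def looks_linear(system, n=1024, trials=5, tol=1e-9, seed=0):
    # Numerically test superposition: system(a*x + b*y) == a*system(x) + b*system(y).
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x, y = rng.standard_normal(n), rng.standard_normal(n)
        a, b = rng.standard_normal(2)
        if not np.allclose(system(a * x + b * y),
                           a * system(x) + b * system(y), atol=tol):
            return False
    return True

gain_and_delay = lambda s: 0.7 * np.concatenate(([0.0], s[:-1]))  # linear
soft_clip = np.tanh                                               # nonlinear

print(looks_linear(gain_and_delay))  # True
print(looks_linear(soft_clip))       # False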

Cheers,
David.

AES 2009 Audio Myths Workshop

Reply #219
I'm not sure where reverb would fall into this. It might show up on a frequency response measurement. We could argue about frequency response vs impulse response - but that's a pointless argument.

I don't consider reverb effects in my four parameters because, at heart, reverb is an "external effect" that happens acoustically in enclosed spaces. Yes, it can be emulated by hardware and software devices, so you can still assess frequency response and distortion.
I was thinking about the effect you sometimes get with valve amplifiers, where the metal in the valves sings along with the music. Replace the speakers with an 8 ohm resistor, crank up the volume, put your ears (not too) close to the valves, and you can hear it. Also, if you use the amplifier normally, and tap the valves, you can hear the tapping through the speaker. These two effects together suggest to me that, under certain circumstances, some valve amps will act as little reverb chambers. I've no idea if it's audible. I suspect it could be caught by a frequency response plot, but might just fall within what most people would judge to be an "acceptable deviation" - while in practice it might not be "acceptable" because the "deviation" occurs so long after the original sound.

I wonder too if the effect mightn't be slightly more widespread. Certainly the resonant effects of various speaker materials, fed a signal via digital correction for frequency and phase response, still leave more temporal smearing at "resonant" frequencies than "dead" frequencies. You can see it on the waterfall plot. I'm not sure which category this falls into.

Quote
Quote
put lossyWAV into a stand alone box, including a slight delay which is itself very slightly varying in a random way (i.e. an inaudible amount of flutter). What measurements will characterise that black box properly?

I now realize I should have added a disclaimer in my video about lossy compression. My Audiophoolery and Audiophile beliefs articles, on which my video is based, mention excluding lossy compression:
Thank you for the links. I shall try to take the time to read them before responding again, but...

I hope you don't think I'm being too harsh, but this renders the whole exercise a bit meaningless for me. It's turning from "this characterises any audio component" to "this characterises any audio component, except the ones it doesn't". There's a problem: who is to decide which ones it doesn't characterise?

Or to put it another way, I spot a circular argument looming. But I need to read what you've said properly to be sure.

Cheers,
David.

AES 2009 Audio Myths Workshop

Reply #220



Linear distortion is any change in the signal that is not level dependent.




This disagrees with the formal definition of linear distortion.

Linear distortion is distortion that does not add any new frequencies to the signal. FM distortion is nonlinear distortion but its effects on the signal can be level-independent.

It seems to me that the classification of FM distortion as nonlinear is rather arbitrary.

Your other exception, half-wave rectification, is a special case where the non-linearity occurs only at zero signal level, making it a linear distortion.

AES 2009 Audio Myths Workshop

Reply #221
This is why I list 2 different kinds of distortion, linear and nonlinear.
Can you post your list here textually for clarity please?

I really wonder if we can define a set of measurements which would catch every possible fault - both now, and in the future.
The answer is generally yes.
OK - noting that you said "generally", so there must be exceptions, what is this list of measurements?


Here's a practical example: put lossyWAV into a stand alone box, including a slight delay which is itself very slightly varying in a random way (i.e. an inaudible amount of flutter). What measurements will characterise that black box properly?


The random delay can be measured by the usual means for measuring FM or phase distortion. I am unfamiliar with lossyWAV. However, I reject this line of argumentation because it is an intellectual game that sheds little light on the problems we need to solve in the real world.

Quote
If we can leave the "I must be right / you must be wrong" level of argument at the door, it would be much appreciated.


Well, Doctor, cure yourself. You played that game a number of times in just this post. You made unfounded assertions.

Quote
This genuinely interests me, and it's more of a challenge than people like to admit - especially when they're arguing with audiofools who want to turn it all into black magic (which it isn't). But let's have a grown-up discussion please.


Well then leave the tricks, riddles, and unfounded assertions at the door.
You are again adopting an unhelpful and unwarranted style of discussion.

When I wrote my post, I hadn't seen that Ethan had excluded lossy codecs. Given the nature of this discussion forum, and the topic we're now discussing, it was natural to assume that these "audio parameters" should go some way to describing lossy codecs (and anyway, they do!).

My question about lossyWAV isn't "an intellectual game" - it is an audio component that needs to be quantified. In its current design (i.e. without noise shaping; without spectral processing) it is likely that any objective measurements are more "useful" than those you get from, say, mp3. Not entirely useful, but somewhat useful.

In any case, in my mind, what we're trying to get to (you may disagree) is some kind of set of measurements which allow us to say an audio component is "transparent". Do X Y Z measurements, check the results lie within such-and-such a range, and if so, the audio component is "transparent" to human ears under normal (e.g. non-deafening!) use.

When psychoacoustics are involved, an audio component may fail this test, and yet still be transparent. It would be useful (not essential, but useful) if, in the absence of psychoacoustics, audio components that fail the test aren't transparent (in the manner revealed by the tests). It is essential that any audio component which passes the tests is transparent.

Therefore, in answer to the question "is this audio component transparent", false negatives are OK*; false positives are not.

Further, we can only answer the question at all inasmuch as we can make all these measurements.

So it's essential to discover exactly what these measurements are, in what circumstances we can make them, and in what circumstances we cannot.

The only other confounding factor I can think of (there may be others) is that there may be so many false negatives (even with traditional equipment) that such tests aren't that useful without human interpretation - the useful pass/fail answers only coming from more complex analysis.

With loudspeakers (which I fear we must exclude completely), we'll probably just find that none are transparent, and it's a tricky judgement call to know which one is "best" - especially between two models which achieve a reasonable frequency response. (I won't even dare to mention the polar response - transducers really are a special case!)

Cheers,
David.

P.S. * =  inevitable with psychoacoustics, unless you use human ear models in the tests - and then people can question the model.

AES 2009 Audio Myths Workshop

Reply #222
This disagrees with the formal definition of linear distortion.
We've done this one before, haven't we?

Creating new frequencies is an effect of non-linear processing - it's not the definition.



The creation or non-creation of new frequencies is the biggest part of the definition of linear and/or nonlinear in a number of audio texts, some of which I've already given proper footnotes for.

Quote
The definition is really simple: a linear system is one that can be described by a linear equation.

A non-linear system is one that isn't linear.


The above are known as circular definitions.

A circular definition is a definition that uses the word being defined as part of the definition.


AES 2009 Audio Myths Workshop

Reply #223
A definition doesn't become circular just because two sentences contain the same term.

The definition was indeed very precise. Deriving a to-be-defined term from an already well-defined one, as linear equations are, is not circular.

AES 2009 Audio Myths Workshop

Reply #224
When I wrote my post, I hadn't seen that Ethan had excluded lossy codecs. Given the nature of this discussion forum, and the topic we're now discussing, it was natural to assume that these "audio parameters" should go some way to describing lossy codecs (and anyway, they do!).

My question about lossyWAV isn't "an intellectual game" - it is an audio component that needs to be quantified. In its current design (i.e. without noise shaping; without spectral processing) it is likely that any objective measurements are more "useful" than those you get from, say, mp3. Not entirely useful, but somewhat useful.

In any case, in my mind, what we're trying to get to (you may disagree) is some kind of set of measurements which allow us to say an audio component is "transparent". Do X Y Z measurements, check the results lie within such-and-such a range, and if so, the audio component is "transparent" to human ears under normal (e.g. non-deafening!) use.


That's all fine and good. However, the SOTA of audio measurements is that we have a pretty good understanding of how to characterize the sound quality of a wide range of more traditional audio components, while another range of newer kinds of audio components are far more difficult to deal with. IOW, anybody who wants to conflate lossy encoders and power amplifiers is ignoring this well-known fact.

Anybody who wants to break into every discussion of traditional audio components and burden it with the new (actually now about 20 years old) problems related to lossy perceptual coding components does so at their own risk. If they make their problems into problems for everybody, then guess what the people who at least have part of their lives in some kind of order are going to do?

Quote
When psychoacoustics are involved, an audio component may fail this test, and yet still be transparent.


The most common situation is the reverse. A component that does perceptually-justified lossy coding will often measure well in accordance with traditional measurements. It may still sound pretty bad.

One of the big problems is that many people don't know what the actual performance requirements are for traditional components. And they don't know how actual commercial products stack up by those measures. HA has been swimming in that in depth for several days.

One rule of thumb is that perceptual components have to first pass the usual bank of traditional measurements.

We have had similar but lesser problems with loudspeakers. For years people said, "How can we stand to listen to loudspeakers that measure so badly, when we have amplifiers we hate that measure so much better?" Then I invented ABX and we found out how much prejudice and misinformation were affecting the general perceptions of audio amplifier performance.

Of course, when people don't keep their audio knowledge up to date and say all sorts of unusual things about simple stuff like linear and non-linear, we aren't going to get anywhere. That was all settled over 30 years ago. I documented it here in the past day. Who read my references?