HydrogenAudio

Hydrogenaudio Forum => General Audio => Topic started by: hellokeith on 2010-02-07 04:59:07

Title: AES 2009 Audio Myths Workshop
Post by: hellokeith on 2010-02-07 04:59:07
I saw this video linked in the FL Studio (http://flstudio.image-line.com/) forum (http://forum.image-line.com/viewforum.php?f=100).  I say this should be mandatory watching for anyone frequenting audio/video forums.  I learned more in an hour from this video than from five years of reading audiophile forum arguments.

Audio Myths Workshop (http://www.youtube.com/watch?v=BYTlN6wjcvQ) ~ 60 minutes
At the beginning of the video, JJ talks a little about human hearing, and Poppy gives a very humorous demonstration on the power of suggestion.  The remainder of the video is Ethan Winer giving demonstrations of what can actually be heard with respect to things like dither, eq, resampling, consumer vs prosumer sound cards, microphones, amplifiers, etc.

There is also an excerpt about dither (http://www.youtube.com/watch?v=Sl0U_L3tb_M) ~ 3 minutes, but I'm apparently not understanding what JJ is saying about why we should dither.
Title: AES 2009 Audio Myths Workshop
Post by: aclo on 2010-02-07 06:20:03
Audio Myths Workshop (http://www.youtube.com/watch?v=BYTlN6wjcvQ) ~ 60 minutes


That was brilliant! My favourite touch is the little details: e.g., at around 17 min, a $20,000 power cable is shown with its price displayed in blinking letters. Hilarious!
Title: AES 2009 Audio Myths Workshop
Post by: andy o on 2010-02-07 10:28:54
thanks for this
Title: AES 2009 Audio Myths Workshop
Post by: C.R.Helmrich on 2010-02-07 12:28:14
Cool, thanks for this stuff! Unfortunately, I couldn't join the last AES.

..., but I'm apparently not understanding what JJ is saying about why we should dither.

Does he even say why in these 3 minutes? I just hear him say "You really should always dither. Probably just TPDF." I'm partially with JJ. If your recording already contains enough self-noise (from microphones, A/D conversion, or tape hiss), you effectively already have dither. If it doesn't, then yes, you should, because it greatly reduces possible harmonic distortion. See the figures in this article (http://www.hifi-writer.com/he/dvdaudio/dither.htm).
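To make the harmonic-distortion point concrete, here is a minimal sketch (my own construction, not from the video or the linked article): quantize a low-level 1 kHz sine to 8 bits with and without TPDF dither, and compare the power landing on the tone's harmonics. The amplitudes and bit depth are arbitrary illustration values.

```python
import numpy as np

# Sketch: undithered requantization of a ~4 LSB sine concentrates the
# quantization error into harmonics of the tone; TPDF dither trades
# those harmonics for a benign broadband noise floor.
rng = np.random.default_rng(0)
fs = 48_000
t = np.arange(fs) / fs                       # one second, 1 Hz FFT bins
x = 0.03 * np.sin(2 * np.pi * 1000 * t)      # quiet 1 kHz tone, ~4 LSB

step = 2.0 ** -7                             # 8-bit step on [-1, 1)

def quantize(sig, tpdf=False):
    d = 0.0
    if tpdf:
        # TPDF dither: sum of two uniform variables, +/- 1 LSB peak
        d = (rng.uniform(-0.5, 0.5, sig.size) +
             rng.uniform(-0.5, 0.5, sig.size)) * step
    return np.round((sig + d) / step) * step

def harmonic_power(sig):
    # total power at harmonics 2..10 of the 1 kHz tone
    spec = np.abs(np.fft.rfft(sig * np.hanning(sig.size))) ** 2
    return sum(spec[1000 * k] for k in range(2, 11))

plain = harmonic_power(quantize(x))
dithered = harmonic_power(quantize(x, tpdf=True))
# the dithered version's "harmonic" bins contain only noise,
# orders of magnitude below the undithered harmonic spurs
print(plain > 10 * dithered)
```

The comparison is deliberately coarse; the point is only that the dithered error is signal-independent noise, while the undithered error is correlated with the tone.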

Chris
Title: AES 2009 Audio Myths Workshop
Post by: randal1013 on 2010-02-08 02:24:55
thanks for the link. i'll watch it when i have some time.
Title: AES 2009 Audio Myths Workshop
Post by: ExUser on 2010-02-08 03:27:57
It's neat putting voices and faces to some of the names I've seen around here. Wonderful presentation!
Title: AES 2009 Audio Myths Workshop
Post by: andy o on 2010-02-08 05:49:28
The Zeppelin presentation is great, even my non-geek friends got a kick out of it.
Title: AES 2009 Audio Myths Workshop
Post by: rpp3po on 2010-02-08 16:06:38
It's neat putting voices and faces to some of the names I've seen around here. Wonderful presentation!


Yes, I share both the experience and the verdict. And when talking about faces, don't forget heads. JJ's looks enormous, almost twice the volume of Poppy's, it seems. But when you look at his vita, he surely knew how to capitalize on that...

While the presentation was very insightful (at least from a studio perspective), I don't share the blunt rejection of euphonically distorting gear just because anything can be simulated once understood. It can, no question, but it requires completely different workflows. The purpose euphonically distorting gear can serve is what could be called 'reified complexity reduction'.

There are creative types who produce fabulous sound through means other than complete rational and reflective penetration of all the theoretical concepts involved in their work. Put them in front of Izotope Ozone's EQ alone, with its 9.578097130411812e+52 possible configurations, and it will make them sweat. Give them a couple of boxes they are familiar with, with knobs they can touch, and they will produce wonderful stuff instead.

I don't know, the latter might be a moribund species. But I have younger friends studying electronic composition and production who sometimes tell me that they can do so much nowadays that they don't even know what to do anymore, let alone where to start.
Title: AES 2009 Audio Myths Workshop
Post by: Woodinville on 2010-02-08 17:39:47
While the presentation was very insightful (at least from a studio perspective), I don't share the blunt rejection of euphonically distorting gear just because anything can be simulated once understood. It can, no question, but it requires completely different workflows. The purpose euphonically distorting gear can serve is what could be called 'reified complexity reduction'.


The panel was (and is) not unanimous on the idea of rejecting euphony.  Especially at the source end, my position (which was said somewhere) is that you use what you want to get the sound you want. To the extent euphony is part of the desired target, it's simply part of the art, and arguing with personal taste is pointless.

At the playback end, various people (Ethan, Floyd Toole in a different way) argue for accuracy, Floyd with rather more sophisticated criteria, arguing the point that we ought to hear what the content provider intended.

I think we ought to be able to do that, but arguing with the listener's preference seems, well, odd.  Provide them with what you want, and if they choose rationally to add some kind of shaping or distortion for their own listening, that's all there is to it.  My problem with this kind of enthusiast behavior is when a distortion, shaping, or other modification is added and then the difference is claimed to be "more accurate" or "containing more information", or any kind of claim of superiority that goes beyond the individual.

And, one of my points was that SNR itself is not sufficient to know if something is going to have a distorted sound.  Short-term spectrum, at least, is required.
Title: AES 2009 Audio Myths Workshop
Post by: rpp3po on 2010-02-08 20:10:52
You guys should publish a premium audio magazine! High-quality journalism, printed on fine paper and sold at premium prices (up to $50 per issue), is selling fantastically nowadays. People seem to be willing to pay for quality again, despite (or because of) the decline of the mass publishing industry. Sophisticated reviews, DIY speaker and room-correction projects, measurement tutorials - 100% focus on the true weak links. I think you would find your audience. Of course, an impediment could be that you may have better things to do.
Title: AES 2009 Audio Myths Workshop
Post by: Ethan Winer on 2010-02-08 22:00:46
Especially at the source end, my position (which was said somewhere) is that you use what you want to get the sound you want. To the extent euphony is part of the desired target, it's simply part of the art, and arguing with personal taste is pointless.

That's my position too. My point with "high fidelity" from a recording perspective is you do whatever is needed to get the sound you want. Maybe you'll stick a microphone under the radiator, or run a singer's voice through an old tube guitar amp. Doesn't matter. But once you have the sound you want, then accuracy is what matters most, to preserve that sound intact. I make this point all the time in audio forums. I'll be talking about fidelity only, and some clown will say he likes the distorted sound he gets from pushing his preamp. Fine, but that's not what I was talking about!

--Ethan
Title: AES 2009 Audio Myths Workshop
Post by: ExUser on 2010-02-08 22:31:39
I took a look around at the discussions of this video on some other web forums. Then I came back here and took a look at this thread. Not only was it so non-controversial that it wasn't mentioned for weeks, but there's no rabid "debate" about the content therein. Just lovely.
Title: AES 2009 Audio Myths Workshop
Post by: Woodinville on 2010-02-08 22:35:08
I took a look around at the discussions of this video on some other web forums. Then I came back here and took a look at this thread. Not only was it so non-controversial that it wasn't mentioned for weeks, but there's no rabid "debate" about the content therein. Just lovely.


Well, not being welcome at "the asylum" any more, I have no idea what kind of cockamamie codswallop they conjured to commit complete chaos, but I'm not surprised here, it's kinda "preaching to the seminary" except for the lack of the belief requirement...
Title: AES 2009 Audio Myths Workshop
Post by: Woodinville on 2010-02-08 22:37:36
Of course, an impediment could be that you may have better things to do.


My considered opinion is that I'm way, way too honest to get rich in the audio community.

I would suggest that Dick Pierce, Barry Blesser, Floyd Toole, Louis Thebault, and Ethan are also in that situation... just to name a few.
Title: AES 2009 Audio Myths Workshop
Post by: ExUser on 2010-02-09 19:38:47
I've split analog scott's responses confusing the decisions made at recording-time vs. the decisions made at playback-time here (http://www.hydrogenaudio.org/forums/index.php?showtopic=78583).
Title: AES 2009 Audio Myths Workshop
Post by: hellokeith on 2010-02-10 07:18:40
JJ, can you explain what you meant by we should always dither?
Title: AES 2009 Audio Myths Workshop
Post by: Ethan Winer on 2010-02-10 17:48:01
JJ, can you explain what you meant by we should always dither?

I'm sure JJ simply means that dither can never hurt when bit-reducing data, and can only help, so it should always be done since it's free.

--Ethan
Title: AES 2009 Audio Myths Workshop
Post by: pdq on 2010-02-10 18:08:48
I think I would further qualify the statement of dither never hurting.

The source material contains some amount of noise already. Sometimes the noise level is so low that you must add dither in order to avoid distortion when reducing the bit depth. Other times the noise level is high enough that it self-dithers, and the added dither increases the noise level negligibly.

There is a middle ground between these two extremes where there is enough noise that dither is not required to avoid distortion, but the noise is low enough that dither increases the total noise measurably. It is only in this very narrow range of inherent noise that dither should not be used. The likelihood, however, of having just this amount of noise consistently across an entire clip is very small.
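The trade-off above is easy to put in numbers (my own illustration, not pdq's figures): TPDF dither of +/- 1 LSB has variance step²/6, and noise powers add, so the rise in the total floor depends on how much self-noise the source already carries.

```python
import numpy as np

# Sketch: how much does adding +/- 1 LSB TPDF dither raise the noise
# floor, as a function of the source's existing self-noise?
step = 2.0 ** -15                     # 1 LSB at 16 bits, full scale [-1, 1)
tpdf_rms = step / np.sqrt(6)          # RMS of +/- 1 LSB TPDF dither

for self_noise in (0.1 * step, 1.0 * step, 10.0 * step):
    total = np.hypot(self_noise, tpdf_rms)        # uncorrelated: powers add
    rise_db = 20 * np.log10(total / self_noise)
    print(f"self-noise {self_noise / step:4.1f} LSB -> floor rises {rise_db:.2f} dB")
```

With self-noise well below 1 LSB the dither dominates (and is needed anyway); at 1 LSB the penalty is well under 1 dB; at 10 LSB it is measurable only in principle, which matches the "very narrow range" point.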
Title: AES 2009 Audio Myths Workshop
Post by: Woodinville on 2010-02-10 19:26:16
JJ, can you explain what you meant by we should always dither?

I'm sure JJ simply means that dither can never hurt when bit-reducing data, and can only help, so it should always be done since it's free.

--Ethan



If you're not requantizing, "dither" does not exist.

If you are, it should happen.
Title: AES 2009 Audio Myths Workshop
Post by: Woodinville on 2010-02-10 19:28:06
There is a middle ground between these two extremes where there is enough noise that dither is not required to avoid distortion, but the noise is low enough that dither increases the total noise measurably. It is only in this very narrow range of inherent noise that dither should not be used. The likelihood, however, of having just this amount of noise consistently across an entire clip is very small.


Then there is still a philosophical question: is that noise part of the original, or not?

I argue that it is, and therefore should be dithered in order to capture the original source noise as faithfully as possible.
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-02-11 16:05:57
I think I would further qualify the statement of dither never hurting.

The source material contains some amount of noise already. Sometimes the noise level is so low that you must add dither in order to avoid distortion when reducing the bit depth. Other times the noise level is high enough that it self-dithers, and the added dither increases the noise level negligibly.

There is a middle ground between these two extremes where there is enough noise that dither is not required to avoid distortion, but the noise is low enough that dither increases the total noise measurably. It is only in this very narrow range of inherent noise that dither should not be used. The likelihood, however, of having just this amount of noise consistently across an entire clip is very small.


+1

The usual case with 16-bit and longer data words is that the noise in the program material is well above the LSB of the medium: a minimum of 10 dB, and not uncommonly 40 dB or more.

The history of my involvement with self-dither is a discussion with a well-known audio editor who is also a recordist.  He published an article describing the characteristic sound quality of an analog tape that he had recorded, as dithered using the various dither settings provided by a certain Meridian product while he transcribed the tape to 16-bit digital.  He would only distribute a 16-bit sample, and withheld the original 20-bit transcription.

If memory serves, the dynamic range of the recording was about 65 dB, which is a pretty typical number for a live acoustic performance.  Spectral analysis showed what appeared to be the noise floor of the room ("room tone"), overlaid with some mic/mic-preamp noise, overlaid with noise that appeared to be related to analog tape recording/playback.  It was difficult to see anything that would be characteristic of the various dithering techniques used, since they were in the range of 1 LSB, which is to say about 30 dB below the rest.

This appeared to me to be strong evidence that the dramatic audible differences he claimed to hear probably lacked any physical cause that would be likely to be audible, even under the most ideal listening conditions.

At that point I realized that a quantizer has no means for distinguishing the various noise sources summed together at its input.  In fact, it does not distinguish between signal and noise at all, but simply processes the bandwidth-limited F(t) regardless of where it comes from.
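That indifference of the quantizer is easy to demonstrate in a constructed example (my numbers, not from the article being discussed): mix ~1 LSB RMS of Gaussian "self-noise" into a quiet tone before plain undithered requantization, and the harmonic spurs largely vanish, exactly as if dither had been added.

```python
import numpy as np

# Sketch: the quantizer treats the source's own noise just like dither.
# A ~4 LSB, 1 kHz tone is requantized to 8 bits, once clean and once
# with ~1 LSB RMS of Gaussian noise already mixed in.
rng = np.random.default_rng(1)
fs = 48_000
t = np.arange(fs) / fs
step = 2.0 ** -7                                  # 8-bit step on [-1, 1)
tone = 0.03 * np.sin(2 * np.pi * 1000 * t)        # ~4 LSB sine
noisy = tone + rng.normal(0.0, step, fs)          # tone + 1 LSB RMS noise

def harmonic_power(sig):
    # power at harmonics 2..10 of the 1 kHz tone (1 Hz bins)
    spec = np.abs(np.fft.rfft(sig * np.hanning(sig.size))) ** 2
    return sum(spec[1000 * k] for k in range(2, 11))

q = lambda s: np.round(s / step) * step           # plain requantizer, NO dither
clean_dist = harmonic_power(q(tone))
self_dithered_dist = harmonic_power(q(noisy))
# the inherent noise randomizes the quantization error, so the harmonic
# spurs largely disappear even though nothing was explicitly "dithered"
print(clean_dist > 5 * self_dithered_dist)
```

The quantizer only ever sees the sum at its input, so "self-dither" and added dither are indistinguishable to it.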

Needless to say several people whose products had been highly reviewed by said editor were more than happy to vigorously beat me about the head and shoulders for my insight. ;-)
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-02-11 16:14:23
Especially at the source end, my position (which was said somewhere) is that you use what you want to get the sound you want. To the extent euphony is part of the desired target, it's simply part of the art, and arguing with personal taste is pointless.

That's my position too. My point with "high fidelity" from a recording perspective is you do whatever is needed to get the sound you want. Maybe you'll stick a microphone under the radiator, or run a singer's voice through an old tube guitar amp. Doesn't matter. But once you have the sound you want, then accuracy is what matters most, to preserve that sound intact. I make this point all the time in audio forums. I'll be talking about fidelity only, and some clown will say he likes the distorted sound he gets from pushing his preamp. Fine, but that's not what I was talking about!


BTW Ethan, is there any way to get a mpeg of the presentation?
Title: AES 2009 Audio Myths Workshop
Post by: Ethan Winer on 2010-02-11 21:54:14
The history of my involvement with self-dither is a discussion with a well-known audio editor who is also a recordist ... This appeared to me to be strong evidence that the dramatic audible differences he claimed to hear probably lacked any physical cause that would be likely to be audible, even under the most ideal listening conditions.

LOL, indeed, and I know exactly who you mean.

Quote
BTW Ethan, is there any way to get a mpeg of the presentation?

I can mail you a DVD version of what's on YouTube if you email me your address. I can make either a DVD meant to play on a consumer DVD player, or a DVD-R with a 2 GB WMV file. Or do you mean the entire presentation? I do have that, but it's a 36 GB MPEG file. Someone else asked me for that by PM, and I told him to send me a USB drive and I'll copy the file and ship it back. It's way too much work for me to do that any other way.

--Ethan
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-19 13:05:46


Hello!

Non-lurker and first time poster here.

Some of you who go to the Womb forums may recognize my handle.  dwoz here.

SO...

there's been some discussion about us here, I see!  wonderful.  I would like to clear up a few points, or at least offer my view of them, if you all don't mind.

First off, the thread over at the Womb where Ethan Winer's now-famous conjecture about the measurability and audibility of audio and the transparency of audio equipment is being debated is titled "Pathetic".  Now, that might seem like a case of the Womb taking a "piss" on Ethan, but for one interesting thing: that thread title was created and CHOSEN BY ETHAN HIMSELF.  When I first saw the thread, I was sorely tempted to change it to something less incendiary, but I held my hand.

Secondly, it is most certainly Ethan who takes the hard tone.  The man has a distinctly passive-aggressive, denigrating tone in his replies, dripping with ad hominem, argument from authority, false association, and a whole slew of other deficiencies that would certainly never pass in polite discourse in person.  And because this is the internet, he is of course answered in kind.  Ethan minces no words in describing me and my opinions and thoughts across the internet.  I afford him the same kindness he shows me.

And yes, at the Womb, we strive to be entertaining, engaging, and can at rare times even accomplish that.  The signal to noise is really quite high.  But that's just my opinion, now on to the salient points.

Ethan has made a number of related conjectures about audio: its measurability, how we may define transparency in an audio component, and a few "myths" that he claims to have debunked.

Now, I will say up front that there is nothing wrong with trying to do this...it's a laudable goal.  There is so much nonsense, crap, and outright fraud out there in the audio universe, that it boggles the mind!  Who could possibly be against a white knight riding in and trying to clean some of it up?   

Not me!  Good on you, Ethan, for trying to do that.

However... BAD FORM, Ethan, for being intellectually and technically sloppy in your work.  What could be worse than a mythbuster who simply substitutes his own myth for the one he's debunking?

Ethan makes a classic and easy mistake in his assumptions, which is easily noticed if you have the intellectual honesty to simply look.  He makes a conjecture about the measurements required to fully and completely describe the fidelity of audio.  According to him, there are four.  Now, when you examine this supposition in the context of a home listening system, his conjecture, while incomplete and inaccurate, is nonetheless somewhat useful.

However... when you then try to take that same conjecture and broadly apply it to the entire audio production process, it stops being a benignly-inaccurate useful tool and becomes an outright falsehood that can have direct consequences for the resultant audio.

When this was pointed out to him, to his credit, he did re-qualify his statements somewhat.  But not to the community-at-large where he is promulgating his wares.

Let me reiterate:  he has been shown to be factually wrong on every single claim he's made.  He has thus shifted his argument away from absolute statements, to relativistic statements, where he treads in the very subjective waters that you here at hydrogenaudio seem to revile.  Instead of "stacking" of audio artifacts being non-existent, it is now "not audible in today's equipment".  Instead of 32 bit float computer math being "absolutely accurate" it is "accurate and precise to the extent of the implementation and optimization of the algorithm".  Instead of "phase doesn't matter", it is now "phase issues are not audible in today's equipment".  "Jitter is a non-issue" has become "jitter is a non issue in a properly implemented system".

Slippery slope, folks.  He's conceded on the points, now we're just haggling over price.

Now, to his credit...again, most of this stuff doesn't matter to the home listener, because MOST of the issues he's been wrong about, only rear their ugly heads when you set about combining and summing complex signals (such as when building a music mix), and are NOT problems that manifest in the reproduction-side.  For example, phase anomalies in speakers are, in fact, marginally if at all audible.  But a non-linear phase anomaly that exists in an A/D converter, will and does show up as frequency domain errors when signals are summed.  This is not a guess or theory on my part, but long-settled fact. 

So, I guess my diatribe here is about being intellectually rigorous when you try to do something about audiophile nonsense.  Don't simply substitute your own error for the one you're attacking.

As it happens, when James Johnston came over to debate ME about this at the Womb, he essentially agreed with about 98% of what I said, but for some nitpicks on incidentals.  He agrees with me.

While I'm here, I may as well gore one of your sacred cows, and get myself on the "enemies" list.  DBT.  What a wonderfully misunderstood thing she is!  Double-blind tests are a great, wonderful technique.  But employ them within a poorly-designed test, and you will have bad data with high confidence.  Simply having a double-blind test does not mean your results will mean ANYTHING.  This is another myth that needs debunking!

One more thing.  It is my firm and unyielding belief that if a signal can exist as a voltage within a system, it can be completely measured.  Therefore there will be no signal for which "my ears tell me something that the measurements can't".  Everything is measurable.  However, it is also my firm and unyielding belief that the measurements required to completely describe what the listener can hear, are a far different set than the ones that Ethan Winer is attempting to promulgate.

dwoz
Title: AES 2009 Audio Myths Workshop
Post by: stephanV on 2010-03-19 14:25:55
While I'm here, I may as well gore one of your sacred cows, and get myself on the "enemies" list.

Too bad that discussion of DBT has always been allowed here. You're not a vigilante just yet.

Quote
DBT.  What a wonderfully misunderstood thing she is!  Double-blind tests are a great, wonderful technique.  But employ them within a poorly-designed test, and you will have bad data with high confidence.  Simply having a double-blind test does not mean your results will mean ANYTHING.  This is another myth that needs debunking!

Eh? Who is actually arguing here the benefit of poorly designed tests? Or are you trying to set up a strawman?
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-19 14:51:45
While I'm here, I may as well gore one of your sacred cows, and get myself on the "enemies" list.

Too bad that discussion of DBT has always been allowed here. You're not a vigilante just yet.

Quote
DBT.  What a wonderfully misunderstood thing she is!  Double-blind tests are a great, wonderful technique.  But employ them within a poorly-designed test, and you will have bad data with high confidence.  Simply having a double-blind test does not mean your results will mean ANYTHING.  This is another myth that needs debunking!

Eh? Who is actually arguing here the benefit of poorly designed tests? Or are you trying to set up a strawman?



I prefer to call it a hypothetical that is commonly embodied in the real world.  You may call it a straw man, although a straw man argument usually has a nonsensical premise to it.  A straw man is a reductio ad absurdum, whereas my hypothetical has many real examples in the wild.

Anyway, it is quite common to find folks who have conducted DBT, where the basic premise of their experiment is wrong.  The DBT provides good, high confidence data, but if the assumptions around that data are bad, then the test result will be flawed.

It is very common to see inexperienced people trot out DBTs as if they irrevocably prove something in a definitive way.  Ethan would be one of these people.  Just the simple fact that the DBT tool was involved, becomes a kind of self-proving feature of the test, when that is NEVER an assumption you can make.

I'll give an example.  In Ethan Winer's video, he "debunks" the notion that phase matters in the reproduction of audio.  He plays a little clip for us, one of those late-'60s demo arrangements that were used to introduce "stereo" to the market.  The program material has bits and stuff flying all over the stereo field, with dramatic panning.  Ethan uses this "test" to demonstrate that we can't hear when phase anomalies are introduced.  If I were to redo his example as a proper DBT, using his program material, it would indeed show a lack of ability to discern the phase anomalies reliably.  From that test, he would (and does) declare that phase just isn't a problem.  But that's an example of a poor test with high confidence.  Why?  Because phase defects in that context manifest themselves primarily as spatial differences.  His test design, using a proper DBT, has produced the conclusion that phase doesn't matter.  But the REAL result of his test design is that the DBT definitively proves not that phase doesn't matter, but that program material with active panning will MASK any perceivable differences due to phase error!

So what he's REALLY proven is that phase doesn't matter if I have a lot of active panning going on.  He has improperly generalized his result.

This is an example of a DBT used to pump the bona fides of a flawed test and flawed result.

dwoz
Title: AES 2009 Audio Myths Workshop
Post by: stephanV on 2010-03-19 15:30:56
I prefer to call it a hypothetical that is commonly embodied in the real world.  You may call it a straw man, although a straw man argument usually has a nonsensical premise to it.  A straw man is a reductio ad absurdum, whereas my hypothetical has many real examples in the wild.

I think you need to look up the definition of a strawman.

Quote
Anyway, it is quite common to find folks who have conducted DBT, where the basic premise of their experiment is wrong.  The DBT provides good, high confidence data, but if the assumptions around that data are bad, then the test result will be flawed.

This is not a problem of DBT alone; it is a problem of science in general. It is also not an objection to the DBT protocol as such.

HA does not support the notion that, just because something is done via a DBT, the data is irrefutable.
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-19 16:01:26
I prefer to call it a hypothetical that is commonly embodied in the real world.  You may call it a straw man, although a straw man argument usually has a nonsensical premise to it.  A straw man is a reductio ad absurdum, whereas my hypothetical has many real examples in the wild.

I think you need to look up the definition of a strawman.

Quote
Anyway, it is quite common to find folks who have conducted DBT, where the basic premise of their experiment is wrong.  The DBT provides good, high confidence data, but if the assumptions around that data are bad, then the test result will be flawed.

This is not a problem of DBT alone; it is a problem of science in general. It is also not an objection to the DBT protocol as such.

HA does not support the notion that, just because something is done via a DBT, the data is irrefutable.



Just to be safe, I went to the Well of Knowledge, Wikipedia, and re-read what it has for "straw man".  It seems I'm guilty of loose paraphrasing.  Both of my statements about straw men are accurate as far as they go; they just don't represent a particularly expansive definition.  In fact, the wiki definition seems to have been written in honor of Ethan, with him as the exemplar of the phenomenon.

If anything, I have tried very hard to be MORE rigorous in exactly converging on Ethan's specific statements, and have assiduously avoided creating straw men in my debate with him.

It seems we are then in agreement about DBTs... I certainly don't have any objections to them or their validity, and am not implying that there is any.  My objection is to the deification of them that I see everywhere.

dwoz
Title: AES 2009 Audio Myths Workshop
Post by: 2Bdecided on 2010-03-19 16:37:33
My objection is to this deification of them that I see everywhere.
It's not that - it's that almost any testing without a DBT is pointless. It's a basic, fundamental pre-requisite. It doesn't guarantee that the test will be any good - but its absence usually guarantees the test is worthless.

Cheers,
David.
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-19 16:55:07
My objection is to this deification of them that I see everywhere.
It's not that - it's that almost any testing without a DBT is pointless. It's a basic, fundamental pre-requisite. It doesn't guarantee that the test will be any good - but its absence usually guarantees the test is worthless.

Cheers,
David.



Not sure I quite agree.

In many cases, absolutely, yes.

The fact that expectation bias CAN be a factor in no way demands that it is.  Let me rephrase: "it doesn't guarantee that the test will be any good, but its absence usually guarantees that the peer review is scathing."
Title: AES 2009 Audio Myths Workshop
Post by: 2Bdecided on 2010-03-19 17:08:03
My objection is to this deification of them that I see everywhere.
It's not that - it's that almost any testing without a DBT is pointless. It's a basic, fundamental pre-requisite. It doesn't guarantee that the test will be any good - but its absence usually guarantees the test is worthless.
Not sure I quite agree.

In many cases, absolutely, yes.

The fact that expectation bias CAN be a factor, in no way demands that it is.
Well, there's one way to make sure, isn't there?

If that's not worth doing, then the test obviously isn't worth doing.

More common is that people claim that "the difference is so obvious that there's no need to ABX" - sometimes true - but it also means that a 16-trial ABX test would be trivial to complete in less than a minute, so in this case the answer would be "stop wasting your time and ours and just get on with it!"
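For reference on why a 16-trial run is a meaningful ask, here is a quick sketch (my own aside, not from the post above): the chance of a purely guessing listener scoring at least k correct out of 16 is just a binomial tail at p = 0.5.

```python
from math import comb

# Sketch: p-value of an ABX score under the null hypothesis of guessing.
def abx_p_value(correct, trials=16):
    """P(at least `correct` right out of `trials` fair coin flips)."""
    return sum(comb(trials, i) for i in range(correct, trials + 1)) / 2 ** trials

for k in (9, 12, 14, 16):
    print(f"{k}/16 correct: p = {abx_p_value(k):.4g}")
```

So 12/16 lands under the usual 5% threshold while 9/16 proves nothing, which is why "just get on with it" is cheap to satisfy if the difference really is obvious.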

Quote
let me rephrase:  "it doesn't guarantee that the test will be any good, but its absence usually guarantees that the peer review is scathing."
Depends where you publish the test. Without a DBT, it wouldn't get published in a pharmaceutical journal, whereas the proudly proclaimed absence of an ABX test would probably cause it to be fawned over in Stereophile, for example.

Keep doing it here and you'll just get banned.

I suppose we'll grant an exception for tests where the listener claims there's no audible difference - not much point ABXing that - though it might help the listener to focus, and it sometimes helps the listener make "lucky" guesses that turn out to be based on something so subtle they thought they were imagining it. This is the exception rather than the rule, but it happens.

Cheers,
David.
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-19 17:47:56
I suppose we'll grant an exception for tests where the listener claims there's no audible difference  - not much point ABXing that -
Cheers,
David.


oh, boy...you just stepped on a nail.

ABX removes expectation bias, correct?  Well, what makes you think that "the listener claims there's no audible difference" isn't ITSELF an expectation bias?  In fact, very commonly, that IS Ethan Winer's expectation bias.  It would be important to use ABX where the respondent knew that there would be control samples, to eliminate negative expectation bias.

again, we wrap around to test design.
Title: AES 2009 Audio Myths Workshop
Post by: Tahnru on 2010-03-19 18:01:51
ABX removes expectation bias, correct?


Incorrect.  An ABX test removes the ability of expectation bias to generate a false positive result.
Title: AES 2009 Audio Myths Workshop
Post by: stephanV on 2010-03-19 18:07:53
oh, boy...you just stepped on a nail.

ABX removes expectation bias, correct?  Well, what makes you think that "listeners claims there's no audible difference" isn't ITSELF an expectation bias?  In fact, very commonly, that IS Ethan Winer's expectation bias.  It would be important to use ABX where the respondent knew that there would be control samples, to eliminate negative expectation bias.

An ABX test removes one form of expectation bias influencing the results, whereas for example a sighted test removes none. If you have a better proposal, please continue.

Quote from: mixerman
I realize the answer is somewhat subjective, but I promise you there's an objective purpose to asking the question.

I'll answer 'no'. Let's get to the purpose.
Title: AES 2009 Audio Myths Workshop
Post by: Notat on 2010-03-19 18:28:32
I suppose we'll grant an exception for tests where the listener claims there's no audible difference  - not much point ABXing that -

oh, boy...you just stepped on a nail.

Ouch! Seems to be a valid point. You can "fool" an ABX test by not listening and choosing answers at random. Maybe we need an ABCX test where C is something that is known to be marginally non-transparent to probe whether listeners are paying attention.
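As a sketch of that idea (the trial counts and the all-or-nothing screening rule are assumptions for illustration, not an established protocol):

```python
import random

def build_session(n_real=16, n_control=4):
    """Schedule for one listening session: real ABX trials mixed at
    random with hidden control trials, where X is compared against a
    'C' that is known to be marginally audible."""
    trials = ["real"] * n_real + ["control"] * n_control
    random.shuffle(trials)
    return trials

def screen(results):
    """results: list of (kind, correct) pairs from one session.
    An inattentive listener (answering at random) will miss some of
    the easy control trials, so the session is discarded instead of
    being counted as a negative result."""
    controls = [ok for kind, ok in results if kind == "control"]
    reals = [ok for kind, ok in results if kind == "real"]
    if sum(controls) < len(controls):   # missed a known-audible control
        return None                     # no verdict either way
    return sum(reals), len(reals)       # score on the real trials only
```

A session that fails the controls is simply thrown out rather than feeding the "differences are not audible" tally.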
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-19 18:31:13
ABX removes expectation bias, correct?


Incorrect.  An ABX test removes the ability of expectation bias to generate a false positive result.



I'm sorry...I fail to see where the specificity in your distinction matters?  Aren't we ALWAYS talking about results?
Title: AES 2009 Audio Myths Workshop
Post by: Tahnru on 2010-03-19 18:36:55
Ouch! Seems to be a valid point. You can "fool" an ABX test by not listening and choosing answers at random. Maybe we need an ABCX test where C is something that is known to be marginally non-transparent to probe whether listeners are paying attention.


This has the potential to generate a false negative result - not a problem.  See this old thread which I posted in for a discussion: http://www.hydrogenaudio.org/forums/index....st&p=401215 (http://www.hydrogenaudio.org/forums/index.php?showtopic=45432&view=findpost&p=401215)
Title: AES 2009 Audio Myths Workshop
Post by: Mixerman on 2010-03-19 18:46:27
ABX removes expectation bias, correct?


Incorrect.  An ABX test removes the ability of expectation bias to generate a false positive result.


Well now there's a distinction without a difference! My doesn't that nail smart.

Sure. Have it your way. If Ethan is convinced the differences in converter fidelity are "subtle" based on measurements, then why doesn't the expectation bias apply to him where listening confirmation is concerned? He never did an ABX to back up his claims (and he's made so many I don't know where to begin), and he actually claimed HE was the scientific control in his own listening test.

Here is the question I asked Ethan On Gearslutz: "When you did that experiment with the Soundblaster card, and you came to your conclusions (the new conclusion or the old one, doesn't matter), what exactly was your control?"

Here is Ethan's response: (http://www.gearslutz.com/board/5138955-post1191.html) "My control was simply my own assessment that the recorded playback sounded the same as the source."

You guys find that scientific? You guys think it's acceptable that Ethan is his own control in a test that isn't blind?

Personally, I have no problem with evaluating gear in that manner for personal use, but if one is going to at all times hold others up to an ABX standard, then one should hold themselves up to that very same standard. Oh, and their peers as well.

Quote from: mixerman
I realize the answer is somewhat subjective, but I promise you there's an objective purpose to asking the question.

I'll answer 'no'. Let's get to the purpose.


Let's not speak for everyone, now! I'll give it a day and see if anyone is willing to attempt to define a great mix.

Enjoy,

Mixerman
Title: AES 2009 Audio Myths Workshop
Post by: Tahnru on 2010-03-19 18:51:58
I'm sorry...I fail to see where the specificity in your distinction matters?  Aren't we ALWAYS talking about results?


If we're talking about only the results (and not what's going through the head of the person performing the test) then let's talk about the 4 potential results of ABX testing.  Please assume an audio comparison for the purposes of discussion.

1. True negative - An audible difference does not exist, and none was identified by the listener.
2. True positive - An audible difference exists, and was identified by the listener.
3. False negative - An audible difference exists, but the listener failed to identify it.
4. False positive - An audible difference does not exist, but the listener appears to have identified a difference.

Do you have any quibbles so far?
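Those four cases can be sketched as a toy simulation (the 12-of-16 passing criterion is an assumption here, roughly the conventional p < 0.05 under pure guessing; `abx_outcome` is a hypothetical helper, not anyone's actual test rig):

```python
import random

def abx_outcome(audible, p_correct, trials=16, threshold=12):
    """Classify one simulated ABX session.

    audible   -- ground truth: does an audible difference exist?
    p_correct -- listener's per-trial chance of identifying X
                 (0.5 means pure guessing, i.e. nothing heard)
    threshold -- correct answers needed to call the result positive
                 (12 of 16 happens only ~3.8% of the time under
                 pure guessing)
    """
    correct = sum(random.random() < p_correct for _ in range(trials))
    positive = correct >= threshold
    if audible:
        return "true positive" if positive else "false negative"
    return "false positive" if positive else "true negative"
```

Note the asymmetry this makes visible: when nothing is audible, a listener's per-trial accuracy is stuck at 0.5 no matter what they expect to hear, so expectation bias cannot manufacture a passing score - but a real, subtle difference can still slip through as a false negative.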
Title: AES 2009 Audio Myths Workshop
Post by: Porcus on 2010-03-19 19:06:11
ABX removes expectation bias, correct?


Incorrect.  An ABX test removes the ability of expectation bias to generate a false positive result.



I'm sorry...I fail to see where the specificity in your distinction matters?  Aren't we ALWAYS talking about results?




"False positive result" = measured a difference that isn't there. For example due to placebo.

"False negative result" = failed to measure a difference that is there.


Since the interesting concept of "difference" is "audible difference", we are not interested in the cases where there are differences which no-one can hear. A "false negative" analogy of placebo could be that the listeners know what two setups they are considering, and fail to notice differences because they believe that there are none.

Say, assume the listeners do not believe in cable differences, and you tell the listeners that the speaker cables are the only thing different; then they might very well fail to spot real "audible differences" -- i.e. differences they might otherwise have heard and identified correctly.



And before hitting reply, read the spoiler:
Did I imply that cables would make audible differences? No, but I do imply that different setups might sound different even if the tester fools the listeners by a damn lie.




Edit:
I need to type quicker than this, I guess ...
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-19 20:02:20
Ouch! Seems to be a valid point. You can "fool" an ABX test by not listening and choosing answers at random. Maybe we need an ABCX test where C is something that is known to be marginally non-transparent to probe whether listeners are paying attention.


This has the potential to generate a false negative result - not a problem.  See this old thread which I posted in for a discussion: http://www.hydrogenaudio.org/forums/index....st&p=401215 (http://www.hydrogenaudio.org/forums/index.php?showtopic=45432&view=findpost&p=401215)



Lacking controls for cohort competence would seem to me to be grounds to invalidate a negative assumption from the test, wouldn't it?  Particularly when the result is being expressed POSITIVELY, i.e. instead of "no respondents could positively identify...." the result is expressed as "differences are not audible"?
Title: AES 2009 Audio Myths Workshop
Post by: Axon on 2010-03-19 20:03:28
Ethan ... makes a conjecture about the required measurements to fully and completely describe the fidelity of audio.  According to him, there's four.
Can we split this thread, and have a discussion about that in a separate thread, here on HA, if anyone else is interested?
I guess no one was interested - which is a shame, because this current thread is becoming more pointless by the minute.

I would second your point except that I have not yet really taken the time to analyze what Ethan has said on the matter in detail. The "conjecture" thing is not new - I noticed it in an article he wrote (skeptic.com?).
Title: AES 2009 Audio Myths Workshop
Post by: shakey_snake on 2010-03-19 20:09:19
ABX removes expectation bias, correct?  Well, what makes you think that "listeners claims there's no audible difference" isn't ITSELF an expectation bias?  In fact, very commonly, that IS Ethan Winer's expectation bias.  It would be important to use ABX where the respondent knew that there would be control samples, to eliminate negative expectation bias.

again, we wrap around to test design.
It would really almost be silly for "difference believers" like (presumably) yourself to refuse the chance to volunteer as Ethan's test subjects at this point, wouldn't it?
Yet, I somehow don't think he'd have volunteers queuing up in his front yard, even if he posted fliers.

Instead, we're much more likely to find difference-believers huddled up in foreign corners of online activity singing battle-hymns against any form of blind or double-blind testing. That's more self-fulfilling prophecy in action than anything, is it not?
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-19 20:27:39
ABX removes expectation bias, correct?  Well, what makes you think that "listeners claims there's no audible difference" isn't ITSELF an expectation bias?  In fact, very commonly, that IS Ethan Winer's expectation bias.  It would be important to use ABX where the respondent knew that there would be control samples, to eliminate negative expectation bias.

again, we wrap around to test design.
It would really almost be silly for "difference believers" like (presumably) yourself to refuse the chance to volunteer as Ethan's test subjects at this point, wouldn't it?
Yet, I somehow don't think he'd have volunteers queuing up in his front yard, even if he posted fliers.

Instead, we're much more likely to find difference-believers huddled up in foreign corners of online activity singing battle-hymns against any form of blind or double-blind testing. That's more self-fulfilling prophecy in action than anything, is it not?


I wonder who you might be talking about?  I presume you're talking about me, but that doesn't sound like me. 

Being a test subject isn't silly.  Walking into a trap is.  At this point, Ethan is not interested in discovering if his conjecture is true or not, he's interested in avoiding proof that it isn't.  Doing a test designed by him is the same as Barack Obama walking into a GOP-sponsored roundtable on "how much does the Democrat President suck, and could he possibly suck more?"

Title: AES 2009 Audio Myths Workshop
Post by: Axon on 2010-03-19 20:30:23
Lacking controls for cohort competence would seem to me to be grounds to invalidate a negative assumption from the test, wouldn't it?  Particularly when the result is being expressed POSITIVELY, i.e. instead of "no respondents could positively identify...." the result is expressed as "differences are not audible"?

Sure - from a statistically valid viewpoint. I'm not sure this is a majority viewpoint on HA, but I think that all personal interpretation of statistically controlled listening tests requires some degree of stepping "out of the box" - outside of statistical science. All properly conducted listening tests have meaning - it's just that some meanings are easier to apply to one's situation than others. Negative results are harder to interpret, but to ignore them entirely is just nuts.

If I perform an ABX test myself, for my own benefit, using the best listening environment available to me, I would be extremely hard-pressed to reasonably dismiss the results of my tests as being invalid without invoking all kinds of ad-hoc claims about how blind testing screws things up. The success or failure of such a test can be almost directly interpreted in terms of my own listening abilities. That same ABX test means less to anybody else, and perhaps far less to some people.

The results involving a group of listeners are obviously open to additional discussion. Nobody's going to seriously claim, for instance, that the blind tests conducted during the development of DAB, resulting in claims of audible transparency in the 128-192k range (right?), are anything except laughable. And even if somebody comes up with a positive ABX result, there may be good reasons to dismiss the result and not worry about the effect under test, if one asserts that one's hearing is simply not good enough to matter. (In my case, I've ABX'd absolute polarity in the past, but ignored it after that, because of the insanely low level of effect and interference from transducers.)

When you have a large group of people tested to yield a negative result in what is essentially a well-run test - and this is the case with Meyer/Moran - you obviously cannot use the statistics to conclude that the effect is inaudible, but you also can't use the statistics to justify that anybody ought to care about it, either. But that's more or less the conclusion a lot of people (falsely) draw from such results: that because a hypothesis is statistically unproven (negative results after numerous controlled tests), anybody who believes in the hypothesis, using those results as justification, is nuts.

The madness of such a conclusion is a little easier to spot if, instead of high res, we were instead conducting an ABX test of two identical glasses of water that had previously been mixed. Nobody can logically conclude from the negative results of such a test that the water in both glasses was identical, and yet one's logic is not generally called into question for such a belief - that the negative test result is a significant piece of evidence in one's justification of why the glasses are identical.

On a more technical level, I believe that the proportion of discriminators in most controlled audio tests is usually vastly underestimated, and in fact, in many situations, is pretty close to 1.
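How weak that negative evidence can be is easy to put in numbers. A minimal sketch (the 16-trial, 12-correct criterion and the 60%-accurate listener are illustrative assumptions):

```python
from math import comb

def pass_probability(p_correct, trials=16, threshold=12):
    """Chance of scoring at least `threshold` correct out of `trials`
    when each trial is answered correctly with probability p_correct
    (a straight binomial tail sum)."""
    return sum(
        comb(trials, k) * p_correct**k * (1 - p_correct)**(trials - k)
        for k in range(threshold, trials + 1)
    )

alpha = pass_probability(0.5)  # a pure guesser passes ~3.8% of the time
power = pass_probability(0.6)  # a real-but-marginal discriminator still
                               # FAILS about 5 times in 6 (~0.17 power)
```

So a run of negative results from marginal discriminators is exactly what this test design predicts; on its own it cannot establish inaudibility.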
Title: AES 2009 Audio Myths Workshop
Post by: Tahnru on 2010-03-19 20:35:11
Lacking controls for cohort competence would seem to me to be grounds to invalidate a negative assumption from the test, wouldn't it?  Particularly when the result is being expressed POSITIVELY, i.e. instead of "no respondents could positively identify...." the result is expressed as "differences are not audible"?


I recommend reading on the null hypothesis concept.  If I get your meaning (and I'm not sure that I do) you appear to have this backwards.  I certainly did - looking at my previous post from the thread I linked I see that I described the distinction between hypothesis and result incorrectly.  Fortunately enough for me, I did include a link to a null hypothesis wiki article at the bottom (so I didn't look like a TOTAL nabob).

The null hypothesis of an ABX test assumes that no difference will be detected.  It is a successful result that carries with it positive wording.

Null hypothesis - "There exists no audible difference between the items being measured".  If a difference is identified, the null hypothesis is falsified (rendered false) and the opposite statement can be made.

Please help me out with your post above, because I don't follow you for these things:
1. What do you mean by "invalidate a negative assumption"?  Outside of the null hypothesis, there shouldn't be assumptions.
2. "Result is being expressed POSITIVELY" doesn't mesh well in my head with "differences are not audible" - the latter is a negative statement.
3. What precisely are you worried about with respect to "cohort competence"?  (since this is part of my 1st question, it may not be necessary to help me here - once I've seen that part of the discussion)
Title: AES 2009 Audio Myths Workshop
Post by: Ethan Winer on 2010-03-19 21:03:40
you'll see time and again claims that "Ethan was proven wrong" even though nobody actually proved anything of the sort.

Just since I wrote that yesterday, I see many instances of the same accusations without evidence, and of course plenty of completely wrong facts and disingenuous claims:

Ethan start to trash our place, not the opposite, and I can tell you he doesn't say everything about the reasons he has been banned.

Apparently neither do you. Please explain why I was banned from the Womb. Links to relevant posts are always welcome.

the title of the thread over at the womb, where Ethan Winer's now-famous conjecture about the measurability and audibility of audio and the transparency of audio equipment, is titled "Pathetic" ... That thread title was created and CHOSEN BY ETHAN HIMSELF.

More lies. Yes, I used that name for the second thread title, after you "pathetically" locked the first thread once it was clear you could not defend your claims.

Quote
BAD FORM, Ethan, for being intellectually and technically sloppy in your work.  What could be worse, than a mythbuster that simply substitutes his own myth for the one he's debunking? Ethan makes a classic and easy mistake in his assumptions [blah blah blah] he has been shown to be factually wrong on every single claim he's made.

More claims that Ethan is wrong with not one shred of proof.

In Ethan Winer's video, he "debunks" the notion that phase matters in the reproduction of audio.  He plays a little clip for us, which is one of those late-60s demo arrangements that were used to introduce "stereo" to the market.  The program material has bits and stuff flying all over the stereo field, with dramatic panning.

You are ignoring on purpose the solo cello example which played first, and ignoring the fact that both of those demos were to show that phase shift is audible while it's changing and when the shift is different left and right! The demo that shows phase shift is not audible starts at 49:33 into the video, and uses the percussion breakdown from my Tele-Vision video as a source.

Quote
So what he's REALLY proven is that phase doesn't matter if I have a lot of active panning going on.  He has improperly generalized his result.

What you have proven yet again is you are a master of straw man arguing because the examples you cite are not what you say they are.

I have tried very hard to be MORE rigorous in exactly converging on Ethan's specific statements, and have assiduously avoided creating straw men in my debate with him ... the wiki definition [of straw man] seems to have been written in honor of Ethan, with him as the exemplar of the phenomenon.

How can you claim to be "MORE rigorous" when you got it totally backward? My example that you said shows phase shift as inaudible was in fact the demo that shows that it is audible. And you claim I'm the model of straw man arguing.

You've allowed Ethan to claim we moderated him at the Womb because we somehow can't debate him, and here you are moderating me.

The defining difference is that my posts at the Womb address technical issues, or matters of logic, while your posts here seem to be mindless whining about personalities and authority about who knows how to mix an album. Can you really not see the difference?

Ethan is not interested in discovering if his conjecture is true or not, he's interested in avoiding proof that it isn't.

Translated: "Ethan is wrong but I'll be damned if I can show even one example."

Thanks dwoz and MM for making my points for me.

--Ethan
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-19 21:12:26

No, Ethan.    YOU selected that thread title.  I'll go pull server logs and give you the PROOF.

No, Ethan.  You use that section of the video to show that phase shift is ONLY AUDIBLE WHEN IT'S CHANGING, and specifically say it ISN'T when it stops changing.  YOU are taking that out of context, not me.  Oh, and you're wrong as well.  It is QUITE audible in the cello.

RE:Proof.  always, you say "you've not supplied a shred of proof!"  well....not in THIS THREAD.  I've written upwards of 50 pages of direct proof about your claims. 

Oh, and THIS POST OF YOURS is a straw man, by its very definition.  A classic example.  I've given plenty of examples.  I hardly think the readership here is interested in it, in this thread.  It exists, it passes peer review.

It's really too bad that you botched the job of debunking the audiophiles, because it's an important task.  Now all those idiots get to point out YOU as an example of why all the ABX'ers are missing the point (which they're not).
Title: AES 2009 Audio Myths Workshop
Post by: Ethan Winer on 2010-03-19 21:28:14
I've written upwards of 50 pages of direct proof about your claims.

Excellent, please pare it down to one or two sentences per "Ethan error," and post here for all to comment on.

As for phase shift, you need to review that section of my video. You clearly misunderstood what's being demo'd.

--Ethan
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-19 21:45:31
I've written upwards of 50 pages of direct proof about your claims.

Excellent, please pare it down to one or two sentences per "Ethan error," and post here for all to comment on.

As for phase shift, you need to review that section of my video. You clearly misunderstood what's being demo'd.

--Ethan



perhaps I can invite you to do so?  I believe I did do a digest post or two...maybe I can dig them out of the lists.


The part that concerns me, is that one doesn't even have to go to tests, to disprove your conjectures.  They fall on general theory.  The other thing that concerns me, is that the debunking only required someone with my level of expertise, which is not terribly high.  i.e. if EVEN I can debunk you, then how will you fare against a REAL opponent?  You'll be cut to shreds and fed to the dogs (or audiophiles, those with a taste for gristle). 

What I would actually like to see, is for you to take my points, and offer reasoned, topical rebuttals that show exactly where my position is incorrect.  Not simply "you're wrong, show proof"...  How does Albert Einstein rebut high school senior Albert Feldstein, who thinks he's disproved general relativity?  By deconstructing the challenge in a step-by-step rebuttal.  You have NEVER done that, not once.  How do you support your position, besides saying "I'm just correct, trust me."?

And let's please not let the debate turn into a discussion about the debate?  That's also a place that I have NO INTEREST in going any longer.

dwoz
Title: AES 2009 Audio Myths Workshop
Post by: andy o on 2010-03-19 23:09:56
No, Ethan.  You use that section of the video to show that phase shift is ONLY AUDIBLE WHEN IT'S CHANGING, and specifically say it ISN'T when it stops changing.  YOU are taking that out of context, not me.  Oh, and you're wrong as well.  It is QUITE audible in the cello.
I took it as saying that phase change was audible only when it varies in one of the two channels, but not when it varies uniformly.

Quote
Oh, and THIS POST OF YOURS is a straw man, by its very definition.  A classic example.  I've given plenty of examples.  I hardly think the readership here is interested in it, in this thread.  It exists, it passes peer review.

I'm not sure that you have fully grasped what a straw man argument is.
Title: AES 2009 Audio Myths Workshop
Post by: Notat on 2010-03-19 23:15:30
This has the potential to generate a false negative result - not a problem.  See this old thread which I posted in for a discussion: http://www.hydrogenaudio.org/forums/index....st&p=401215 (http://www.hydrogenaudio.org/forums/index.php?showtopic=45432&view=findpost&p=401215)

This seems to say that ABX can't be used to demonstrate transparency. The two possible outcomes for ABX are:
  • "There is supposed to exist reasonable evidence to support the idea that a difference can be noticed."
  • "This test failed to provide evidence that an audible difference existed."
The latter should not be confused with "There is supposed to exist reasonable evidence to support the idea that a difference cannot be noticed."

So where are we? If audiophiles could just get a repeatable positive result on an ABX, they'd win outright (or more likely we'd move on to arguments about what differences are significant). But one should bear in mind that a strict reading of the "...failed to provide evidence..." outcome, no matter how many times it is reached, does not help the objectivist. With so much to gain and nothing to lose, maybe audiophiles can learn to love the ABX.
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-19 23:35:36
No, Ethan.  You use that section of the video to show that phase shift is ONLY AUDIBLE WHEN IT'S CHANGING, and specifically say it ISN'T when it stops changing.  YOU are taking that out of context, not me.  Oh, and you're wrong as well.  It is QUITE audible in the cello.
I took it as saying that phase change was audible only when it varies in one of the two channels, but not when it varies uniformly.

Quote
Oh, and THIS POST OF YOURS is a straw man, by its very definition.  A classic example.  I've given plenty of examples.  I hardly think the readership here is interested in it, in this thread.  It exists, it passes peer review.

I'm not sure that you have fully grasped what a straw man argument is.



No, I have to disagree.  He said very specifically, to listen as the phase is changed, you can hear the familiar "phaser" effect as it moves...but when it stops CHANGING, and is in a steady-state at some other phase value, you can't hear it.  Now, I just paraphrased that, because I don't want to go back and dig up the video AGAIN. Now...if you CAN'T hear the phasing sound as it changes, you really need to be in the bleachers instead of at center court...but what I found interesting was that I could clearly hear that the sound had fundamentally changed from where it had been.  Ethan then states that the sound has not changed, and so phase CHANGE is audible but phase DIFFERENCE is not.  I beg to differ.  Definitely, the frequency/amplitude spectrum of the sound has not changed, but the spatial characteristics are wildly different.

Now, that's to me, someone who needs to pay attention to things like how much room in a soundstage a particular instrument takes up, because by gosh I have to fit 12 more in there SOMEWHERE.

To the casual listener, it isn't a salient difference.


Where mixerman was trying to go, was to introduce the concept of THE ARTIST'S STAKE.

When I listen to a bass track from Geddy or Vic or Abe...It sounds really GOOD to me, because I have no frame of reference.  When I listen to a bass part by DWOZ, well, it can have extremely subtle differences and be strikingly different.  Because I have a stake.  I played those notes.  I KNOW what my time sense is.  I know WHERE I played those notes, in time.  Slide that part by 3 ms, and it sounds completely different to me.  (I've had exactly that experience).

Change the start time of a part 3ms, that has 42 Hz tones in it?  and he HEARS that?  damn straight I do.  On my part.  NOT on Geddy's.

You folks all call yourselves "objectivists" in the audio world, so as to differentiate yourselves from the audiophiles, the "subjectivists."

Well, the same thing happens in the Artist/Consumer relationship.  The artist has INTENT.  the consumer/listener....not so much.  The consumer has no real context to evaluate a musical passage...so they imprint their own, and basically ANY audio will suffice.  The artist works with intent, and when that intent isn't met, the problem, though so subtle as to be invisible to the casual observer...can result in a COMPLETELY unusable product.

and I am not talking about euphonic distortion here.

I am talking about whether the music production system has translated the INTENT of the artist, into the reproduction system.

If it has, then the artist can "like" the system.  IF it hasn't, then the artist can often tell you quite exactly why that isn't the case.

It means that CONTEXT MATTERS for the relevance and audibility of imperfections in the gear, in the system. 


An audiophile is the LAST person to notice such things.  To an audiophile, they can listen to static or white noise, and derive just as much enjoyment as listening to music...because they're not listening to music.  They're listening to music reproduction equipment.

So anyway...the point is that it is a fundamental mistake to discount CONTEXT when you talk about this stuff.  You have to know which side of the glass you're talking about.  Audiophile tomfoolery is hell-and-gone from the kind of situations you encounter, the kinds of situational effects you deal with, on the music production side.

In music REproduction, you are almost never dealing with summed signals.  In music PROduction, you are ALWAYS dealing with summed signals.  The two different systems are....well....different.

It is a fundamental mistake to conflate the two.

dwoz
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-19 23:42:21
This has the potential to generate a false negative result - not a problem.  See this old thread which I posted in for a discussion: http://www.hydrogenaudio.org/forums/index....st&p=401215 (http://www.hydrogenaudio.org/forums/index.php?showtopic=45432&view=findpost&p=401215)

This seems to say that ABX can't be used to demonstrate transparency. The two possible outcomes for ABX are:
  • "There is supposed to exist reasonable evidence to support the idea that a difference can be noticed."
  • "This test failed to provide evidence that an audible difference existed."
The latter should not be confused with "There is supposed to exist reasonable evidence to support the idea that a difference cannot be noticed."

So where are we? If audiophiles could just get a repeatable positive result on an ABX, they'd win outright (or more likely we'd move on to arguments about what differences are significant). But one should bear in mind that a strict reading of the "...failed to provide evidence..." outcome, no matter how many times it is reached, does not help the objectivist. With so much to gain and nothing to lose, maybe audiophiles can learn to love the ABX.



This is my position as well.  In the great debates about audibility, if EVEN ONE respondent can reliably identify the differences, then the point is lost, as far as absolute fact is concerned.  Then, it becomes a haggle over what level of "golden ear" you have to be.  But my previous post discusses one reason that this may be significant.
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-20 02:01:19
The part that concerns me, is that one doesn't even have to go to tests, to disprove your conjectures.  They fall on general theory.



This looks for all the world like a totally vague, unsubstantiated claim.

I'd like to see this person who hides behind the Dwoz nym actually stop walking on the ceiling, find just one thing that was actually said in the video, not some paraphrase, and let's take this puppy apart.
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-20 02:19:44
In music REproduction, you are almost never dealing with summed signals.  In music PROduction, you are ALWAYS dealing with summed signals.  The two different systems are....well....different.

It is a fundamental mistake to conflate the two.


Help me here. I can't think of any situation involving music reproduction where we aren't dealing with a summed signal.  Can you give me an example?
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-20 02:26:43
Quote from: dwoz
Ethan ... makes a conjecture about the required measurements to fully and completely describe the fidelity of audio. According to him, there's four.



This is what I think is an important issue that was a few posts back from the end of the thread that this thread was split from.

Would it be possible for Ethan to repost what he originally said about this issue?
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-20 02:52:40
The part that concerns me, is that one doesn't even have to go to tests, to disprove your conjectures.  They fall on general theory.



This looks for all the world like a totally vague, unsubstantiated claim.

I'd like to see this person who hides behind the Dwoz nym actually stop walking on the ceiling, find just one thing that was actually said in the video, not some paraphrase, and let's take this puppy apart.


What on earth does that mean?  You callin' me out, girlfriend?

 


Arny, I've been reading your spouting nonsense on the internet pipes since the old days of rec.audio.pro...this is not our first meeting, by far.  Since back before you and Tommy Nousaine used to scratch each other's eyeballs out with sharpened faux fingernails...


Ok, let's start, shall we?  This will have "math" in it, so go find your kid and ask for his help.

Ethan "debunks" the myth that recording many different tracks through the same components will cause a "stacking up" of that component's sonic characteristic.  He says, that whatever it imparts to the individual tracks, can be COMPLETELY compensated by applying an inverse effect ONCE to the master summing buss.

This is wrong.  The effect is not a myth.  I debunked his debunk.  I felt that it would be a disservice to the kids coming up if they picked up this nonsense and started re-spewing it.

Here's what is wrong:  In order to completely compensate for the effect, he suggests that you can simply apply the inverse to the sum of all tracks.  In MATH, that implies that the transitive property applies to the sum.  In other words, if f(x) is the transfer function of the component:

~f(f(a) + f(b) + f(c)) = (~f(f(a)) + ~f(f(b)) + ~f(f(c))).

in actuality, this statement can't be made.

In real life, f(x) is not linear.  it has a linear component, and a non-linear component:

f(x) = f.lin(x) + f.non(x)

the transitive property ONLY WORKS if f(x) is a linear function.

thus, the non-linear component, f.non(x) WILL "STACK" and WILL NOT BE REVERSED with the application of an inverse function to the sum.

Ethan will now come in here and say "what bull...he didn't DEFINE f(x)."  The rebuttal is, it is true for ANY non-linear f(x).

Therefore, the MATH decisively shows that stacking of non-linearities WILL OCCUR and CANNOT be compensated for at the mix buss.

This is simple math.  I didn't invent it.  glad to help you.

Not a "vague, unsubstantiated claim".  Simple math.  No need to progress to listening tests.

Not to put too fine a point on it, but reading the advice you give to people around here, I'm not concerned about being refuted, though I'd LOVE to be proved wrong.
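For what it's worth, the algebra above can be checked numerically. A minimal sketch (the cubic transfer function below is hypothetical, chosen only because it has an obvious linear part and non-linear part) compares applying the exact inverse of f once to the summed buss against the ideal unprocessed sum, and contrasts that with a purely linear "component", which CAN be undone at the buss exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c = (rng.standard_normal(10_000) * 0.5 for _ in range(3))

# Hypothetical component: a linear part plus a small cubic (non-linear) part.
def f(x):
    return x + 0.05 * x**3            # f(x) = f.lin(x) + f.non(x)

def f_inv(y):
    # Exact inverse of f, found by Newton iteration (f is monotonic here).
    x = y.copy()
    for _ in range(60):
        x -= (f(x) - y) / (1.0 + 0.15 * x**2)
    return x

ideal = a + b + c                      # what perfect compensation would give
corrected = f_inv(f(a) + f(b) + f(c))  # one inverse applied to the summed buss
nonlinear_residual = np.max(np.abs(corrected - ideal))

# Contrast: a purely linear "component" (a ~3 dB cut) undoes exactly at the buss.
g = lambda x: 0.7 * x
linear_residual = np.max(np.abs((g(a) + g(b) + g(c)) / 0.7 - ideal))

print(nonlinear_residual)              # clearly non-zero
print(linear_residual)                 # essentially zero (round-off only)
```

The non-linear residual is real, which is the point being argued here; whether it is ever *audible* at real-world distortion levels is the separate question addressed later in the thread.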
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-20 02:57:26
In music REproduction, you are almost never dealing with summed signals.  In music PROduction, you are ALWAYS dealing with summed signals.  The two different systems are....well....different.

It is a fundamental mistake to conflate the two.


Help me here. I can't think of any situation involving music reproduction where we aren't dealing with a summed signal.  Can you give me an example?



Easy.  Put a CD in your favorite CD player, turn up the volume, and sit in your chair.  You are now listening to two discrete reproduced signals coming out of two speakers.  No electrical summing whatsoever.

That describes the music REPRODUCTION system, as contrasted with the music PRODUCTION system.

Never mind live music rigs...that's so far hell-and-gone from any kind of fidelity, it isn't worth wasting breath on.
Title: AES 2009 Audio Myths Workshop
Post by: googlebot on 2010-03-20 03:27:46
~f(f(a) + f(b) + f(c)) = (~f(f(a)) + ~f(f(b)) + ~f(f(c))).

in actuality, this statement can't be made.

In real life, f(x) is not linear.  it has a linear component, and a non-linear component:

f(x) = f.lin(x) + f.non(x)

the transitive property ONLY WORKS if f(x) is a linear function.

thus, the non-linear component, f.non(x) WILL "STACK" and WILL NOT BE REVERSED with the application of an inverse function to the sum.


The intransitivity of f.non(x) does not imply that there isn't a function g(f(a)+f(b)+f(c)) = ~(f(a)+f(b)+f(c)). If you want to show off high school math skills, do it right. All I have seen from you since you registered here is mis-quoting, chest thumping, and half-cooked knowledge. Despite your frantic frequency of posting, you couldn't yet make a single conclusive point against Ethan. This wouldn't be so bad if you had the mental capacity to actually read and relate your own words to others'. But obviously your skills are somewhat binary, in the sense that your brain just attaches a "correct" tag to anything coming from inside, which seems to prevent any further processing.

Ethan "debunks" the myth that recording many different tracks through the same components will cause a "stacking up" of that component's sonic characteristic.  He says, that whatever it imparts to the individual tracks, can be COMPLETELY compensated by applying an inverse effect ONCE to the master summing buss.


You must have another version of the film playing in your head. Could you quote the position where Ethan (or anyone else) says "COMPLETELY", by the way? I think that's not too much to ask for something you are so sure about that you cite it in caps.
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-20 04:57:15
~f(f(a) + f(b) + f(c)) = (~f(f(a)) + ~f(f(b)) + ~f(f(c))).

in actuality, this statement can't be made.

In real life, f(x) is not linear.  it has a linear component, and a non-linear component:

f(x) = f.lin(x) + f.non(x)

the transitive property ONLY WORKS if f(x) is a linear function.

thus, the non-linear component, f.non(x) WILL "STACK" and WILL NOT BE REVERSED with the application of an inverse function to the sum.


The intransitivity of f.non(x) does not imply that there isn't a function g(f(a)+f(b)+f(c)) = ~(f(a)+f(b)+f(c)). If you want to show off high school math skills, do it right. All I have seen from you since you registered here is mis-quoting, chest thumping, and half-cooked knowledge. Despite your frantic frequency of posting, you couldn't yet make a single conclusive point against Ethan. This wouldn't be so bad if you had the mental capacity to actually read and relate your own words to others'. But obviously your skills are somewhat binary, in the sense that your brain just attaches a "correct" tag to anything coming from inside, which seems to prevent any further processing.

Ethan "debunks" the myth that recording many different tracks through the same components will cause a "stacking up" of that component's sonic characteristic.  He says, that whatever it imparts to the individual tracks, can be COMPLETELY compensated by applying an inverse effect ONCE to the master summing buss.


You must have another version of the film playing in your head. Could you quote the position where Ethan (or anyone else) says "COMPLETELY", by the way? I think that's not too much to ask for something you are so sure about that you cite it in caps.


High school math?  Quite true.  Then why do you have it wrong?  OBVIOUSLY there is a function g(x) as you describe.  It just isn't ~f(x).  That's reading for comprehension.  It's also quite rude.  You haven't even seen the video, have you?  Clearly not, because Ethan does indeed use the word "completely".  Unless, of course, he's edited it since.

Look, if you guys want, go ahead and carry this guy around on your shoulders.  Good on ya.  Everyone's got to have a hero.  Everyone can't be on the winning team, after all. 

Good luck, and godspeed.
Title: AES 2009 Audio Myths Workshop
Post by: krabapple on 2010-03-20 06:21:26
Well since my last post was moved to a read only section as a "baiting" post, which it wasn't intended as, then let me rephrase the question:

How many of you here in this thread, discussing the debate we've had with Ethan, have actually mixed a full album?

Anyone?

Mixerman


Being a good audio mixer doesn't immunize you from the psychological biases that make blind testing and objective data requirements for scientifically verifying claims of audible difference.

Liked your book, btw.
Title: AES 2009 Audio Myths Workshop
Post by: krabapple on 2010-03-20 06:36:35
Lacking controls for cohort competence would seem to me to be grounds to invalidate a negative assumption from the test, wouldn't it?  Particularly when the result is being expressed POSITIVELY, i.e. instead of "no respondents could positively identify...." the result is expressed as "differences are not audible"?


First, the result of an ABX should not be expressed that way in a report using scientific language.  The result is properly expressed as something like, 'audible difference was not supported (at the level of statistical significance chosen)'.

*If* a subject says he hears a difference in sighted trials, and also during the ABX test itself, but the ABX results do not support that claim, it would seem prima facie that the test itself was not obscuring the subjective perception of difference, and that the ABX is sufficient to demonstrate whether or not the difference heard during the ABX was likely to be 'real'.  However, positive controls are an excellent practice and full rigor requires employing them.  Certainly if the subject *claims* to hear no difference (or claims to have trouble hearing a difference) during the ABX, a positive control must be introduced to establish at what level difference CAN be heard.
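The "level of statistical significance chosen" has a concrete form: under the null hypothesis the subject is guessing, so each ABX trial is a fair coin flip and the p-value is a binomial tail probability. A short sketch (standard binomial arithmetic, not tied to any particular test discussed here):

```python
from math import comb

def abx_p_value(correct, trials):
    """Probability of getting at least `correct` answers right in `trials`
    ABX trials by pure guessing (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2**trials

# The oft-quoted 12-of-16 criterion sits just under the usual 0.05 level:
print(round(abx_p_value(12, 16), 4))   # 0.0384
```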
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-20 10:44:56
Quote
Ok, let's start, shall we?  This will have "math" in it, so go find your kid and ask for his help.
Which kid should I ask, Dwoz? The boy with a PhD, the girl with a PhD, or the dumb one with just a degree in Chemical engineering and a MBA? Oh, and a dual major in Environmental Engineering. Which one should I ask Dwoz? Do you even have any kids who graduated from High School? Did you ever have a stable relationship that lasted long enough so that you raised any kids at all? Does sex the way you like it even ever make kids? ;-)
As somebody who has no interest in these threads I am more than happy to just close it, if people can't play nice.  There's a lot of petty name calling that I'm willing to ignore, but this has gone past that.

Play nice, or don't play at all. I really don't care either way.


Dwoz apparently thinks he can scare people away with tough talk.

I'm trying to return things to our regular programming.

I finally have some *original words of Ethan* posted.

Anyway, it appears that these are among the statements that Ethan *actually made* and should be defending. Note that they are stated in such a way that he's debunking the following misapprehensions:

* That dither is audible on typical program material recorded at sensible levels.

* That jitter is ever audible in non-broken gear.

* That a response past 20 KHz is ever needed.

* That blind testing is not valid.

* That static (non-changing) phase shift in usual amounts is ever audible when the amount of shift is the same left and right.

* That more than four parameters are needed to describe everything that affects audio reproduction.

* That different gear that specs "transparently" as defined in my video has a sound, or sounds different than other transparent gear.

* That prosumer level sound cards cannot achieve professional sounding results.

* That audible stacking occurs with properly functioning gear.

End of list of misapprehensions that Ethan is trying to correct.


Here's what Ethan said on The Womb about ADAT versus analog tape:

"I had dinner the other night with a good friend who is a fairly well known recording engineer with a wall full of gold records in his studio lobby. He told me that when ADATs came out - the original black-face model - he did a comparison of analog 2-inch versus ADAT, and the ADAT won handily because it preserved drum transients much better. That matches my experience, but I asked him to repeat what he said anyway to be sure I didn't misunderstand. I figured it would come up here eventually. I won't say who he is here without his permission, though I can't imagine he'd really mind. If you're reading this Peter you are welcome to pipe up to clarify."

So far, I see more than a few differences between what Ethan is purported to have said, and what he actually said. Some similarities as well.
Title: AES 2009 Audio Myths Workshop
Post by: googlebot on 2010-03-20 14:03:24
High school math?  quite true.  Then why do you have it wrong?  OBVIOUSLY there is a function g(x) as you describe.  It just isn't ~f(x).  That's reading for comprehension.


So, if there "OBVIOUSLY" is an inverse function, how is your math example in any way applicable to the matter at hand? Ethan did not claim anything about the mathematical nature of inverse filtering. Your transitivity example just disproves the part which you made up yourself and falsely put into Ethan's mouth.

1.) The existence of g(x) proves that transitivity, which you put on the table, doesn't matter to the question of whether a series f(a)...f(b)...f(c) can be reversed in a single step.
2.) Ethan did not claim transitivity of f(x).

So it seems that the function of your math example was plainly rhetorical, and it even failed at that.
Title: AES 2009 Audio Myths Workshop
Post by: Ethan Winer on 2010-03-20 17:29:17
one doesn't even have to go to tests, to disprove your conjectures.

Yet another post saying "Ethan is wrong" without evidence and without saying what is right.

He said very specifically, to listen as the phase is changed, you can hear the familiar "phaser" effect as it moves...but when it stops CHANGING, and is in a steady-state at some other phase value, you can't hear it.

You really need to watch my video again. The "phaser" effect is comb filtering, and that is audible whether it's static or changing.

Am I the only person who has noticed that not one of the nay-sayers has presented a single audio example to prove their point? All they do is try to tear down all of the examples in my video, and call me wrong, but never once have they shown their own example and said what's right.

Dwoz, please post some audio files showing that dither is audible on pop music recorded at sensible levels. Please post an example showing when jitter is audible. Please show us that phase shift can be heard. Please prove with an audio file that stacking is not a myth. And so forth.

If you can't do that, then perhaps you need to change your opinions.

--Ethan
Title: AES 2009 Audio Myths Workshop
Post by: Ethan Winer on 2010-03-20 17:37:04
I'd like to see this person who hides behind the Dwoz nym actually stop walking on the ceiling, find just one thing that was actually said in the video, not some paraprhase, and let's take this puppy apart.

Indeed. Attacking people while hiding behind an anonymous screen name is the height of dishonesty.

What I don't understand is why dwoz and Mixerman and malice are so interested in discussing this everywhere they can. If they think I'm a crackpot, why don't they just ignore me instead of "promoting" me with four threads on their forum and even a custom graphic? Not content with more than one thousand posts in their own forum, they also post about my video at Gearslutz and now here too. I've been emailing Mixerman trying to arrange a "truce" whereby he'd let me post freely in his forum. It's not getting anywhere lately. Here's my last question to him, which he seems unable to answer:
[blockquote]"If you think I'm incompetent and disingenuous and giving out bad
advice, why do you even want me posting in your forum at all?"[/blockquote]
--Ethan
Title: AES 2009 Audio Myths Workshop
Post by: googlebot on 2010-03-20 18:30:40
It's really sad to see such a nice guy on an honorable mission getting mobbed by anonymous lunatics. You don't deserve that. Why are you doing it at all, if I may ask? Why don't you stick to more level-headed ground such as Hydrogenaudio?

I'm also hiding cowardly behind an anonymous nickname. But for me it makes sense. With billions of people out there, many of them at the edge of sanity, I just don't want to risk my real name getting dragged through the mire by some random anonymous idiot. Not everyone googling my name will find the time to read several pages of arguments until I can shut off a jerk. And I don't even want to be in a position where I would have to do that, to keep my name clear.
Title: AES 2009 Audio Myths Workshop
Post by: Ethan Winer on 2010-03-20 20:33:48
It's really sad, to see such a nice guy on a honorable mission getting mobbed by anonymous lunatics. You don't deserve that. Why are you doing it at all, if I may ask? Why don't you stick to more level-headed grounds as Hydrogenaudio?

Yes, it's sad and pathetic even. Especially the anonymous part. They need to grow a pair and use their real names, and be responsible for their opinions.

The reason I made this video, and write many articles (http://www.ethanwiner.com/articles.html#Audio%20Magic), and post on many forums, is very simple - to educate. Consumerism is equally important to me. Guys like these from the Womb, and Audio Asylum, and Stereophile's forum will not be swayed no matter how compelling the evidence. As we see here. But I don't do this to convince them because that's futile. Rather, I write for the scores of people who every day ask in forums what preamp or converter etc they should buy to take their projects to the next level.

After years of non-science and even anti-science BS in audio magazines (home recording magazines too, not just audiophoole rags), many home recordists wrongly believe they'll never get pro results until they invest large sums into expensive and boutique outboard gear. This simply is not true. I'm not necessarily suggesting that all people need is a $25 SoundBlaster, but they certainly don't need to spend $1,000 per channel on preamps and converters to get outstanding quality. Same for hi-fi types who genuinely want to know if they need to spend large sums on cables etc.

--Ethan
Title: AES 2009 Audio Myths Workshop
Post by: googlebot on 2010-03-21 00:38:53
Since it's hard to cite where something has not been said, please start by telling us which position in the video you are talking about. I don't see much sense in taking your "arguments" seriously, since they attack something that hasn't even been claimed by Ethan. You can easily refute this by giving the position in the video you are talking about. Otherwise this discussion doesn't make much sense.
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-21 00:45:53
2.) Ethan did not claim transitivity of f(x).

So, it seems to be the case, that the function of your math example was plainly rhetorical and even failed at that.


But he did!

He said, and please don't shoot me for NOT taking the time to fetch that video and watch it AGAIN...that "if there WAS a buildup or stacking effect from a component, then you could simply apply an inverse of the component's effect to the main mix buss (2-buss), and remove ALL of it."

Again, that was a paraphrase, but this was his exact meaning and intent.  This implies the math statement I made up-thread.  If you don't agree that this implies a transitive application, then I invite you to supply what you think the equation from the above sentence SHOULD be.  I'm happy to discuss the merits of it.

dwoz
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-21 00:52:44
Did you have anything to say about the math itself, Arny?  Did you see where I was going, where you'd have to be able to apply the inverse function to the summed signal, and have that be the exact same thing as applying it to the source tracks individually?  That's how the mathematical "sentence" is constructed from Ethan's dialog. I wonder if you felt that was rigorous or not, and what I'd have to do to improve it?


I've been recording a music festival for the last 2 days, out of town. There was a hot spot on premises but it had the firewall from #&!!, and wouldn't let me access any forums.

But, I'm back. ;-)

As far as the math goes, I thought that the question had been answered a few times already, and very well. Mr. Dwoz, you have already been had by far better than I. But you apparently can't understand what those others said, or you would have already slunk out of here with your tail between your legs.  You seem to want me to put my spin on it. So, here we go, yet another nail in your coffin!

Yes Dwoz, in high school you learned what you posted. Unfortunately, you apparently didn't take any university engineering courses that cover the same topic and add a more practical real-world perspective.

Let me quote you exactly Dwoz:

Quote from: dwoz link=msg=0 date=
Here's what is wrong: In order to completely compensate for the effect, he suggests that you can simply apply the inverse to the sum of all tracks. In MATH, that implies that the transitive property applies to the sum. in other words, if f(x) is the transfer function of the component:

~f(f(a) + f(b) + f(c)) = (~f(f(a)) + ~f(f(b)) + ~f(f(c))).

in actuality, this statement can't be made.

In real life, f(x) is not linear. it has a linear component, and a non-linear component:

f(x) = f.lin(x) + f.non(x)

the transitive property ONLY WORKS if f(x) is a linear function


Now I'll quote the statement that Ethan made that you mistakenly think you proved false:

Quote from: ethan link=msg=0 date=
* That audible stacking occurs with properly functioning gear.


Now I'll focus on what you said that is irrelevant to what Ethan said:

Quote from: dwoz link=msg=0 date=
In real life, f(x) is not linear. it has a linear component, and a non-linear component:

f(x) = f.lin(x) + f.non(x)


In Ethan's "properly functioning" gear,  your equation

Quote from: dwoz link=msg=0 date=
f(x) = f.lin(x) + f.non(x)


Can be filled in with some real world numbers for properly functioning gear.

f(x) = 1.0 * f.lin(x) + k * f.non(x),  where k < 0.0001

IOW, the properly functioning good-quality mixers that most of us use have less than 0.01% of any and all kinds of nonlinear distortion.  In this day and age, even SoundBlaster cards and cheap Behringer mixers have less than 0.01% THD.

My 02R96 is actually theoretically and practically distortionless as long as I don't operate  it outside the digital domain. ;-)

Therefore it is safe to ignore the tiny real world nonlinearity that you have staked your argument on.  Ethan put in the necessary hedge words to include that tiny amount of nonlinearity that it is safe to ignore.

So, the real world version of your equation reduces to: f(x) = f.lin(x).
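That "safe to ignore" step can be put in rough numbers. A sketch (all figures hypothetical: 24 tracks of noise-like signal, and a cubic nonlinearity scaled so a full-scale sine would show roughly 0.01% THD):

```python
import numpy as np

rng = np.random.default_rng(1)
tracks = rng.standard_normal((24, 48_000)) * 0.2   # 24 hypothetical tracks

eps = 4e-4                       # cubic term sized for ~0.01% THD at full scale
f = lambda x: x + eps * x**3     # the "component": linear part plus tiny f.non

mixed = f(tracks).sum(axis=0)    # every track colored by the same component
ideal = tracks.sum(axis=0)       # the same mix with no nonlinearity at all

# The total "stacked" nonlinear residual, relative to the mix, in dB:
err_db = 20 * np.log10(np.linalg.norm(mixed - ideal) / np.linalg.norm(ideal))
print(err_db)                    # on the order of -84 dB
```

So the stacking is mathematically there, exactly as the algebra upthread says, but at properly-functioning-gear distortion levels the summed residual sits tens of dB below the mix itself.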

Therefore Dwoz, you didn't find an error in what Ethan wrote. You changed what Ethan wrote into something of your own creation by dropping out some key words, and then used a theoretical argument that is irrelevant to the real world of 2010.

Quote from: ethan link=msg=0 date=
* That audible stacking occurs with properly functioning gear.


The key words that Dwoz left out are:

audible
properly
functioning

IOW, about half of what Ethan said.

There's a reason you don't post under your real name, "Dwoz" - when you get caught short like this it doesn't have any consequences. If you trash the rep of the nym Dwoz enough with dumb mistakes like these, you just pop up with a new nym.
Post by: dwoz on 2010-03-21 01:13:27
No...I've never felt the need to use sock puppets.  The only time I ever use a sock puppet user account, is to test my own forum, so I can see what a regular user sees and test user and group permissions.  But I never post with that handle.

The thing is, it's ALL ABOUT those "key words".  When you make the declarations that Ethan's made, there is a perceived "truth" there.  People hear the absolute statements about things like "stacking".  They MISS the quick "weasel words" that equivocate the point. 

Yes, "weasel words" is a pejorative. They are exactly the kinds of words that the audiophile crazies use.  They must be avoided like bubonic plague.

So, Ethan makes his point, then throws the red meat back into the trash, by uttering equivocation about "for all intents and purposes" and "audible" and such.

My point is this:  Stacking, as a matter of absolute fact, does indeed exist, and cannot be completely remedied by functions applied to later summed artifacts of the process.  Then it is a second, and perhaps just as important, fact to acknowledge that this stuff is something you'll have to work to make audible. 

Like recording 75 tracks of dialog on the same preamp and mic, along with 25 tracks of foley and another 20 of ambience.  Do you think it's outlandish that a film post mixer will have upwards of 125 tracks at once?  It isn't.  That user needs to worry about "stacking" and he is done a disservice if he thinks it doesn't exist at all.

It is SO IMPORTANT to be careful when you talk about this stuff!  Ethan makes his point, then throws it away by qualifying it!  He needs to NOT do this, and his video will be so much stronger!  Saying "using today's gear" is DAMAGING to his point, but he won't see that.  Instead of using the equivocation, he needs to talk about just when the thresholds will BECOME audible.  Like my friend the film post mixer... then his audience can make the judgment on their own.  Otherwise, they eventually hear the effect, in some deranged special case, and instead of saying "Ethan was right, and here's the threshold of audibility!" they will INSTEAD say "Ethan was WRONG!".

Isn't this all about actually using rigorous science to prove things?


dwoz
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-21 01:23:23


Just to clarify:

Ethan's conjecture is that the stacking of component characteristic is a myth, because it can be simply eliminated by applying a single inverse "value" of the characteristic, to the summed master.

In this definition, he elaborates by saying that if a component put a 3dB 'bump' at a certain frequency, and you used that component on 10 tracks, then you don't need to remove 30dB of "bump" from the master, but only 3dB.  So far so good.  But then he pollutes this by saying that you can completely compensate using one 3dB inverse function. 

That's the rub!  you can NOT completely compensate!  Because it's non-linear!!!!!

Also, it isn't just frequency and amplitude where f(x) can manifest.  Imagine that you have a component that acts as an all-pass, but has a non-linear phase response.  Stack those puppies up and try to apply an inverse.  Not gonna happen!

The important thing here is to realize that you must be EXTREMELY careful how you generalize a statement.  You can take a perfectly true fact and make it false by applying it too broadly.

dwoz
Title: AES 2009 Audio Myths Workshop
Post by: andy_c on 2010-03-21 01:52:08
Ethan's conjecture is that the stacking of component characteristic is a myth, because it can be simply eliminated by applying a single inverse "value" of the characteristic, to the summed master.

In this definition, he elaborates by saying that if a component put a 3dB 'bump' at a certain frequency, and you used that component on 10 tracks, then you don't need to remove 30dB of "bump" from the master, but only 3dB.  So far so good.  But then he pollutes this by saying that you can completely compensate using one 3dB inverse function. 

That's the rub!  you can NOT completely compensate!  Because it's non-linear!!!!!


I don't follow here.  If you have a summer with N inputs, and the identical linear filter in each of the N paths leading to the summer, then barring nonlinearity you can (conceptually) move that linear filter after the summer.  This assumes no limiters, compressors, clipping op-amps or other grossly nonlinear devices in the chain between any of these filters and their corresponding input to the summer.  If that linear filter is minimum-phase, then its inverse is stable and you can correct for the filter's inclusion with a single filter at the summer output that implements the reciprocal of the transfer function of the original filter.
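The linear case is easy to verify numerically. A sketch with an arbitrary short FIR filter (hypothetical coefficients; its zeros lie inside the unit circle, so it is minimum-phase and its exact inverse is a stable IIR filter):

```python
import numpy as np

rng = np.random.default_rng(2)
tracks = rng.standard_normal((3, 1000))
h = np.array([1.0, 0.5, 0.25])    # hypothetical minimum-phase FIR filter

# Identical linear filter in each path, then sum...
per_track = sum(np.convolve(t, h) for t in tracks)
# ...is exactly the same as summing first and filtering once after the summer:
post_sum = np.convolve(tracks.sum(axis=0), h)
assert np.allclose(per_track, post_sum)

# Because h is minimum-phase, one stable corrector after the summer undoes it:
# y[n] = x[n] - 0.5*y[n-1] - 0.25*y[n-2] is the exact inverse of h.
y = np.zeros_like(post_sum)
for n in range(len(post_sum)):
    y[n] = post_sum[n] - 0.5 * y[n - 1] - 0.25 * y[n - 2]

print(np.max(np.abs(y[:1000] - tracks.sum(axis=0))))   # ~0: fully recovered
```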

Also, it isn't just frequency and amplitude where f(x) can manifest.  Imagine that you have a component that acts as an all-pass, but has a non-linear phase response.  Stack those puppies up and try to apply an inverse.  Not gonna happen!


The issue there is that analog all-pass filters are non-minimum phase devices, so their inverse is unstable.  That's why you can't in general compensate for their effects in an exact way.
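That instability is easy to demonstrate. A sketch with a hypothetical non-minimum-phase FIR filter (its zeros sit outside the unit circle, so the poles of its exact inverse do too), where the inverse recursion blows up on nothing more than floating-point round-off:

```python
import numpy as np

rng = np.random.default_rng(3)
h = np.array([0.25, 0.5, 1.0])    # hypothetical FIR; zeros at |z| = 2,
                                  # i.e. non-minimum-phase
x = np.convolve(rng.standard_normal(300), h)   # filter a random signal

# The algebraically exact inverse of h is the recursion
#   y[n] = (x[n] - 0.5*y[n-1] - 1.0*y[n-2]) / 0.25
# but its poles lie at |z| = 2, outside the unit circle, so it is unstable:
y = np.zeros_like(x)
for n in range(len(x)):
    y[n] = (x[n] - 0.5 * y[n - 1] - 1.0 * y[n - 2]) / 0.25

print(np.max(np.abs(y)))   # astronomically large: round-off roughly doubles
                           # each sample, so exact compensation is impossible
```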

The important thing here is to realize that you must be EXTREMELY careful how you generalize a statement.  You can take a perfectly true fact and make it false by applying it too broadly.


This points out the need to qualify one's statements.  As above, the filter must be linear and minimum-phase and all the components in each path from the filter to the summer must be linear for the correction to work.  So generalizing is not always possible.  Yet you're also getting on Ethan's case for qualifying his statements.  You're telling him, in effect, that he shouldn't qualify his statements, nor should he generalize them.
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-21 02:08:58
Ethan's conjecture is that the stacking of component characteristic is a myth, because it can be simply eliminated by applying a single inverse "value" of the characteristic, to the summed master.

In this definition, he elaborates by saying that if a component put a 3dB 'bump' at a certain frequency, and you used that component on 10 tracks, then you don't need to remove 30dB of "bump" from the master, but only 3dB.  So far so good.  But then he pollutes this by saying that you can completely compensate using one 3dB inverse function. 

That's the rub!  you can NOT completely compensate!  Because it's non-linear!!!!!


I don't follow here.  If you have a summer with N inputs, and the identical linear filter in each of the N paths leading to the summer, then barring nonlinearity you can (conceptually) move that linear filter after the summer.  This assumes no limiters, compressors, clipping op-amps or other grossly nonlinear devices in the chain between the filter and the summer.  If that linear filter is minimum-phase, then its inverse is stable and you can correct for the filter's inclusion with a single filter at the summer output that implements the reciprocal of the transfer function of the original filter.

Actually, you apparently follow quite well.  I agree completely with your characterization, EXCEPT that you're not describing a real-world system, but a hypothetical one, that you'd be hard pressed to discover in actual use.
Quote
Also, it isn't just frequency and amplitude where f(x) can manifest.  Imagine that you have a component that acts as an all-pass, but has a non-linear phase response.  Stack those puppies up and try to apply an inverse.  Not gonna happen!


The issue there is that analog all-pass filters are non-minimum phase devices, so their inverse is unstable.  That's why you can't in general compensate for their effects in an exact way.

again, we're in complete agreement.
Quote
The important thing here is to realize that you must be EXTREMELY careful how you generalize a statement.  You can take a perfectly true fact and make it false by applying it too broadly.


This points out the need to qualify one's statements.  As above, the filter must be linear and minimum-phase and all the components in each path from the filter to the summer must be linear for the correction to work.  So generalizing is not always possible.  Yet you're also getting on Ethan's case for qualifying his statements.  You're telling him, in effect, that he shouldn't qualify his statements, nor should he generalize them.


It doesn't NEED to be minimum-phase, but that would be nice.  It just needs to be linear phase as well as frequency and amplitude linear.

I'll give an example.  I have a hypothetical component that is frequency linear.  It demonstrates THD + IMD of 0.000002 %.  (again, I DID say hypothetical).  But that component has a non-linear phase response.  So, it tests VERY well.  Its spec looks BEAUTIFUL.  But it cannot have its phase non-linearity removed once summed with other tracks.  The phase problem in the source has been turned into a frequency problem in the sum.  And it's dynamic, so we're "screwed".

Now, I play this hypothetical example for someone of reasonable facility in the art, and then play Ethan's video.  What is his only possible response?

It is exactly this kind of case that I'm worried about.  It's the qualitative nature of the qualifications he makes about the system.  They have to be well-bounded and rigorous.  Now, this will confuse some people who like nice, easy answers.  But the answer can only be as easy as it can be.  If you make it any easier, then you've made it "wrong".

dwoz
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-21 02:35:00
By the way, I just have to share.

I went to a site recently, "mitcables.com" and read a white paper on "articulation" of cables.


...in which the bozo that wrote it discussed how important it was to have "articulate" cables.

He apparently put up some impedance plots of some equipment (like you see for speakers and such, to show their resonance, etc.), but instead of calling it impedance vs. frequency, he called it "articulation" vs. frequency.


Is there any jurisdiction in this world where this guy can LEGALLY have his O2 input attenuated by, say, 55dB?


I'd do the honors.

dwoz
Title: AES 2009 Audio Myths Workshop
Post by: andy_c on 2010-03-21 02:35:34
Actually, you apparently follow quite well.  I agree completely with your characterization, EXCEPT that you're not describing a real-world system, but a hypothetical one, that you'd be hard pressed to discover in actual use.


Okay, but if one is trying to have a productive discussion, it's helpful for each side to completely understand what the other is saying.  Toward that end, it's often useful to start with an idealized scenario that everyone can agree on, then start introducing non-ideal elements one by one to get to the real-world system.

It doesn't NEED to be minimum-phase, but that would be nice.  It just needs to be linear phase as well as frequency and amplitude linear.


I'm saying that in the scenario I outlined, with identical filters ahead of the summer, in order to correct for the effects of the filter it must be minimum-phase.  If it isn't, its inverse is unstable and one cannot correct for it.

I'll give an example.  I have a hypothetical component that is frequency linear.  It demonstrates THD + IMD of 0.000002 %.  (again, I DID say hypothetical).  But that component has a non-linear phase response.  So, it tests VERY well.  Its spec looks BEAUTIFUL.  But it cannot have its phase non-linearity removed once summed with other tracks.


Are you talking about just a single filter in one branch of the summer?  I'm not sure.  What I'm talking about is an identical filter in each path to the input of the summer.  In the scenario I'm talking about, the phase nonlinearity of said filter is a non-issue.  In fact, all lumped-parameter analog filters have nonlinear phase response.  In my scenario, minimum-phase is the only requirement to be able to correct for the filter's presence.  But again, I'm assuming an identical filter in each branch.  If that's different from what you're assuming, please spell out what's different.

The phase problem in the source has been turned into a frequency problem in the sum.  And it's dynamic, so we're "screwed".


I don't follow.  Please spell out as specifically as possible the scenario you're referring to.  Also, as someone else mentioned, it would help to know which position in Ethan's video you're referring to.  I downloaded it but haven't looked at it since shortly after he first released it.

Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-21 02:50:09
I'll give an example.  I have a hypothetical component that is frequency linear.  It demonstrates THD + IMD of 0.000002 %.  (again, I DID say hypothetical).  But that component has a non-linear phase response.  So, it tests VERY well.  Its spec looks BEAUTIFUL.  But it cannot have its phase non-linearity removed once summed with other tracks.


Are you talking about just a single filter in one branch of the summer?  I'm not sure.  What I'm talking about is an identical filter in each path to the input of the summer.  In the scenario I'm talking about, the phase nonlinearity of said filter is a non-issue.  In fact, all lumped-parameter analog filters have nonlinear phase response.  In my scenario, minimum-phase is the only requirement to be able to correct for the filter's presence.  But again, I'm assuming an identical filter in each branch.  If that's different from what you're assuming, please spell out what's different.



I'm sorry, I'm talking about a more real-world extension of your hypothetical.  Instead of "component", use the word "filter" as you have.  But back to the "real world" issue...As soon as you introduce a reactive impedance into your circuit, I think you give up the ability to talk about minimum-phase AND frequency magnitude.  Thus, only in the hypothetical world can you both have your cake and eat it too.

Quote
The phase problem in the source has been turned into a frequency problem in the sum.  And it's dynamic, so we're "screwed".


I don't follow.  Please spell out as specifically as possible the scenario you're referring to.  Also, as someone else mentioned, it would help to know which position in Ethan's video you're referring to.  I downloaded it but haven't looked at it since shortly after he first released it.


This is the problem...Ethan does not cover this.  I can construct a set of files that demonstrate a stacking effect that cannot be removed via an inverse function on the sum, and this WILL be used by somebody to jam a shiv into Ethan's argument...much the way my "anonymous nym" is used to invalidate my points.  Neither are fair, but both are fair game.
Title: AES 2009 Audio Myths Workshop
Post by: Notat on 2010-03-21 02:53:36
Actually, you apparently follow quite well.  I agree completely with your characterization, EXCEPT that you're not describing a real-world system, but a hypothetical one, that you'd be hard pressed to discover in actual use.

With a couple reasonable assumptions around signal level and dither, a digital console or workstation absolutely operates as an ideal linear system.
Title: AES 2009 Audio Myths Workshop
Post by: andy_c on 2010-03-21 03:08:48
I'm sorry, I'm talking about a more real-world extension of your hypothetical.  Instead of "component", use the word "filter" as you have.  But back to the "real world" issue...As soon as you introduce a reactive impedance into your circuit, I think you give up the ability to talk about minimum-phase AND frequency magnitude.  Thus, only in the hypothetical world can you both have your cake and eat it too.


Minimum-phase is just a property that can be ascribed to some linear circuits.  This includes reactive impedances (inductance, capacitance, etc).  If it's linear, the relationship between its input and its output in the frequency domain is completely described by its transfer function.  The transfer function for such a circuit also determines its time-domain behavior.  Of course, all real-world circuits are nonlinear to some degree, but many of them, such as low-distortion op-amps and other components, can be treated as if they were linear as long as they are operated within sensible limits.

Sometimes that's easier said than done though.  One can always, and easily, come up with ways to violate this.  Put an op-amp with a gain of 40 dB into a 40 dB pad and the usable output voltage swing will be reduced by 100x.  That's just one of an infinite number of examples one can come up with.

This is the problem...Ethan does not cover this.  I can construct a set of files that demonstrate a stacking effect that cannot be removed via an inverse function on the sum, and this WILL be used by somebody to jam a shiv into Ethan's argument...much the way my "anonymous nym" is used to invalidate my points.  Neither are fair, but both are fair game.


Well, I'm an anonymous guy too, so I won't hassle you about that.

I can draw a block diagram of what I'm talking about, scan and post it so there's no confusion.  But it's tune time for me now.  I have to stop at 10:00 PM, so I've got a little less than an hour and I don't like headphones much.
Title: AES 2009 Audio Myths Workshop
Post by: Notat on 2010-03-21 03:13:04
The all-pass discussion is muddying things. There is a tangle of audibility of phase changes and linear system behavior.

The contention that different phase responses are indistinguishable to the ear is debatable as far as I'm concerned. Here's a paper (http://www.cirrus.com/en/pubs/whitePaper/DS668WP1.pdf) that discusses the issue.

An all-pass is a linear operation. You can apply it at the individual channels or at the sum and you'll get the same sound. What you can't do is apply different linear processes to the individual channels and expect to find some sort of transform that you can apply at the sum to give you the same sound (or invert the individual transforms).
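Both halves of that claim can be checked numerically with a first-order digital all-pass (a sketch with an illustrative coefficient, assuming scipy is available): its magnitude response is unity everywhere, and being linear it distributes over a sum.

```python
import numpy as np
from scipy.signal import lfilter, freqz

# First-order digital all-pass: H(z) = (a + z^-1) / (1 + a*z^-1).
a = 0.6
b_coef, a_coef = [a, 1.0], [1.0, a]

# It only shifts phase: the magnitude response is 1 at every frequency.
w, h = freqz(b_coef, a_coef, worN=512)
assert np.allclose(np.abs(h), 1.0)

# Being linear, it distributes over a sum: all-pass each channel and sum,
# or sum and all-pass once - the output is identical.
rng = np.random.default_rng(1)
x1, x2 = rng.standard_normal(1000), rng.standard_normal(1000)
per_channel = lfilter(b_coef, a_coef, x1) + lfilter(b_coef, a_coef, x2)
post_sum = lfilter(b_coef, a_coef, x1 + x2)
assert np.allclose(per_channel, post_sum)
```

What this does not show, and cannot, is a single post-sum transform that undoes *different* filters applied to the individual channels; that information is lost in the summation.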
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-21 17:02:31
I can construct a set of files that demonstrate a stacking effect that cannot be removed via an inverse function on the sum, and this WILL be used by somebody to jam a shiv into Ethan's argument.


Problem is, doing such a thing in no way invalidates Ethan's argument.
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-21 17:06:06
Actually, you apparently follow quite well.  I agree completely with your characterization, EXCEPT that you're not describing a real-world system, but a hypothetical one, that you'd be hard pressed to discover in actual use.

With a couple reasonable assumptions around signal level and dither, a digital console or workstation absolutely operates as an ideal linear system.


Right, and even the general run of analog consoles is very, very linear.
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-21 17:49:18
Actually, you apparently follow quite well.  I agree completely with your characterization, EXCEPT that you're not describing a real-world system, but a hypothetical one, that you'd be hard pressed to discover in actual use.

With a couple reasonable assumptions around signal level and dither, a digital console or workstation absolutely operates as an ideal linear system.


Right, and even the general run of analog consoles is very, very linear.



My previous answer to Notat's post was deleted in the lawnmower-fest that went through this thread this morning...


But I think that's a breathtaking assertion to make.  I don't agree at all.  But even if it were true, the point would be moot...the whole argument is not around the competence of the summing, but the competence of the source.  Give me a perfect console fed by garbage IO converters, and we have the same problem.

While we can talk about the concept in the hypothetical, that hypothetical should describe the system as a whole. 

While it's true that most people now choose a console based on workflow rather than on sonics, the implication you're making is that analog console sonics are non-differentiable (for modern kit).  I think that's a bold statement to make that will get you in trouble later.
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-21 17:51:15
I can construct a set of files that demonstrate a stacking effect that cannot be removed via an inverse function on the sum, and this WILL be used by somebody to jam a shiv into Ethan's argument.


Problem is, doing such a thing in no way invalidates Ethan's argument.


Which argument, the argument that stacking exists, or the relativistic one of whether a given average person can hear it in a DBT?
Title: AES 2009 Audio Myths Workshop
Post by: andy_c on 2010-03-21 18:10:11
I just wanted to let people know that the stacking discussion in Ethan's video is at 28:28 (about half way into the video).
Title: AES 2009 Audio Myths Workshop
Post by: Notat on 2010-03-21 19:52:12
My previous answer to Notat's post was deleted in the lawnmower-fest that went through this thread this morning... But I think that's a breathtaking assertion to make.  I don't agree at all.
Before further forking this thread to discuss mathematical precision, please do us the favor of searching the forums for previous discussions. It is a topic most of us are quite familiar with. Here's (http://www.hydrogenaudio.org/forums/index.php?showtopic=78681) one of the more recent discussions.

Implication you're making is that analog console sonics are non-differentiable.  (for modern kit).    I think that's a bold statement to make that will get you in trouble later.
Arnold said that analog consoles are linear. Linearity (http://en.wikipedia.org/wiki/Linear_system) doesn't say a whole lot about how a console sounds, just that it doesn't generate certain types of distortion.
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-21 20:09:21
I can construct a set of files that demonstrate a stacking effect that cannot be removed via an inverse function on the sum, and this WILL be used by somebody to jam a shiv into Ethan's argument.


Problem is, doing such a thing in no way invalidates Ethan's argument.


Which argument, the argument that stacking exists, or the relativistic one of whether a given average person can hear it in a DBT?


Are you now claiming that Ethan asserted that stacking doesn't even exist?

I've just listened to what Ethan said about stacking, and he seemed to provide orthodox information about how stacking affects noise and distortion.

Please feel free to quote Ethan accurately, and show where he either failed to explain stacking in an orthodox fashion, or where orthodox explanations of how stacking affects noise and distortion are wrong or incomplete.
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-21 20:28:18
My previous answer to Notat's post was deleted in the lawnmower-fest that went through this thread this morning... But I think that's a breathtaking assertion to make.  I don't agree at all.
Before further forking this thread to discuss mathematical precision, please do us the favor of searching the forums for previous discussions. It is a topic most of us are quite familiar with. Here's (http://www.hydrogenaudio.org/forums/index.php?showtopic=78681) one of the more recent discussions.

Implication you're making is that analog console sonics are non-differentiable.  (for modern kit).    I think that's a bold statement to make that will get you in trouble later.


Arnold said that analog consoles are linear. Linearity (http://en.wikipedia.org/wiki/Linear_system) doesn't say a whole lot about how a console sounds, just that it doesn't generate certain types of distortion.


Exactly.  Systems can have only 4 general kinds of signal response faults: linear distortion (frequency and/or phase response errors), nonlinear distortion, random noise, and coherent interfering signals. There are no other known kinds of system signal response faults. The list is constrained by the 2-dimensional nature of electrical signals.  Any of them can cause a system to be readily differentiated by means of listening if they are severe enough. It is all about quantification.

I only mentioned nonlinear distortion, so it is hard to understand how one might logically progress from my statement to a statement that system (in this case analog console) sonics are non-differentiable. Straw man, anyone? ;-)

I'm willing to stipulate that analog console sonics are often readily differentiated based on noise, interfering signals, and linear distortion.  IME one of the most common causes of differentiable sonics may be frequency response variations caused by improperly centered or misdesigned or otherwise poorly implemented tone controls.  Another problem that I often observe is that it is virtually impossible to readjust the controls of an analog console so that you recreate the same mix within say +/- 0.1 dB. If you manually move the controls during the mix, then that is nearly impossible to recreate precisely as well.  Also, it is not unusual to find mic preamps (a standard component of most consoles) that relate to audible differences because they load some microphones differently in ways that affect the microphone's frequency response. It is not uncommon to find mic preamps with built-in fixed (but not always well-documented) roll-offs on the order of -3 dB at 50 or 80 Hz, which can be easy to hear as well.

Doing a proper listening experiment to compare analog console sonics seems like a probable waste of time now that good digital consoles are so readily available.
Title: AES 2009 Audio Myths Workshop
Post by: andy_c on 2010-03-21 21:12:48
...the whole argument is not around the competence of the summing, but the competence of the source.


It looks to me that just the opposite is true.  Here's my rationale.

Suppose the summer has N inputs: v1(t), v2(t), ... vN(t).  Further, assume v1(t) consists of two components:  the ideal, undistorted voltage v1i(t) that would appear if the upstream components were perfect, and the distortion components of v1(t) which I'll call v1d(t).  So v1(t)=v1i(t)+v1d(t), and the same for the rest of the N-1 input voltages.

Now assume the summer has no distortion, such that its output is given by:  vout(t) = A1*v1(t)+A2*v2(t)+...+AN*vN(t)

For each of v1(t), v2(t), ... vN(t), express it as the sum of its ideal and distortion components, then plug those expressions for v1(t), v2(t), ... vN(t) into the one above for the output voltage of the summer.  It should be clear that the relationships between the distortion component of each signal and its ideal component at the output of the summer have not changed from what they are at the input.  This is just what Ethan said in his video.

Now assume the summer generates distortion itself.  It should be clear that the relationship of each input signal's distortion component to its ideal component will in general have changed, as well as there being intermodulation distortion between the N signals at the output.  If one assumes an analog summer implemented with a single inverting op-amp and N summing resistors, then the more inputs the summer has, the larger its noise gain and the less feedback it has around it.  This means op-amp distortion increases as more inputs are added, due to the reduction of feedback.  So the recipe looks like, "add more inputs and the op-amp distortion increases, while at the same time there's more input signals to intermodulate with each other at the output".  This seems to me to be a very plausible explanation for the likely cause of reported stacking problems - in analog consoles at least.
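The two cases above can be illustrated numerically (a sketch with made-up signal levels; the 0.01 cubic term is just a stand-in nonlinearity, not a model of any particular op-amp):

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs  # one second, so FFT bin k corresponds to k Hz

# Two "tracks": an ideal tone plus a small distortion component each.
v1i, v1d = np.sin(2*np.pi*440*t), 0.001*np.sin(2*np.pi*880*t)
v2i, v2d = np.sin(2*np.pi*523*t), 0.001*np.sin(2*np.pi*1046*t)

# A distortion-free summer just weights and adds, so each track's
# distortion keeps exactly its original relationship to its signal.
vout = 0.5*(v1i + v1d) + 0.5*(v2i + v2d)
assert np.allclose(vout, 0.5*(v1i + v2i) + 0.5*(v1d + v2d))

# A summer with its own mild cubic nonlinearity creates products the
# inputs never had, e.g. an IMD product at 2*440 - 523 = 357 Hz.
def nonlinear_sum(x):
    return x + 0.01 * x**3

lin_spec = np.abs(np.fft.rfft(0.5*(v1i + v2i)))
nl_spec = np.abs(np.fft.rfft(nonlinear_sum(0.5*(v1i + v2i))))
assert lin_spec[357] < 1e-3   # no energy at 357 Hz before the summer
assert nl_spec[357] > 1.0     # a new intermodulation product after it
```

The first assertion is Ethan's "summing is benign" case; the second shows the kind of product a nonlinear summer adds, which no inverse filter on the sum can cleanly remove.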
Title: AES 2009 Audio Myths Workshop
Post by: John Eppstein on 2010-03-22 00:44:39
Actually, you apparently follow quite well.  I agree completely with your characterization, EXCEPT that you're not describing a real-world system, but a hypothetical one, that you'd be hard pressed to discover in actual use.


Okay, but if one is trying to have a productive discussion, it's helpful for each side to completely understand what the other is saying.  Toward that end, it's often useful to start with an idealized scenario that everyone can agree on, then start introducing non-ideal elements one by one to get to the real-world system.


It would seem so, but the problem is that very often people who SHOULD know better fail to make the proper distinctions between the hypothetical example and the real-world situation, and attempt to make extrapolations into the real world that simply are not valid. So often the "attempt to have a productive discussion" is in fact quite the opposite.
Title: AES 2009 Audio Myths Workshop
Post by: John Eppstein on 2010-03-22 00:48:47
Actually, you apparently follow quite well.  I agree completely with your characterization, EXCEPT that you're not describing a real-world system, but a hypothetical one, that you'd be hard pressed to discover in actual use.

With a couple reasonable assumptions around signal level and dither, a digital console or workstation absolutely operates as an ideal linear system.

sez you. the reality is different unless you start piling on the qualifiers such as "within the stated pass band".
Title: AES 2009 Audio Myths Workshop
Post by: John Eppstein on 2010-03-22 00:59:56
I'm sorry, I'm talking about a more real-world extension of your hypothetical.  Instead of "component", use the word "filter" as you have.  But back to the "real world" issue...As soon as you introduce a reactive impedance into your circuit, I think you give up the ability to talk about minimum-phase AND frequency magnitude.  Thus, only in the hypothetical world can you both have your cake and eat it too.


Minimum-phase is just a property that can be ascribed to some linear circuits.  This includes reactive impedances (inductance, capacitance, etc).  If it's linear, the relationship between its input and its output in the frequency domain is completely described by its transfer function.  The transfer function for such a circuit also determines its time-domain behavior.  Of course, all real-world circuits are nonlinear to some degree, but many of them, such as low-distortion op-amps and other components, can be treated as if they were linear as long as they are operated within sensible limits.

Sometimes that's easier said than done though.  One can always, and easily, come up with ways to violate this.  Put an op-amp with a gain of 40 dB into a 40 dB pad and the usable output voltage swing will be reduced by 100x.  That's just one of an infinite number of examples one can come up with.

This is the problem...Ethan does not cover this.  I can construct a set of files that demonstrate a stacking effect that cannot be removed via an inverse function on the sum, and this WILL be used by somebody to jam a shiv into Ethan's argument...much the way my "anonymous nym" is used to invalidate my points.  Neither are fair, but both are fair game.


Well, I'm an anonymous guy too, so I won't hassle you about that.

I can draw a block diagram of what I'm talking about, scan and post it so there's no confusion.  But it's tune time for me now.  I have to stop at 10:00 PM, so I've got a little less than an hour and I don't like headphones much.

You're making some dangerous assumptions here.

First, you're assuming that all components will always be operated strictly within their linear region which in pro audio is not always true.

Second you're assuming that all production examples of a specific part will always adhere strictly to their published spec. In reality this is NEVER the case - there is always a tolerance range. NO GIVEN PART EVER EXACTLY MATCHES THE SPEC SHEET. Usually the tolerances are "close enough for government work" - but not always, and the behavior of the real world device always deviates a little bit from ideal. Furthermore in real world systems these deviations from the ideal add up. Sometimes they add up in such a way that they cancel out, but sometimes they add up so as to reinforce the deviation.

Nothing, and only nothing, is perfect.
Title: AES 2009 Audio Myths Workshop
Post by: John Eppstein on 2010-03-22 01:07:09
The all-pass discussion is muddying things. There is a tangle of audibility of phase changes and linear system behavior.

The contention that different phase responses are indistinguishable to the ear is debatable as far as I'm concerned. Here's a paper (http://www.cirrus.com/en/pubs/whitePaper/DS668WP1.pdf) that discusses the issue.

An all-pass is a linear operation. You can apply it at the individual channels or at the sum and you'll get the same sound. What you can't do is apply different linear processes to the individual channels and expect to find some sort of transform that you can apply at the sum to give you the same sound (or invert the individual transforms).

In an experiment posted at The Womb, James (J_J) Johnston proved that phase is indeed audible.
Title: AES 2009 Audio Myths Workshop
Post by: andy_c on 2010-03-22 02:32:36
You're making some dangerous assumptions here.

First, you're assuming that all components will always be operated strictly within their linear region which in pro audio is not always true.


Let's take a specific example - op-amps in an analog mixer.  When they are operated out of their linear region, in clipping, the result is very bad sound.  Op-amps have very low distortion up to clipping because of high feedback.  But as soon as clipping occurs, all bets are off because the clipping is so abrupt.  So clipping an op-amp is a pretty terrible error, but may go unnoticed if it only occurs for a brief instant.
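For illustration, here's a sketch of how abrupt that transition is (numpy; the 1 kHz tone and 20% overdrive are arbitrary choices): a tone that is spectrally clean right up to the rails sprouts strong odd harmonics the moment it hard-clips.

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs                  # one second: FFT bin k = k Hz
x = np.sin(2 * np.pi * 1000 * t)        # clean 1 kHz tone

# Drive it 20% past the "rails" and hard-clip, as an op-amp does the
# instant it leaves its linear region.
clipped = np.clip(1.2 * x, -1.0, 1.0)

# Amplitude-normalized spectra.
clean_spec = np.abs(np.fft.rfft(x)) / (fs / 2)
clip_spec = np.abs(np.fft.rfft(clipped)) / (fs / 2)

assert clean_spec[3000] < 1e-6   # no 3rd harmonic in the unclipped tone
assert clip_spec[3000] > 0.01    # clipping: a strong 3rd harmonic at 3 kHz
```

There is no gradual onset here: below the rails the distortion terms are essentially zero, and the moment clipping starts, a whole series of odd harmonics (3 kHz, 5 kHz, ...) appears at once.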

Second you're assuming that all production examples of a specific part will always adhere strictly to their published spec. In reality this is NEVER the case - there is always a tolerance range. NO GIVEN PART EVER EXACTLY MATCHES THE SPEC SHEET.


Not sure where you got that one.  It wasn't from my post, as I said nothing even resembling that.  Spec sheets give a range of values, so there is literally no concept of "EXACTLY MATCHES THE SPEC SHEET".  In the absence of a failed part, they should fall within the tolerances specified by the spec sheet, provided they are tested in the same way as the spec sheet.
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-22 03:40:08


Implication you're making is that analog console sonics are non-differentiable.  (for modern kit).    I think that's a bold statement to make that will get you in trouble later.


Arnold said that analog consoles are linear. Linearity (http://en.wikipedia.org/wiki/Linear_system) doesn't say a whole lot about how a console sounds, just that it doesn't generate certain types of distortion.


Exactly.  Systems can have only 4 general kinds of signal response faults: linear distortion (frequency and/or phase response errors), nonlinear distortion, random noise, and coherent interfering signals. There are no other known kinds of system signal response faults. The list is constrained by the 2-dimensional nature of electrical signals.  Any of them can cause a system to be readily differentiated by means of listening if they are severe enough. It is all about quantification.

I only mentioned nonlinear distortion, so it is hard to understand how one might logically progress from my statement to a statement that system (in this case analog console) sonics are non-differentiable. Straw man, anyone? ;-)


That's very interesting, Arnold.  Can you elaborate on these 4 characteristics?  In stating this, you seem to be rebutting Ethan's own 4 characteristics.  Could you provide some proof of this?  I'd be interested to see it.  Can you elaborate on the windowing you'd have to do to get relevant usable numbers out of those 4 characteristics?

The source of my confusion with respect to your statement about linearity of consoles comes from my perhaps limited understanding of what you mean by LINEAR.  In Ethan's nomenclature, a component that has the kind of great specs you mention, particularly a level of linearity in that range, is what he calls "transparent".  In my understanding, a high degree of transparency means that the component itself doesn't impart any "sound" to the audio.  Thus, two highly transparent components would sound pretty much the same.  This is what Ethan says, and it sounds pretty much true.  Are you then rebutting him?  If you're not, then I'm sure I have no idea what you mean by "linear".

Could you please elaborate?

I'm willing to stipulate that analog console sonics are often readily differentiated based on noise, interfering signals, and linear distortion.  IME one of the most common causes of differentiable sonics may be frequency response variations caused by improperly centered or misdesigned or otherwise poorly implemented tone controls.  Another problem that I often observe is that it is virtually impossible to readjust the controls of an analog console so that you recreate the same mix within say +/- 0.1 dB. If you manually move the controls during the mix, then that is nearly impossible to recreate precisely as well.  Also, it is not unusual to find mic preamps (a standard component of most consoles) that relate to audible differences because they load some microphones differently in ways that affect the microphone's frequency response. It is not uncommon to find mic preamps with built-in fixed (but not always well-documented) roll-offs on the order of -3 dB at 50 or 80 Hz, which can be easy to hear as well.

Doing a proper listening experiment to compare analog console sonics seems like a probable waste of time now that good digital consoles are so readily available.



I must confess...I've never heard of "linear distortion".  What is it?  Isn't ALL distortion by definition non-linear?
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-22 04:00:24
Actually, you apparently follow quite well.  I agree completely with your characterization, EXCEPT that you're not describing a real-world system, but a hypothetical one, that you'd be hard pressed to discover in actual use.

With a couple reasonable assumptions around signal level and dither, a digital console or workstation absolutely operates as an ideal linear system.


Can you prove this?  It sounds like you're making a conjecture.  I know how fun that is, I am always accused of making conjecture. 

If one were to have to prove this, how would one even go about accomplishing that proof?

By the way, are you rebutting Ethan?  He said that dither is basically inaudible, so in mentioning it, do you refute him?
Title: AES 2009 Audio Myths Workshop
Post by: googlebot on 2010-03-22 04:09:44
Can you prove this?  It sounds like you're making a conjecture.  I know how fun that is, I am always accused of making conjecture. 

If one were to have to prove this, how would one even go about accomplishing that proof?


Induction is always problematic. But it should be easy for you to find out, just as it was easy for me: load up an original and a looped-back sample into an ABX program and see if you can differentiate them. The last time I tried, I stopped after 20 loop-backs of a DG Archiv Produktion recording of Beethoven. It was just too hard. And that was just looped through the commodity sound chip of an Apple MacBook Pro.

If you'd rather have more general statements: the limits of human auditory perception are well researched. Just measure the looped-back version and compare it to those thresholds. You'll see that your concerns aren't justified by reality.
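A loop-back comparison of this sort is essentially a null test: subtract the capture from the original and see how far down the residual sits. A minimal sketch (the arrays here are synthetic stand-ins; in practice you'd load two time-aligned WAV captures):

```python
import numpy as np

# Synthetic stand-ins for the original and the looped-back capture:
# the "loopback" is the original plus a tiny converter-noise-like floor.
rng = np.random.default_rng(2)
original = rng.standard_normal(48000) * 0.1
loopback = original + rng.standard_normal(48000) * 1e-5

# Null test: RMS of the difference, relative to the RMS of the signal.
residual = loopback - original
null_db = 20 * np.log10(np.sqrt(np.mean(residual**2)) /
                        np.sqrt(np.mean(original**2)))
print(f"null depth: {null_db:.1f} dB")  # deeply negative = near-identical
```

A null depth well below the hearing thresholds mentioned above is the measurement-side counterpart of failing to ABX the two files.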
Title: AES 2009 Audio Myths Workshop
Post by: Notat on 2010-03-22 04:27:22
I must confess...I've never heard of "linear distortion".  What is it?  Isn't ALL distortion by definition non-linear?

Systems can have only 4 general kinds of signal response faults: linear distortion (frequency and/or phase response errors) , nonlinear distortion, random noise, and coherent interfering signals.

Maybe you wouldn't classify frequency response errors as "distortion". Not everyone does. It's a just a terminology thing. Nothing to get hung up on.
Title: AES 2009 Audio Myths Workshop
Post by: Notat on 2010-03-22 04:39:12
With a couple reasonable assumptions around signal level and dither, a digital console or workstation absolutely operates as an ideal linear system.
sez you. the reality is different unless you start piling on the qualifiers such as "within the stated pass band".
You are correct, "within the stated pass band" is an important assumption that I did not include in my original post. All of these assumptions are very compatible with the real world. I don't think there are any others.
Title: AES 2009 Audio Myths Workshop
Post by: Notat on 2010-03-22 05:05:31
Actually, you apparently follow quite well.  I agree completely with your characterization, EXCEPT that you're not describing a real-world system, but a hypothetical one, that you'd be hard pressed to discover in actual use.

With a couple reasonable assumptions around signal level and dither, a digital console or workstation absolutely operates as an ideal linear system.


Can you prove this?  It sounds like you're making a conjecture.  I know how fun that is, I am always accused of making conjecture. 

If one were to have to prove this, how would one even go about accomplishing that proof?

By the way, are you rebutting Ethan?  He said that dither is basically inaudible, so in mentioning it, do you refute him?

As a practical matter with today's 24-bit converters, Ethan may be right. As a matter of mathematics, I think he goes too far.

The linearity proof goes like this:
  • Nyquist-Shannon tells you that all of the information within the frequency range of interest is encoded in the digital sampled version of the signal.
  • Quantization theory tells you that limited bit resolution adds noise to your signal.
  • Dither ensures that quantization noise is not correlated with the signal. An uncorrelated signal can be treated as a separate signal added to the original - signal and noise come in, signal and noise go out, there is no interaction or intermodulation between the two.
  • The operations performed in the signal path of a digital mixer are addition and multiplication. Addition and multiplication are linear operations. Any operation built from a linear combination of linear operations is itself a linear operation.
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-22 05:35:51
Actually, you apparently follow quite well.  I agree completely with your characterization, EXCEPT that you're not describing a real-world system, but a hypothetical one, that you'd be hard pressed to discover in actual use.

With a couple reasonable assumptions around signal level and dither, a digital console or workstation absolutely operates as an ideal linear system.


Can you prove this?  It sounds like you're making a conjecture.  I know how fun that is, I am always accused of making conjecture. 

If one were to have to prove this, how would one even go about accomplishing that proof?

By the way, are you rebutting Ethan?  He said that dither is basically inaudible, so in mentioning it, do you refute him?

As a practical matter with today's 24-bit converters, Ethan may be right. As a matter of mathematics, I think he goes too far.

The linearity proof goes like this:
  • Nyquist-Shannon tells you that all of the information within the frequency range of interest is encoded in the digital sampled version of the signal.
  • Quantization theory tells you that limited bit resolution adds noise to your signal.
  • Dither ensures that quantization noise is not correlated with the signal. An uncorrelated signal can be treated as a separate signal added to the original - signal and noise come in, signal and noise go out, there is no interaction or intermodulation between the two.
  • The operations performed in the signal path of a digital mixer are addition and multiplication. Addition and multiplication are linear operations. Any operation built from a linear combination of linear operations is itself a linear operation.
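[Editor's note: the second and third bullets are easy to demonstrate in a few lines of numpy. This is a minimal sketch, using a deliberately coarse 8-bit quantizer and a tiny 1 kHz tone so the effect is obvious: undithered quantization error is correlated with the signal and shows up as harmonics, while TPDF dither turns it into a signal-independent noise floor.]

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 48000
n = 48000  # one second, so a 1 kHz tone lands exactly on FFT bin 1000
t = np.arange(n) / fs
x = 0.01 * np.sin(2 * np.pi * 1000 * t)  # tone only a little above one LSB
step = 2 / 2**8  # quantization step for an 8-bit range of [-1, 1)

def quantize(sig, dither=False):
    # TPDF dither: the sum of two uniform variables, spanning +/-1 LSB in total
    d = step * (rng.random(len(sig)) - rng.random(len(sig))) if dither else 0.0
    return np.round((sig + d) / step) * step

def spectrum_db(sig):
    spec = np.abs(np.fft.rfft(sig * np.hanning(len(sig))))
    return 20 * np.log10(spec / spec.max() + 1e-12)

plain = spectrum_db(quantize(x))
tpdf = spectrum_db(quantize(x, dither=True))
# Bin 3000 holds the 3rd harmonic of the 1 kHz tone.
print(f"3rd harmonic, undithered:    {plain[3000]:.1f} dB")
print(f"3rd harmonic, TPDF dithered: {tpdf[3000]:.1f} dB")
```

On a typical run the undithered harmonic sits tens of dB above the dithered case; dither trades correlated distortion for a slightly higher but featureless noise floor.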



So, basically "a digital console or workstation" is not "ALL digital consoles or workstations", but only the specific ones that only do summing, gain changes, and...panning.  Every OTHER digital console is not included in your term?
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-22 05:43:30
Can you prove this?  It sounds like you're making a conjecture.  I know how fun that is, I am always accused of making conjecture. 

If one were to have to prove this, how would one even go about accomplishing that proof?


Induction is always problematic. But it should be easy for you to find out, just as it was easy for me: load up an original and a looped-back sample into an ABX program and see if you can differentiate them. The last time I tried, I stopped after 20 loop-backs of a DG Archiv Produktion recording of Beethoven. It was just too hard. And that was just looped through the commodity sound chip of an Apple MacBook Pro.

If you're after more general statements: the limits of human auditory perception are well researched. Just measure the looped-back version and compare it to those thresholds. You'll see that your concerns aren't justified by reality.


Ahhh....I see.  So basically if something is equally LINEAR to something else (like, source to output in a console), then it will be sonically indistinguishable?  I would LIKE to agree, but that seems to have already been thoroughly debunked higher up in the thread.  You're basically calling Arnie Krueger and Notat liars.  Are you correct, or are they? 

Notat said:
Quote
QUOTE (dwoz @ Mar 21 2010, 11:49) *
Implication you're making is that analog console sonics are non-differentiable. (for modern kit). I think that's a bold statement to make that will get you in trouble later.

QUOTE (Notat @ Mar 21 2010, 15:52) *
Arnold said that analog consoles are linear. Linearity doesn't say a whole lot about how a console sounds, just that it doesn't generate certain types of distortion.

QUOTE (Arnold B. Krueger @ Mar 21 2010, 16:28) *
Exactly.

<snip>

I only mentioned nonlinear distortion, so it is hard to understand how one might logically progress from my statement to a statement that system (in this case analog console) sonics are non-differentiable. Straw man, anyone? ;-)
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-22 05:56:01
Induction is always problematic. But it should be easy for you to find out, just as it was easy for me: load up an original and a looped-back sample into an ABX program and see if you can differentiate them. The last time I tried, I stopped after 20 loop-backs of a DG Archiv Produktion recording of Beethoven. It was just too hard. And that was just looped through the commodity sound chip of an Apple MacBook Pro.

If you're after more general statements: the limits of human auditory perception are well researched. Just measure the looped-back version and compare it to those thresholds. You'll see that your concerns aren't justified by reality.


Let's be rigorous about this, okay?  I am not exactly sure what you mean when you say "looped back sample", and "original".  We're talking about whether we can test that a console or workstation is ideally linear.  Could you define those terms more specifically?

Also, if we're talking about linearity, it's either ideally linear or it's not, right?  We can just hook up an oscilloscope to the output and learn everything we need to know, right?  Or simply invert the polarity of the output and sum it with the input and look for whatever didn't null, right?

Why is this about listening?
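[Editor's note: the null test dwoz describes is easy to sketch. Assuming `reference` is what went in and `device_output` is the time-aligned loopback capture (both hypothetical arrays here, standing in for real recordings), you invert one, sum, and measure what's left. One caveat: a null test is sensitive to every difference, including purely linear ones such as gain or delay, so level matching and alignment matter before the residual says anything about nonlinearity.]

```python
import numpy as np

def null_residual_db(reference, device_output):
    """Invert the device output, sum with the input, report the residual in dB."""
    residual = reference - device_output  # subtraction == polarity flip + sum
    ref_rms = np.sqrt(np.mean(reference ** 2))
    res_rms = np.sqrt(np.mean(residual ** 2))
    return 20 * np.log10(res_rms / ref_rms + 1e-12)

# Illustration: a perfectly linear system with only a 0.5 dB flat gain error
# still "fails" to null, leaving a residual around -25 dB.
t = np.arange(48000) / 48000
reference = np.sin(2 * np.pi * 1000 * t)
device_output = reference * 10 ** (0.5 / 20)
print(f"residual: {null_residual_db(reference, device_output):.1f} dB")
```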
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-22 06:26:30
I must confess...I've never heard of "linear distortion".  What is it?  Isn't ALL distortion by definition non-linear?

Systems can have only 4 general kinds of signal response faults: linear distortion (frequency and/or phase response errors), nonlinear distortion, random noise, and coherent interfering signals.

Maybe you wouldn't classify frequency response errors as "distortion". Not everyone does. It's just a terminology thing. Nothing to get hung up on.



So, basically, when Arnold says "linear distortion" then says "non-linear distortion", he means the same thing?
Title: AES 2009 Audio Myths Workshop
Post by: aclo on 2010-03-22 12:26:57
With a couple reasonable assumptions around signal level and dither, a digital console or workstation absolutely operates as an ideal linear system.
sez you. the reality is different unless you start piling on the qualifiers such as "within the stated pass band".
You are correct, "within the stated pass band" is an important assumption that I did not include in my original post. All of these assumptions are very compatible with the real world. I don't think there are any others.


It depends on who you are conversing with and what the purpose of the discussion is. For instance, what you say isn't true if the device is operated in the centre of the sun. I'll bet its behaviour will not be linear then. So yeah, see, reality is more complicated than your simple models.

Now whether this is a reasonable thing to say would depend on what my purpose is, wouldn't it?
Title: AES 2009 Audio Myths Workshop
Post by: pdq on 2010-03-22 13:03:27
So, basically, when Arnold says "linear distortion" then says "non-linear distortion", he means the same thing?

Are you being purposely obtuse or are you really not that smart?

This isn't exactly rocket science you know!

Linear distortion is any change in the signal that is not level dependent.

Non-linear distortion IS level dependent.
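[Editor's note: pdq's level-dependence distinction can be shown directly. A minimal sketch, assuming a one-pole lowpass stands in for "linear distortion" (a frequency response error) and a hard clipper for "nonlinear distortion": scaling the input by 10x scales the lowpass output by exactly 10x, but not the clipper's.]

```python
import numpy as np

def lowpass(x, a=0.5):
    """One-pole lowpass: a frequency response error, independent of level."""
    y = np.empty_like(x)
    acc = 0.0
    for i, s in enumerate(x):
        acc = a * s + (1 - a) * acc
        y[i] = acc
    return y

def clip(x, limit=0.5):
    """Hard clipper: only acts once the signal exceeds the limit."""
    return np.clip(x, -limit, limit)

t = np.arange(4800) / 48000
quiet = 0.1 * np.sin(2 * np.pi * 1000 * t)
loud = 10 * quiet  # same signal, 20 dB hotter

print(np.allclose(lowpass(loud), 10 * lowpass(quiet)))  # True: level independent
print(np.allclose(clip(loud), 10 * clip(quiet)))        # False: level dependent
```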
Title: AES 2009 Audio Myths Workshop
Post by: Notat on 2010-03-22 14:03:34
So, basically "a digital console or workstation" is not "ALL digital consoles or workstations", but only the specific ones that only do summing, gain changes, and...panning.  Every OTHER digital console is not included in your term?
Equalization is also done with multiplication and addition. My point is that there aren't any unintentional non-linearities in any properly-implemented digital console. There are no compromises designers need to make. There are no minute imperfections or any non-ideal behavior in the way DSPs do the math.

Consoles can do non-linear processing (e.g. dynamic range compression, distortion plugins...). You turn that on and you're telling the console you don't want linear behavior - it does what you ask.
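[Editor's note: Notat's claim that EQ built from multiplies and adds stays linear can be checked by testing superposition on an FIR filter. The taps below are an arbitrary stand-in "EQ", for illustration; in floating point the identities hold to rounding error.]

```python
import numpy as np

rng = np.random.default_rng(1)
eq_taps = rng.standard_normal(64) * 0.1  # arbitrary FIR "EQ", for illustration

def eq(x):
    # Convolution is nothing but multiplies and adds.
    return np.convolve(x, eq_taps)[: len(x)]

a = rng.standard_normal(4800)
b = rng.standard_normal(4800)

# Superposition, the defining property of a linear system:
print(np.allclose(eq(a + b), eq(a) + eq(b)))  # additivity: True
print(np.allclose(eq(3 * a), 3 * eq(a)))      # homogeneity: True
```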
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-22 15:01:27
So, basically, when Arnold says "linear distortion" then says "non-linear distortion", he means the same thing?

Are you being purposely obtuse or are you really not that smart?

This isn't exactly rocket science you know!

Linear distortion is any change in the signal that is not level dependent.

Non-linear distortion IS level dependent.


My dear friend, your reply is interesting on many levels.  First off, it is a vicious personal attack.  Next it is an ad-hominem attack on the premise of the question.  Next, we can talk about rudeness.  But all that aside, I'm afraid your answer didn't help me much.  What do you mean by "level-dependent"?  Do you mean, depending on the input level?  So, "linear" distortion is distortion that is there no matter what voltage or frequency I apply to my input, but "non-linear" distortion is not?

Also, I have found, FINALLY, a couple references that actually use the words "linear distortion".  The references seem to be specific to loudspeakers, and seem to anecdotally mention level-dependency as a factor, but not as a definitional term.  Can you clarify?  Can you pop over some references where someone may have laid this out in a discussion of signal processing circuits?  In other words, the references I saw seem to suggest that MOST non-linear distortion is level dependent, (still not quite sure what that means), but that you could certainly have non-linear distortion that wasn't level dependent.  Can you clarify?

Maybe it's just an old-fashioned term that's gone out of use lately...

Thank you for your forthcoming well-reasoned reply!
Title: AES 2009 Audio Myths Workshop
Post by: ExUser on 2010-03-22 15:08:22
Wikipedia covers the topic in depth: http://en.wikipedia.org/wiki/Distortion (http://en.wikipedia.org/wiki/Distortion)

This is a very fundamental audio/electrical engineering topic...
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-22 15:28:54
So, basically "a digital console or workstation" is not "ALL digital consoles or workstations", but only the specific ones that only do summing, gain changes, and...panning.  Every OTHER digital console is not included in your term?
Equalization is also done with multiplication and addition. My point is that there's not any unintentional non-linearities in any properly-implemented digital console. There are no compromises designers need to make. There are no minute imperfections or any non-ideal behavior in the way DSPs do the math.

Consoles can do non-linear processing (e.g. dynamic range compression, distortion plugins...). You turn that on and you're telling the console you don't want linear behavior - it does what you ask.


Ok, that's a good answer.

But I'm wondering.  You seem to be making a very sweeping and absolute statement here.  The way I'm reading this, you aren't saying anything about audibility or "as far as is reasonably necessary", you're saying flat-out that they are linear, period.  Is that a correct reading of your intent?

Title: AES 2009 Audio Myths Workshop
Post by: ExUser on 2010-03-22 15:52:29
The inherent non-linearities of any given electrical system can be quantified as "total harmonic distortion" and "intermodulation distortion", as well as others. I'm not sure where the audibility threshold of THD starts, but it's a measurable characteristic of any electrical system, whether digital or analog.
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-22 16:12:37
Actually, you apparently follow quite well.  I agree completely with your characterization, EXCEPT that you're not describing a real-world system, but a hypothetical one, that you'd be hard pressed to discover in actual use.

With a couple reasonable assumptions around signal level and dither, a digital console or workstation absolutely operates as an ideal linear system.

sez you. the reality is different unless you start piling on the qualifiers such as "within the stated pass band".


A defined pass band is a natural property of *any* real world system. Therefore presuming one does not necessarily require any specific qualifiers at all.

Since we (hopefully) all know that we are talking about audio, we also know that the traditional audio passband is 20 Hz - 20 kHz.  As a rule, digital systems lack inherent, practically significant LF roll-offs. That leaves only 20 kHz as a possible area of discussion. We just went through a period of weirdness a few years back where a few people tried to profiteer by extending the HF passband well above 20 kHz and making false claims about audible benefits. That madness seems to have pretty much evaporated since then, at least in the mainstream. Furthermore, people who actually invested big money in that weirdness received negligible rewards, millions were lost by a few large corporations, and a few executives simply lost their jobs.

So, the reality is not actually any different from what Notat said,  unless I somehow missed something.
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-22 16:18:06
The inherent non-linearities of any given electrical system can be quantified as "total harmonic distortion" and "intermodulation distortion", as well as others. I'm not sure where the audibility threshold of THD starts, but it's a measurable characteristic of any electrical system, whether digital or analog.


Thank you for that.  Not exactly sure how it is relevant, but it's certainly interesting.

I have found a bit of discussion that implies that "linear distortion" is gain and phase anomalies as a function of frequency...i.e. that it is a "systemic" problem independent of program, where "non-linear distortion" is gain and phase anomalies as a function of amplitude...i.e. that is a "systemic" problem that is dependent on program.

I like this definition, because it seems to solve a pervasive and awful problem that I see all over the internet audio forum world...the tendency to bundle, mix, and match various different CLASSES of problems together, ad hoc, to mix-and-match cause and effect, to slur the difference between domain and the value of products.

I think this was the basis for my objections to the Ethan Winer "four measurements".  It wasn't so much that they are wrong, per se, but that each category is a jumbled mismatch of strange bedfellows.

...and everyone always seems to have their own way of grouping them.

For example, harmonic distortion is not a CAUSE of distortion, it's a product of distortion.

your thoughts?
Title: AES 2009 Audio Myths Workshop
Post by: ExUser on 2010-03-22 16:20:36
For example, harmonic distortion is not a CAUSE of distortion, it's a product of distortion.
It is neither a cause nor a product of distortion. It is a type of distortion. It is a measurable characteristic of an electrical system.
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-22 16:29:26
First, you're assuming that all components will always be operated strictly within their linear region which in pro audio is not always true.


Not really. The actual assumption is that if someone wants to operate all components in a useful signal chain in their linear region, that is generally practical and feasible.

Everybody who understands gain staging understands that keeping all components in a useful signal chain in their linear region can take a little planning and skill.  The ability of unskilled or careless people to make big messes can't be overstated.

Quote
Second you're assuming that all production examples of a specific part will always adhere strictly to their published spec.


Not really.

1. Many important properties of much gear are simply not fully specified.
2. Specified performance is often far better than the minimum required to be effective.
3. Much equipment performs far better than specified, in many ways.
4. The performance of much digital equipment is unbelievably consistent. For example, noise floors are often created by digital means and are therefore identical for every piece of equipment that works at all.

Quote
In reality this is NEVER the case - there is always a tolerance range.

NO GIVEN PART EVER EXACTLY MATCHES THE SPEC SHEET.


So what? Please see items 1-4 above.

Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-22 16:33:48
A defined pass band is a natural property of *any* real world system. Therefore presuming one does not necessarily require any specific qualifiers at all.

Since we (hopefully) all know that we are talking about audio, we also know that the traditional audio passband is 20 Hz - 20 kHz.  As a rule, digital systems lack inherent, practically significant LF roll-offs. That leaves only 20 kHz as a possible area of discussion. We just went through a period of weirdness a few years back where a few people tried to profiteer by extending the HF passband well above 20 kHz and making false claims about audible benefits. That madness seems to have pretty much evaporated since then, at least in the mainstream. Furthermore, people who actually invested big money in that weirdness received negligible rewards, millions were lost by a few large corporations, and a few executives simply lost their jobs.

So, the reality is not actually any different from what Notat said,  unless I somehow missed something.



My understanding of the engineering design reason for extended bandwidth systems is really quite different than yours.  Instead of it being some kind of hyperbolic "madness" as you call it, it was really a lot more about improving the linearity of the hardware within the defined passband.

It's no secret that transient response is directly related to bandwidth, after all.  That's been part of signal processing curriculum since about the time they started sending telegraph signals across the country, putting the pony express out of business!

The information and anecdotal stories I've heard point to extended bandwidth having very little to do with the idea of audibility of signals above 20k, but more that having that bandwidth moves a lot of intractable electronic component design problems up out of the pass band...such as filter ripple, slew rate problems, pre-echo of IIR, phase anomalies at the crossover points of filters, etc.

This is the thinking and modus operandi of manufacturers who make gear for serious audio production.  What it translated to in the marketing materials aimed at audiophile home theater listeners is a world I spend very little time in. 

As I've said before, I think it is a fundamental mistake to conflate the professional audio production market with the audiophile market...the needs and goals are entirely orthogonal to each other.

Your thoughts?
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-22 16:44:21
I must confess...I've never heard of "linear distortion".  What is it?  Isn't ALL distortion by definition non-linear?

Systems can have only 4 general kinds of signal response faults: linear distortion (frequency and/or phase response errors), nonlinear distortion, random noise, and coherent interfering signals.

Maybe you wouldn't classify frequency response errors as "distortion". Not everyone does. It's just a terminology thing. Nothing to get hung up on.



So, basically, when Arnold says "linear distortion" then says "non-linear distortion", he means the same thing?


Not at all.

I believe that the following reference has just lately been cited, but the question above shows that people must not be taking the citation very seriously:

link to Wikipedia article about distortion (http://en.wikipedia.org/wiki/Distortion)

So, to repeat something that is pretty fundamental in audio:

1. Linear distortion - signal processing errors that are commonly quantified by frequency response and phase response estimates. 

2. Nonlinear distortion - processing errors that are commonly quantified by THD, IM, jitter, and flutter and wow.

3. Noise - random signal source that in nature usually has a thermal origin. Pseudorandom noise is usually the result of trying to approximate random noise by digital means.

4. Interfering signals - things like power line hum, communication signals, harmonics from switchmode power supplies, etc.
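[Editor's note: category 2 is routinely quantified as THD: drive the system with a pure tone and compare harmonic power to the fundamental. A minimal sketch, with a `tanh` soft clipper standing in for a hypothetical mildly overdriven device:]

```python
import numpy as np

fs, f0, n = 48000, 1000, 48000  # 1 s at 48 kHz puts 1 kHz exactly on bin 1000
t = np.arange(n) / fs
x = 0.9 * np.sin(2 * np.pi * f0 * t)
y = np.tanh(2 * x) / 2  # stand-in for a mildly overdriven device

spec = np.abs(np.fft.rfft(y)) / n
fundamental = spec[f0]
harmonics = spec[[2 * f0, 3 * f0, 4 * f0, 5 * f0]]  # 2nd through 5th harmonics
thd = np.sqrt(np.sum(harmonics ** 2)) / fundamental
print(f"THD: {100 * thd:.2f}%")
```

A real measurement would also integrate the noise floor (THD+N) and sweep frequency and level, but the principle is the same.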

Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-22 16:47:44
Quote
Second you're assuming that all production examples of a specific part will always adhere strictly to their published spec.


Not really.

1. Many important properties of much gear are simply not fully specified.
2. Specified performance is often far better than the minimum required to be effective.
3. Much equipment performs far better than specified, in many ways.
4. The performance of much digital equipment is unbelievably consistent. For example, noise floors are often created by digital means and are therefore identical for every piece of equipment that works at all.



That's a pretty bold statement, pardner!  If I was to post this, you would jump on me and ride me like a circus pony, for offering a bunch of unsubstantiated conjecture.

Would you like to offer a bit of supporting documentation or evidence for this rather bold assertion?

I will offer a bit of my own, to start the ball rolling:  The Creative Labs soundblaster card has recently-published specs that state that the card's A/D and D/A produce THD + IMD + Noise of 0.002%.  By anyone's standards, that's pretty spankin' good!  TWO THOUSANDTHS of ONE PERCENT. 

Wow.

HOWEVER.... when you get down into the "mouse type" at the bottom of the spec sheet, buried in the legal stuff and sales contact information, there is a little "qualifier":  the "reference signal" for the specification is a 1 kHz sine wave at full input range level.

In other words, if you want to reproduce 1kHz sines, then BUY THIS CARD.  It will excel.  But if you attempt to put anything remotely resembling a music program through it, well, it's caveat emptor.

Your thoughts?  I'd like to see your examples too!
Title: AES 2009 Audio Myths Workshop
Post by: pdq on 2010-03-22 16:50:01
As I've said before, I think it is a fundamental mistake to conflate the professional audio production market with the audiophile market...the needs and goals are entirely orthogonal to each other.

Your thoughts?

If by orthogonal you mean that professional audio equipment must work well and audiophile equipment is used to pick people's pockets then I quite agree. Other than that there should NOT be any great difference between them.
Title: AES 2009 Audio Myths Workshop
Post by: Notat on 2010-03-22 16:57:36
But I'm wondering.  You seem to be making a very sweeping and absolute statement here.  The way I'm reading this, you aren't saying anything about audibility or "as far as is reasonably necessary", you're saying flat-out that they are linear, period.  Is that a correct reading of your intent?

Yes. Mathematicians are in it for the sweeping and absolute statements. The math doesn't lie (I've shown you the proof) and the processor does the math correctly. The only place audibility is involved is in choosing sample rate (bandwidth) and bit resolution (S/N ratio).
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-22 17:01:03
As I've said before, I think it is a fundamental mistake to conflate the professional audio production market with the audiophile market...the needs and goals are entirely orthogonal to each other.

Your thoughts?

If by orthogonal you mean that professional audio equipment must work well and audiophile equipment is used to pick people's pockets then I quite agree. Other than that there should NOT be any great difference between them.


I mean the word "orthogonal" in the taxonomic sense.  And, I specifically said "market".  A particular piece of equipment is just a particular piece of equipment, and its suitability of purpose in any given market is defined by the market, not the equipment.  In an absolute sense, it either works, or it doesn't.
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-22 17:09:07
A defined pass band is a natural property of *any* real world system. Therefore presuming one does not necessarily require any specific qualifiers at all.

Since we (hopefully) all know that we are talking about audio, we also know that the traditional audio passband is 20 Hz - 20 kHz.  As a rule, digital systems lack inherent, practically significant LF roll-offs. That leaves only 20 kHz as a possible area of discussion. We just went through a period of weirdness a few years back where a few people tried to profiteer by extending the HF passband well above 20 kHz and making false claims about audible benefits. That madness seems to have pretty much evaporated since then, at least in the mainstream. Furthermore, people who actually invested big money in that weirdness received negligible rewards, millions were lost by a few large corporations, and a few executives simply lost their jobs.

So, the reality is not actually any different from what Notat said,  unless I somehow missed something.



My understanding of the engineering design reason for extended bandwidth systems is really quite different than yours.  Instead of it being some kind of hyperbolic "madness" as you call it, it was really a lot more about improving the linearity of the hardware within the defined passband.


Yes and no.

If it is 1962 and you are Stuart Hegeman and you are designing a Citation II Tubed Power Amp for Dr. S Harmon MD, then you may find that you need to control response up to 50 or 100 kHz in order to have good performance at 20 kHz.
If it is 2010 and you are looking at DSP-based digital audio hardware to buy, it is entirely possible that equipment with a brick wall filter 100 dB deep at 22 kHz will have response within 0.1 dB at 20 kHz, and its phase response will be "linear phase". IOW, it will behave like a short delay.

Quote
It's no secret that transient response is directly related to bandwidth, after all.


What seems to be a secret is what sort of transient response actually matters for audio. Some people show off the sort of 10 kHz square waves that come out of CD players and want us all to say Yecch! They are misleading people.

Quote
That's been part of signal processing curriculum since about the time they started sending telegraph signals across the country, putting the pony express out of business!


That would relate more to people's thinking before the era of modern psychoacoustics. Read Zwicker and Fastl yet?

Quote
This is the thinking and modus operandi of manufacturers who make gear for serious audio production.


It is the boutique segment of the pro audio world that wants us to look at 10 kHz square waves from CD players and lust after their new "high resolution" toys.  Unfortunately this includes people who should know better.

Quote
What it translated to in the marketing materials aimed at audiophile home theater listeners is a world I spend very little time in.


To me audio is audio, from musician to listener.

Quote
As I've said before, I think it is a fundamental mistake to conflate the professional audio production market with the audiophile market...the needs and goals are entirely orthogonal to each other.


I was very happy to deconstruct your last thoughts on that, but I don't think you ever responded.

Did you read it before the grass was cut?

Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-22 17:13:39
But I'm wondering.  You seem to be making a very sweeping and absolute statement here.  The way I'm reading this, you aren't saying anything about audibility or "as far as is reasonably necessary", you're saying flat-out that they are linear, period.  Is that a correct reading of your intent?

Yes. Mathematicians are in it for the sweeping and absolute statements. The math doesn't lie (I've shown you the proof) and the processor does the math correctly. The only place audibility is involved is in choosing sample rate (bandwidth) and bit resolution (S/N ratio).



very good.  Now, we've determined that as idealized theoretical systems, the digital console or DAW should be perfectly linear.

Now, this was all in the context of whether a hypothetical system is ever attainable in the real world.  Since no DAW can ever be useful if you can't get audio into the darned thing, then we have to include the A/D converters in the model, no?

You've asserted that the systems "do the math correctly".  If I take that as a given, then I ask you to explain the anti-aliasing filters on those "pesky" converters.  Specifically, most modern designs don't really use analog brick wall filters anymore...too much ripple and phase non-linearity down into the passband.  The analog filter has been replaced or complemented by a digital filter.  But some digital filters introduce their own problems....pre-echo is one.  IIR filters (implemented on the digital side) have minimized pre-echo, but introduce phase anomalies!

Thus, while the idealized console/DAW itself is "perfect", there is NO perfect idealized model of one of the primary, key components, the A/D converter. 

So, I ask...is this important to consider when you make your statement?
Title: AES 2009 Audio Myths Workshop
Post by: Notat on 2010-03-22 17:27:08
0.002% is -94 dB. It is impressive but, as compared to state of the art, not IMPRESSIVE.

Using a 1 kHz stimulus is an accepted means of doing this sort of measurement. The fact that they went to the trouble to describe how they did the measurement puts them above average in this department.

If you want to get a better idea of how equipment will sound with more realistic stimulus, you can do swept versions of the test - plotting distortion vs. frequency and/or amplitude. (Using DSP it is now possible to do these tests with normal program material as the test stimulus. The live sound guys routinely do this to assess acoustic performance of their systems during shows - cool stuff.)

For working equipment, the resulting graphs consume printed space, are boring and not many people know how to read them.
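[Editor's note: for anyone checking the arithmetic, the percent-to-dB conversion Notat is using is just 20·log10 of the ratio:]

```python
import math

def percent_to_db(pct):
    # A distortion figure quoted in percent is a voltage ratio: 0.002% = 2e-5.
    return 20 * math.log10(pct / 100)

print(f"{percent_to_db(0.002):.1f} dB")  # 0.002% -> -94.0 dB
```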
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-22 17:27:41
Thus, while the idealized console/DAW itself is "perfect", there is NO perfect idealized model of one of the primary, key components, the A/D converter.


A/D and D/A converter chips are among the most perfected of all audio components within the scope of their natural use and purpose. Check the spec sheets for TI's PCM4220 and PCM4222 chips, and the spec sheets for the previous generation of hotties by AKM and Cirrus.

We've had about 5 years of experience with the latter two families of chipt, and they turuned out to be every bit as good as they were claimed to be.

Furthermore, most if not all digital consoles can be operated entirely in the digital domain.  Of course you've heard of the Neumann TLM103D, right?
Title: AES 2009 Audio Myths Workshop
Post by: greynol on 2010-03-22 17:28:19
Did you read it before the grass was cut?

The trimmings can be found here (http://www.hydrogenaudio.org/forums/index.php?showtopic=79620), btw.  Feel free to quote anything there that is actually on-topic if you wish as there are some things there that were ok.  Unfortunately the stuff that was ok also contained stuff that wasn't (and still isn't!).  Common sense should dictate (hint: ad-hominem is not ok).
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-22 17:31:44


Arnold, do you really assert that the "niche" aspect of the market is relevant?  Is this not about objectivity? 

If anything the market for pro audio gear is far far larger than the market represented by fools that will spend $3000 on a damn cable!

I am confused.  On the one hand, I have people posting about the theoretical perfection of digital systems, and on the other, I have you posting that it's all subjective, subject to hearing acuity...and you seem to present yourself as a qualified arbiter of "my" hearing acuity.

One reason that I keep beating this horse is that the listener market cares about exactly one thing:  moving a single L/R signal (maybe a 5.1 signal...) through some boxes and out into air, but the audio production market is concerned with the creation and manipulation of many stages of intermediate products and artifacts, where for the most part any issues with non-linearity will be additive.  This represents SUCH a fundamental difference that I claim we CANNOT conflate the two! 

It's like claiming that all vodka is just vodka.  If you think that audiophiles will want to draw-and-quarter you for your statements, wait until you try telling a Grey Goose imbiber that "Smirnoff will do".
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-22 17:33:28
So, basically "a digital console or workstation" is not "ALL digital consoles or workstations", but only the specific ones that only do summing, gain changes, and...panning.  Every OTHER digital console is not included in your term?


Most good digital consoles and DAWs have features that include nonlinear operations. The usual "your gun, your bullet, your feet" warning should apply.

IOW giving a DAW some nonlinear processing features doesn't take it out of our consideration if you turn the nonlinearities on and off at will.

However, complaining about what happens when you intentionally turn the nonlinearities on sort of qualifies you for a seat in the corner with a pointed hat, no?
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-22 17:39:28
IOW giving a DAW some nonlinear processing features doesn't take it out of our consideration if you turn the nonlinearities on and off at will.

However, complaining about what happens when you intentionally turn the nonlinearities on sort of qualifies you for a seat in the corner with a pointed hat, no?



Well...perhaps.  I'm just wracking my brain to come up with a mix of ANY popular music song that has experienced ANY critical and/or sales acclaim, that just uses gain and pan.

Care to cite one?  two?  If, in the 100,000 songs or so that have been released by a major label for airplay, you can find even THREE that use only gain and pan in their mix, I will gladly concede the point.

So, I guess that I am in a lot of very good company, sitting in that corner!
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-22 17:42:32
Furthermore, most if not all digital consoles can be operated entirely in the digital domain.  Of course you've heard of the Neumann TLM103D, right?


You can do bedroom techno and industrial using just digitally created sounds.  Doesn't dilute my point.  Does that microphone make any stop in the voltage/frequency domain, or does it directly create a digital stream from sound pressure gradients?

If yes, then it's just a converter that has been repackaged, no?
Title: AES 2009 Audio Myths Workshop
Post by: 2Bdecided on 2010-03-22 17:44:14
I can't find a question to answer in all this mess. If someone can't understand that yes, non-linear distortion is impossible to correct once stacked, but that it doesn't matter because there's no audible (and almost no measurable) non-linear distortion in the equipment we're talking about, where can you go from there?
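The stacking claim is easy to sanity-check numerically. A minimal sketch, assuming a generic weak cubic nonlinearity per stage (an illustrative model, not any particular piece of gear): cascade ten such stages and measure the third harmonic that accumulates.

```python
import numpy as np

fs, f0, n = 48000, 1000, 48000
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f0 * t)   # exactly 1000 cycles: harmonics land on FFT bins

def stage(sig, a=1e-5):
    # one "imperfect" gain stage: adds a tiny cubic nonlinearity
    return sig + a * sig**3

y = x.copy()
for _ in range(10):              # ten cascaded imperfect stages
    y = stage(y)

spec = np.abs(np.fft.rfft(y))
fund = spec[f0 * n // fs]        # bin of the 1 kHz fundamental
h3 = spec[3 * f0 * n // fs]      # bin of the 3 kHz third harmonic
db = 20 * np.log10(h3 / fund)
print(db)                        # about -92 dB
```

Each stage alone contributes roughly -112 dB of third harmonic (a/4 of the fundamental); ten of them add roughly coherently to about -92 dB. The stacking is real and not removable after the fact, but at these levels it stays far below audibility.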

Cheers,
David.
Title: AES 2009 Audio Myths Workshop
Post by: ExUser on 2010-03-22 17:44:37
The point Arnold is making is that DAWs only introduce non-linearities when their user tells them to.
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-22 17:45:49
0.002% is -94 dB. It is impressive but, compared to the state of the art, not IMPRESSIVE.

Using a 1 kHz stimulus is an accepted means of doing this sort of measurement. The fact that they went to the trouble to describe how they did the measurement puts them above average in this department.

If you want to get a better idea of how equipment will sound with more realistic stimulus, you can do swept versions of the test - plotting distortion vs. frequency and/or amplitude. (Using DSP it is now possible to do these tests with normal program material as the test stimulus. The live sound guys routinely do this to assess acoustic performance of their systems during shows - cool stuff.)

For working equipment, the resulting graphs consume printed space, are boring and not many people know how to read them.



Does that spec, as defined and referenced, tell me ANYTHING about how that equipment is going to handle the crack of a snare?

No.  The most I can infer from that spec as defined, is that when I hit that snare drum, the card will PASS SIGNAL.  It makes zero representation of what that signal will actually look like.
Title: AES 2009 Audio Myths Workshop
Post by: Notat on 2010-03-22 17:45:58
Thus, while the idealized console/DAW itself is "perfect", there is NO perfect idealized model of one of the primary, key components, the A/D converter. 

So, I ask...is this important to consider when you make your statement?

If you put imperfect signals into a digital console, you'll get a linear combination of imperfect signals coming out. The console can't possibly do any better than this. Once convinced of that, no, it doesn't need to be considered further. Converter performance can be taken as a separate problem.
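The "linear combination" point can be illustrated with a sketch (assuming a gain-and-sum-only bus running in double precision, which is how DAW mix buses typically work): push a signal through 100 gain changes and measure what the arithmetic itself added.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(48000)     # an arbitrary "imperfect" input signal

gains = rng.uniform(0.5, 2.0, size=100)
y = x.copy()
for g in gains:                    # 100 successive gain changes on the bus
    y = y * g
y = y / np.prod(gains)             # undo the net gain in one step

# Everything the "console" contributed is floating-point rounding error
err = np.max(np.abs(y - x))
print(20 * np.log10(err / np.max(np.abs(x))))  # hundreds of dB below the signal
```

No distortion or noise is added beyond rounding at roughly the -290 dB level; any imperfection in the output was already in the input.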
Title: AES 2009 Audio Myths Workshop
Post by: googlebot on 2010-03-22 17:50:48
One reason that I keep beating this horse, is that the listener market cares about exactly one thing:  moving a single L/R signal through some boxes, and out into air.  (maybe 5.1 signal...),  but the audio production market is concerned with the creation and manipulation of many stages of intermediate products and artifacts, where for the most part, any issues with non-linearity will be additive.


I don't understand what you're after, which kind of devices? Especially in the pro audio field, an all-digital data path is nothing extraordinary. And a digital data path means zero signal alteration unless you do processing, and even that error will be logical (e.g. a rounding error) and not a physical property. The only analog elements that introduce non-linearities are those that you deliberately introduce into the audio path for their specific sound signature (such as tube compressors). But who in hell would rant about the possible stacking effect of such a device, if you have brought it into the chain especially for its distorting properties?

Does that spec, as defined and referenced, tell me ANYTHING about how that equipment is going to handle the crack of a snare?


Of course, what do you think isn't covered by a common ADC spec with regard to what you want?
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-22 17:51:05
Arnold, do you really assert that the "niche" aspect of the market is relevant?  Is this not about objectivity?


Common sense says that the presence of a few exceptions does not necessarily invalidate the rule.


Quote
If anything the market for pro audio gear is far far larger than the market represented by fools that will spend $3000 on a damn cable!


At some time in the recent past the market for pro audio gear and the market for high end audio were about the same in customers and dollars. AFAIK the revolution in portable digital audio has juggled everything up w/r/t high end and general consumer audio. For example, the market for products that turn an iPod into a stationary player is said to now be about the same size as the market for all other mainstream home audio components.  The actual market for $3,000 cables has always been minuscule AFAIK.

Quote
I am confused.  On the one hand, I have people posting about the theoretical perfection of digital systems, and on the other, I have you posting that it's all subjective, subject to hearing acuity...and you seem to present yourself as a qualified arbiter of "my" hearing acuity.


If you turn off all the EFX a digital console should be very, very perfect and ideal, and they seem to actually be that way.  What I'm posting about subjectivity and hearing acuity is that the *requirements* for audio performance have to be related to hearing ability.  The digital perfection that is possible with a digital console is therefore overkill.

As far as me being the arbiter of your hearing ability goes, I actually know nothing about your personal hearing ability, except that it is very likely no better than the best that has ever been reliably observed for any human being.

Quote
One reason that I keep beating this horse, is that the listener market cares about exactly one thing:  moving a single L/R signal through some boxes, and out into air.  (maybe 5.1 signal...),


Very much a 5.1 signal, since such a high percentage of all stationary and even much mobile listening is actually HT or A/V.

Quote
but the audio production market is concerned with the creation and manipulation of many stages of intermediate products and artifacts, where for the most part, any issues with non-linearity will be additive.


While Ethan and the rest of us are willing to debunk stacking, we're not going after additive distortion. As I watched Ethan's video, I saw a very orthodox presentation about additive distortion.


Quote
This represents SUCH a fundamental difference that I claim we CANNOT conflate the two!


In theory we can't conflate them, but Ethan provided a well-documented orthodox argument that I gathered and presented the evidence for back in 2002 at my now-departed www.pcabx.com web site.  The bottom line is that most good equipment performs so well that any reasonable cascading of audio gear is still unlikely to cause audible problems.

As more and more audio gear gets digital inputs and outputs, we just keep more and more of the processing solidly in the digital domain, where there is inherently zero added distortion and zero added noise, unless adding them is exactly what we want to do.
Title: AES 2009 Audio Myths Workshop
Post by: ExUser on 2010-03-22 17:55:29
Thus, while the idealized console/DAW itself is "perfect", there is NO perfect idealized model of one of the primary, key components, the A/D converter.
There are several. The big question is: which one? http://en.wikipedia.org/wiki/Analog-to-dig...#ADC_structures (http://en.wikipedia.org/wiki/Analog-to-digital_converter#ADC_structures)
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-22 18:03:43
IOW giving a DAW some nonlinear processing features doesn't take it out of our consideration if you turn the nonlinearities on and off at will.

However, complaining about what happens when you intentionally turn the nonlinearities on sort of qualifies you for a seat in the corner with a pointed hat, no?



Well...perhaps.  I'm just wracking my brain to come up with a mix of ANY popular music song that has experienced ANY critical and/or sales acclaim, that just uses gain and pan.


Show me a public archive of detailed logs of all processing and equipment settings, from rehearsal through tracking, mic to pressed CD, for a representative selection of popular songs, and there is a possibility of a logical conversation. AFAIK nobody is even keeping those logs, and if they existed, they would be highly proprietary.  So, if we try to generalize about them, we have a strong possibility of talking out the backs of our necks.

As far as compression and limiting go, I commented on the practical non-reversibility of them here within maybe the last week.

HA Thread about reversing dynamics processing (http://www.hydrogenaudio.org/forums/index.php?showtopic=79330)

This is all very OT as compared to Ethan's 2009 AES audio myths presentation.
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-22 18:05:16
Of course, what do you think isn't covered by a common ADC spec with regard to what you want?


Well...last time I looked at a snare drum hit hard, it looked NOTHING like a 1kHz sine waveform.  It had a very fast-rising transient.  It had a vast multitude of enharmonic and inharmonic partials, at least two and maybe 4 fundamental frequencies, a whole lot of high frequency content well past 20k (which we will ignore because arnold said so).

A lot of these characteristics could be and are damaged by the analog front-end stages of that card...maybe once it gets into bits it's perfect, but damn.

I think you've told me all I need to know.  I think I may now understand the context that you are discussing this in.  It doesn't happen to match my personal context, but I think I understand the mapping a bit better!
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-22 18:05:51
Thus, while the idealized console/DAW itself is "perfect", there is NO perfect idealized model of one of the primary, key components, the A/D converter.
There are several. The big question is: which one? http://en.wikipedia.org/wiki/Analog-to-dig...#ADC_structures (http://en.wikipedia.org/wiki/Analog-to-digital_converter#ADC_structures)


Every contemporary high-performance audio converter chip that I know of is Sigma-Delta.  Several variations on the basic theme exist.
Title: AES 2009 Audio Myths Workshop
Post by: greynol on 2010-03-22 18:12:12
I think I may now understand the context that you are discussing this in.  It doesn't happen to match my personal context, but I think I understand the mapping a bit better!

Hopefully your personal context doesn't stray too far away from TOS #8 (http://www.hydrogenaudio.org/forums/index.php?showtopic=3974), otherwise this forum requires that you keep it squarely to yourself.

...and yes, there are things in the Audio Myths Workshop video that do not fulfill TOS #8.
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-22 18:17:26
Of course, what do you think isn't covered by a common ADC spec with regard to what you want?


Well...last time I looked at a snare drum hit hard, it looked NOTHING like a 1kHz sine waveform.  It had a very fast-rising transient.  It had a vast multitude of enharmonic and inharmonic partials, at least two and maybe 4 fundamental frequencies, a whole lot of high frequency content well past 20k


That question was resolved about 2 centuries back by a guy named Fourier.

Every band-limited wave  (don't care how high the band limit is, just that it must be finite) can be analyzed and accurately characterized as a collection of sine and cosine waves. 

This isn't just an oddity of math, because it is now routine to convert audio into one of these collections of sines and cosines, and then convert the collection of sines and cosines back into a regular wave as part of normal digital audio processing.

The reconstructed wave looks right, subtracts well from the original leaving essentially nothing (you can make the essentially nothing as small as you want by simply using longer data words), and it sounds exactly right to everybody who bothers to do a good sensitive listening test.

BTW, the process is completely linear - intelligent but simple addition & subtraction of the sines and cosines. I've done this at clock rates up to 10 MHz, so don't rag on me about ignoring the high frequency components.
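The "subtracts well from the original leaving essentially nothing" part is easy to reproduce with an FFT/inverse-FFT round trip; a sketch in double precision (nothing here is specific to any converter):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(4096)      # any block of band-limited samples

X = np.fft.rfft(x)                 # analyze into sines/cosines
y = np.fft.irfft(X, n=len(x))      # ...and resynthesize the wave

residual = np.max(np.abs(y - x))   # "essentially nothing"
print(20 * np.log10(residual))     # far below -250 dB in double precision
```

Using longer data words (here 64-bit floats) is exactly what pushes the residual this far down.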
Title: AES 2009 Audio Myths Workshop
Post by: googlebot on 2010-03-22 18:19:43
Of course, what do you think isn't covered by a common ADC spec with regard to what you want?


Well...last time I looked at a snare drum hit hard, it looked NOTHING like a 1kHz sine waveform.  It had a very fast-rising transient.  It had a vast multitude of enharmonic and inharmonic partials, at least two and maybe 4 fundamental frequencies, a whole lot of high frequency content well past 20k (which we will ignore because arnold said so).

A lot of these characteristics could be and are damaged by the analog front-end stages of that card...maybe once it gets into bits it's perfect, but damn.




I know that capturing extreme transients with inherently band-passed systems can be tricky without experience, but a sensibly chosen input gain combined with a halfway-decent dynamic range in your ADC is usually all that's needed. And that's nothing specific to digital audio.
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-22 18:27:33
Also, I have found, FINALLY, a couple references that actually use the words "linear distortion".  The references seem to be specific to loudspeakers,


It turns out that in the eyes of most orthodox audio authorities, loudspeakers are just about the only remaining kind of audio component where nonlinear distortion is even interesting. So people who work with loudspeakers tend to be more rigorous about these kinds of definitions, because in their work they still deal with a lot of audible distortion of both kinds.

Quote
and seem to anecdotally mention level-dependency as a factor, but not as a definitional term.


Nonlinear distortion is usually level-dependent. There are a few cases where nonlinear distortion is not level dependent (half wave rectification comes quickly to mind) but generally it is level-dependent.
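The contrast can be sketched numerically (the `thd` helper below is mine, a crude total-distortion ratio rather than a standards-grade measurement): half-wave rectification distorts by the same ratio at any level, while a soft clipper's distortion grows with level.

```python
import numpy as np

fs, f0, n = 48000, 1000, 48000
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f0 * t)     # exactly 1000 cycles: no spectral leakage

def thd(y):
    # crude distortion ratio: energy outside the fundamental bin / fundamental
    spec = np.abs(np.fft.rfft(y - y.mean()))
    fund = spec[f0 * n // fs]
    rest = np.sqrt(np.sum(spec**2) - fund**2)
    return rest / fund

print(thd(np.maximum(0.1 * x, 0)), thd(np.maximum(1.0 * x, 0)))  # same ratio
print(thd(np.tanh(0.1 * x)), thd(np.tanh(2.0 * x)))              # grows with level
```

The rectifier scales linearly with input amplitude, so its distortion ratio is level-independent (about 43% here); the tanh clipper is nearly linear at low level and grossly nonlinear when driven hard.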



Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-22 18:38:22
IOW giving a DAW some nonlinear processing features doesn't take it out of our consideration if you turn the nonlinearities on and off at will.

However, complaining about what happens when you intentionally turn the nonlinearities on sort of qualifies you for a seat in the corner with a pointed hat, no?



Well...perhaps.  I'm just wracking my brain to come up with a mix of ANY popular music song that has experienced ANY critical and/or sales acclaim, that just uses gain and pan.


Show me a public archive of detailed logs of all processing and equipment settings, from rehearsal through tracking, mic to pressed CD, for a representative selection of popular songs, and there is a possibility of a logical conversation. AFAIK nobody is even keeping those logs, and if they existed, they would be highly proprietary.  So, if we try to generalize about them, we have a strong possibility of talking out the backs of our necks.



for the purposes of this discussion, I think we need not be quite so rigorous!  For example, reverb will kick it off the list, as will mixing into compression on the 2-buss (but let's ignore compression applied in mastering), any major EQ, compression or gating on drums, compression on lead vocal, etc.  All stuff that you can easily hear even on earbuds from an iPod with an mp3! 

Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-22 18:42:08
Of course, what do you think isn't covered by a common ADC spec with regard to what you want?


Well...last time I looked at a snare drum hit hard, it looked NOTHING like a 1kHz sine waveform.  It had a very fast-rising transient.  It had a vast multitude of enharmonic and inharmonic partials, at least two and maybe 4 fundamental frequencies, a whole lot of high frequency content well past 20k


That question was resolved about 2 centuries back by a guy named Fourier.

Every band-limited wave  (don't care how high the band limit is, just that it must be finite) can be analyzed and accurately characterized as a collection of sine and cosine waves. 

This isn't just an oddity of math, because it is now routine to convert audio into one of these collections of sines and cosines, and then convert the collection of sines and cosines back into a regular wave as part of normal digital audio processing.

The reconstructed wave looks right, subtracts well from the original leaving essentially nothing (you can make the essentially nothing as small as you want by simply using longer data words), and it sounds exactly right to everybody who bothers to do a good sensitive listening test.

BTW, the process is completely linear - intelligent but simple addition & subtraction of the sines and cosines. I've done this at clock rates up to 10 MHz, so don't rag on me about ignoring the high frequency components.


Yes, quite.  But you're answering a different question.  What, in the specification of a component processing a sine at 1 kHz at full-range amplitude, can I extrapolate to a full-range complex signal?  Beyond faith, hope, and charity, that is?  (The reference to Pandora's box is very deliberate!)
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-22 18:50:58

Ok, so now, by the elite and competent office of Arnold Krueger, Canar, pdg, and googlebot, I think I have arrived at a conclusion.

if:

A) all modern digital equipment is by definition highly linear;

B) highly linear equipment is transparently high fidelity;

C) converters, even cheap ones, are nearing ideal linearity;

D) pretty much all modern equipment has published specs that are testably well-below audibility for distortion, euphonic or otherwise;

E) published specs represent a median or worst-case, typically, therefore the actual performance is probably even better than the already-non-audible specs...

THEREFORE:

In assembling and setting up a music production system, the notions of "choice" and "preference" are essentially irrelevant, sonically speaking, so basically ANY SYSTEM that I can assemble out of ANY current-vintage gear, that isn't demonstrably broken, will be higher fidelity than I will ever need, or in fact, ever perceive?

Title: AES 2009 Audio Myths Workshop
Post by: ExUser on 2010-03-22 18:59:23
D) pretty much all modern equipment has published specs that are testably well-below audibility for distortion, euphonic or otherwise;
This is the falsest part of your logic.
Title: AES 2009 Audio Myths Workshop
Post by: Ethan Winer on 2010-03-22 19:09:32
Last week I wrote:

Am I the only person who has noticed that not one of the nay-sayers has presented a single audio example to prove their point? All they do is try to tear down all of the examples in my video, and call me wrong, but never once have they shown their own example and said what's right.

Dwoz, please post some audio files showing that dither is audible on pop music recorded at sensible levels. Please post an example showing when jitter is audible. Please show us that phase shift can be heard. Please prove with an audio file that stacking is not a myth. And so forth.

Now dwoz writes:

I can construct a set of files that demonstrate a stacking effect that cannot be removed via an inverse function on the sum

Coulda woulda shoulda. But never did. This is a big problem with you. It's called "All talk and no action."

Look, whoever you are, I spent half a year preparing for that workshop. I created numerous graphs and drawings and audio examples to prove my points. Then I made a detailed video with all of those examples and highly detailed explanations. So far all I see from you is "You're wrong" with nothing to back it up. Again I ask, where are your examples proving that jitter, dither, usual amounts of phase shift, and stacking etc are an audible problem? Where is your hard proof that more than four parameters are needed to define fidelity? You've wasted hundreds and hundreds of posts around various forums trying to make your points, yet you have not once succeeded. Does that not tell you anything?

--Ethan
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-22 19:12:32
  • How can you be sure what that specific snare's waveform should look like?
  • If you are sure, how do you know that your mic, its placement and the room's characteristics, is delivering what you expect, but the ADC's "analog front end stage" isn't?
  • Regarding high frequencies: ABX the >20 kHz record against a version processed with a high-quality low-pass. You would be the first to be able to tell a difference. Don't get anyone wrong, I don't know anybody here who would promote mixing at 44.1 kHz. As you say, high resolution eases filtering constraints, and producing at nothing higher than the delivery format is not worth the hassle.


I know that capturing extreme transients with inherently band-passed systems can be tricky without experience, but a sensibly chosen input gain combined with a halfway-decent dynamic range in your ADC is usually all that's needed. And that's nothing specific to digital audio.


Ok, let me address these one-by-one?

first...how can I tell what the snare "should" look like?  Because I can monitor the snare both out in the room, and through console monitoring before it hits the converters.  I can evaluate whether what came from the mic is usable and desirable, and gauge its general fidelity and/or acceptability (two different things!).

second...I think I just addressed that.  If I can hear the sound in my monitors, pre-converter, that I heard out in the room, then I know I've captured a usable signal.  If it comes out the other end of the converter somehow different, then I only have one place to point the finger of blame.

third...review my post.  I didn't claim that I could hear >20k stuff, nor was I interested in capturing it.  BUT....you've bothered me with the rest of this point.  Why not mix at 44.1k?  Or record higher?  Arnold Krueger has informed me that modern A/D filters can be 100dB down at 22kHz, and dead linear, phase-frequency-amplitude, at 20kHz.  And once it's in the digital domain, Notat and Canar have informed me that the digital system is far closer to ideally linear than will ever matter to a human ear.

So why would I want to mix higher than 44.1k?  EVER?  I know TWO people who would promote mixing at 44.1k....Notat and Canar...and possibly pdg and Arnold Krueger. 

What you've stated in this point, completely invalidates what they're saying.  Am I just misinterpreting it?  Would you care to re-state or clarify?
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-22 19:26:04
Last week I wrote:

Am I the only person who has noticed that not one of the nay-sayers has presented a single audio example to prove their point? All they do is try to tear down all of the examples in my video, and call me wrong, but never once have they shown their own example and said what's right.

Dwoz, please post some audio files showing that dither is audible on pop music recorded at sensible levels. Please post an example showing when jitter is audible. Please show us that phase shift can be heard. Please prove with an audio file that stacking is not a myth. And so forth.

Now dwoz writes:

I can construct a set of files that demonstrate a stacking effect that cannot be removed via an inverse function on the sum

Coulda woulda shoulda. But never did. This is a big problem with you. It's called "All talk and no action."

Look, whoever you are, I spent half a year preparing for that workshop. I created numerous graphs and drawings and audio examples to prove my points. Then I made a detailed video with all of those examples and highly detailed explanations. So far all I see from you is "You're wrong" with nothing to back it up. Again I ask, where are your examples proving that jitter, dither, usual amounts of phase shift, and stacking etc are an audible problem? Where is your hard proof that more than four parameters are needed to define fidelity? You've wasted hundreds and hundreds of posts around various forums trying to make your points, yet you have not once succeeded. Does that not tell you anything?

--Ethan



Ethan...I'm not anonymous.  I'm not even famous, and my competence is certainly open to question if not interpretation.

But this isn't about ME.  I don't matter.

You tried to do something very good and commendable.  I applaud that.  But when you open the door of the steel rebar shark cage and go swimming with the big fishes, you have to make sure your kit is in order.

My only concern is that in 5 years this stuff starts coming back and turning into "settled fact", when in reality it is not quite settled.  Your video has lots of GREAT stuff in it, and yet just enough stuff that's problematic to cause a "fail".  It only takes one turd in the pool to cancel swimming lessons for the rest of the day.

Here's what I suggest:  Consider this whole process to be a form of peer review.  Socratic method, all that kind of jazz.  Take this challenge, and apply it to what you've done in that video, and come back with something that leaves me with no other option but to shut up and say "yup."

By the way...I didn't say that different dithers were audible.  What I said was that different dithers are chosen based on the downstream processing that will occur.  Some kinds of dither are "fragile" with subsequent processing.

(by the way...about that whole, ad hominem thing....)
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-22 19:49:27
D) pretty much all modern equipment has published specs that are testably well-below audibility for distortion, euphonic or otherwise;
This is the falsest part of your logic.


2. Specified performance is often far better than the minimum required to be effective.
3. Much equipment performs far better than specified, in many ways.
4. The performance of much digital equipment is unbelievably consistent. For example, noise floors are often created by digital means and are therefore identical for every piece of equipment that works at all.


Canar...can you tell me whether your statement agrees with Arnold's?  They seem to be inconsistent to me.  I was basing my point on Arnold's statement.
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-22 19:59:35
THEREFORE:

In assembling and setting up a music production system, the notions of "choice" and "preference" are essentially irrelevant, sonically speaking, so basically ANY SYSTEM that I can assemble out of ANY current-vintage gear, that isn't demonstrably broken, will be higher fidelity than I will ever need, or in fact, ever perceive?


For the most part, high fidelity is currently pretty much all about rooms and transducers. For recording the transducers are of course microphones, and for playback the transducers are loudspeakers, headphones, and earphones, with the latter of course obviating concerns about rooms.

No lack of challenges in any of those areas!
Title: AES 2009 Audio Myths Workshop
Post by: googlebot on 2010-03-22 19:59:50
first...how can I tell what the snare "should" look like?  Because I can monitor the snare both out in the room, and through console monitoring before it hits the converters.  I can evaluate whether what came from the mic is usable and desirable, and gauge its general fidelity and/or acceptability (two different things!).


You did not answer how you know what the waveform should look like, only what you have heard. Either you have overdriven your ADC, the ADC sucks badly, or you are one of a kind. Level-matched, double-blind comparisons of halfway-decent ADC/DAC combos vs. straight wires usually yield only one result: inability to differentiate.

We are both just two random, anonymous guys on the internet. If I knew you in person I would bet you $1000 that you're not that one of a kind. I know you are probably sure that everything is decent and set up correctly. But I have been there, too - and I swallowed the pill. The brain is a master at changing actual perceptions by context.
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-22 20:01:59
THEREFORE:

In assembling and setting up a music production system, the notion of "choice" and "preference" are essentially irrelevant, sonically speaking, so basically ANY SYSTEM that I can assemble out of ANY current-vintage gear, that isn't demonstrably broken, will be higher fidelity than I will ever need, or in fact, ever perceive?


For the most part, high fidelity is currently pretty much all about rooms and transducers. For recording the transducers are of course microphones, and for playback the transducers are loudspeakers, headphones, and earphones, with the latter of course obviating concerns about rooms.

No lack of challenges in any of those areas!



Can I take this then as "yes"?
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-22 20:06:23
first...how can I tell what the snare "should" look like?  Because I can monitor the snare both out in the room, and through console monitoring before it hits the converters.  I can evaluate whether what came from the mic is usable and desirable, and gauge its general fidelity and/or acceptability (two different things!).


You did not answer how you know what the waveform should look like, other than from what you have heard. You have either overdriven your ADC, or you are one of a kind. Level-matched, double-blind comparisons of halfway decent ADC/DAC combos vs. straight wires usually yield only one result: inability to differentiate.

We are both just two random, anonymous guys on the internet. If I knew you in person I would bet you $1000 that you're not one of a kind.


I guess the answer is that I "look" with my ears.  If I am standing in the room with the instrument, and hear that...and move to the control room, and audition the mic on the monitors, and I hear "that"....and then I audition the return from the converter and it's "that"...then we're good, right?  If there's a difference anywhere, I have to ponder why, and figure out how to fix it.

And you're quite right, if you drive the poor thing over a cliff, it will crash.

So...in an ABX, I should be unable to tell the difference between a soundblaster and a bare wire...right?  If I'm actually human, that is...
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-22 20:12:34
You tried to do something very good and commendable.  I applaud that.  But when you open the door of the steel rebar shark cage and go swimming with the big fishes, you have to make sure your kit is in order.


Two points.

One is that, believe it or not, the world does not contain any big fishies of the kind that Ethan has not already dealt with.

The second is that there was at least one fishie on stage with Ethan, and had Ethan said anything unorthodox, that fishie would have eaten Ethan alive on the spot - the fishie being James Johnson (JJ).

Quote
My only concern, is that in 5 years this stuff starts coming back and turning into "settled fact", when in reality it is not quite settled.


The presentation was at an AES meeting, which has additional large fishies in it that would have been happy to race JJ for pieces of Ethan to gnaw on.

Quote
Your video has lots of GREAT stuff in it, and yet just enough stuff that's problematic, to cause a "fail".


Like Ethan I'm looking for the stuff that is problematical. Where's the beef?


Quote
Here's what I suggest:  Consider this whole process to be a form of peer review.  Socratic method, all that kind of jazz.  Take this challenge, and apply it to what you've done in that video, and come back with something that leaves me with no other option but to shut up and say "yup."


I see no challenge.

Quote
By the way...I didn't say that different dithers were audible.  What I said was that different dithers are chosen based on the downstream processing that will occur.  Some kinds of dither are "fragile" with subsequent processing.


New Science, anybody?  If everything downstream is doing its job, there is no such thing as fragile or durable dither. If you have concerns about downstream processing, you just use more dither than the bare minimum.

Also, perhaps you missed the discussion about self-dithered program material?  It is for real. I first encountered it in a digital transcription of a 1/2" 15 ips stereo tape.  The person who did the transcription provided digital files of the same tape transcribed using various kinds of dither. In most cases there was no discernible change to the noise floor because of the relatively large amounts of analog tape noise and other environmental noise that was already there.
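For readers unclear on what "just TPDF" means in practice, here is a toy Python sketch (not anyone's production code; the signal, word length, and seed are all arbitrary choices for the demo). TPDF dither is the sum of two independent uniform random values, each spanning half a quantization step, added before rounding:

```python
import math
import random

def tpdf_quantize(x, step):
    """Quantize x to a grid of the given step, adding TPDF dither first.

    The triangular-PDF dither decorrelates the quantization error from the
    signal, so truncation products that would otherwise show up as harmonic
    distortion become a steady, benign noise floor instead.
    """
    dither = (random.random() - 0.5 + random.random() - 0.5) * step
    return step * round((x + dither) / step)

random.seed(1)
step = 2.0 / 2 ** 16  # one LSB at 16 bits, full scale -1.0..+1.0
sine = [0.25 * math.sin(2 * math.pi * 1000 * n / 44100) for n in range(1000)]
out = [tpdf_quantize(s, step) for s in sine]

# worst-case per-sample error: half a step of rounding plus up to one
# step of dither, i.e. bounded by 1.5 LSB
worst = max(abs(a - b) for a, b in zip(sine, out))
```

This also illustrates the self-dithering point: if the source already carries analog noise spanning a few LSBs, that noise performs the same decorrelation, so adding more dither changes nothing audible.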

Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-22 20:14:28
THEREFORE:

In assembling and setting up a music production system, the notion of "choice" and "preference" are essentially irrelevant, sonically speaking, so basically ANY SYSTEM that I can assemble out of ANY current-vintage gear, that isn't demonstrably broken, will be higher fidelity than I will ever need, or in fact, ever perceive?


For the most part, high fidelity is currently pretty much all about rooms and transducers. For recording the transducers are of course microphones, and for playback the transducers are loudspeakers, headphones, and earphones, with the latter of course obviating concerns about rooms.

No lack of challenges in any of those areas!



Can I take this then as "yes"?


Yes to what? Your comment did not break the equipment down into types.
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-22 20:22:50
So...in an ABX, I should be unable to tell the difference between a soundblaster and a bare wire...right?  If I'm actually human, that is...


In general, yes. Some of us have done the actual comparison.

ABX is easy to do if you have a PC with a really good sound card and a good software ABX comparator.

There are a number of freely-downloadable ABX comparators.

If you have concerns about soundblasters, then get a PC with an audio interface that is really good, a lot better than a Soundblaster. Maybe a LynxTWO, an M-Audio Delta 24192, or an Emu 1616.  Or the high end outboard converter of your dreams and any card with digital I/O which includes some SoundBlasters.

Ethan's web site has some files to download and play with that essentially do a lot of the legwork for you. Or do your own legwork. It's just some simple work with a good DAW.
Title: AES 2009 Audio Myths Workshop
Post by: googlebot on 2010-03-22 20:23:56
So...in an ABX, I should be unable to tell the difference between a soundblaster and a bare wire...right?  If I'm actually human, that is...


The Soundblaster is quite old, so maybe not. But a modern, consumer Realtek HD audio codec, at half the price of what Soundblaster cards used to cost, can probably do it.

That is for one pass. For a large number of loop-backs, higher-priced gear probably does make a difference.
Title: AES 2009 Audio Myths Workshop
Post by: ExUser on 2010-03-22 20:30:13
dwoz, you seem very eager to take generalizations we are making and try to turn them into absolutes. Of course there's crappy hardware you can find where noise floors are going to be audible, or THD is audible, or it has fault X. The big problem is that a lot of the alleged differences between hardware evaporate when people actually test it. Furthermore, just because a difference is inaudible doesn't mean it's not measurably different either.
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-22 20:35:36
dwoz, you seem very eager to take generalizations we are making and try to turn them into absolutes. Of course there's crappy hardware you can find where noise floors are going to be audible, or THD is audible, or it has fault X. The big problem is that a lot of the alleged differences between hardware evaporate when people actually test it. Furthermore, just because a difference is inaudible doesn't mean it's not measurably different either.



Thank you, Canar.  Whew, that was a lot of work!  I appreciate your patience with me.  This stuff can really get you wrapped around the axle if you're not careful!
Title: AES 2009 Audio Myths Workshop
Post by: Notat on 2010-03-22 20:36:24
For the most part, high fidelity is currently pretty much all about rooms and transducers. For recording the transducers are of course microphones, and for playback the transducers are loudspeakers, headphones, and earphones, with the latter of course obviating concerns about rooms.

Through measurement, you do find that the majority of the infidelity is where you say, in transducers and acoustics. To say that problems in the electronics are insignificant in comparison is like saying a 3 kHz tone is insignificant in comparison to higher-level background noise. We are very good at hearing past acoustics and through transducer imperfections. In many cases these effects/imperfections are euphonic. It is not insane for recording engineers and audio enthusiasts to pay attention to details several orders of magnitude below what you would consider to be the primary imperfections. One man's imperfection is another man's character. Remember that there's art and science in what we do. Those who have an appreciation of both are going to be the most successful.
Title: AES 2009 Audio Myths Workshop
Post by: Ethan Winer on 2010-03-22 21:12:40
Your video has lots of GREAT stuff in it, and yet just enough stuff that's problematic, to cause a "fail".

You've been saying that forever, but so far you have not proven one thing wrong in my entire video. Please either show your own examples proving me wrong, or stop saying I'm wrong. And please stop mis-quoting me for crying out loud.

Quote
Consider this whole process to be a form of peer review.

My peers have already reviewed it and for the most part seem satisfied. They're trying hard to explain it to you - the guy who says he doesn't matter. And you'd be correct that you don't matter if you weren't blabbing "Ethan is wrong" without proof all over every forum where my video is being discussed.

Quote
Some kinds of dither are "fragile" with subsequent processing.

Either provide hard proof with compelling audible examples, or stop calling me wrong. It's getting very tiresome, and not only to me. If you don't understand something in my video, please ask and I'll explain it to you. If you still don't understand, I'll try to explain it again. Simple, yes?

--Ethan
Title: AES 2009 Audio Myths Workshop
Post by: Ethan Winer on 2010-03-22 21:18:00
Folks,

There are many examples of people mis-quoting me, and others digging through my video to find what exactly was said and where. So I just uploaded the video script with timing marks so anyone can download it. It's on the same page as all the audio files that accompany my video:

AES Workshop Video Files (http://www.ethanwiner.com/aes/)

I hope this helps people on both sides of the discussion.

--Ethan
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-22 23:08:15
For the most part, high fidelity is currently pretty much all about rooms and transducers. For recording the transducers are of course microphones, and for playback the transducers are loudspeakers, headphones, and earphones, with the latter of course obviating concerns about rooms.

Through measurement, you do find that the majority of the infidelity is where you say, in transducers and acoustics.


So far so good.

Quote
To say that problems in the electronics are insignificant in comparison is like saying a 3 kHz tone is insignificant in comparison to higher level background noise.


I don't get that at all. The harmonics and IM products that are created by transducers are basically the same as those generated by electronics except that the transducers make far more of them and start creating them at far lower levels.

Quote
We are are very good at hearing past acoustics


So far so good.

Quote
and through transducer imperfections.


My experiences say not at all.

Quote
In many cases these effects/imperfections are euphonic.


My experiences say not at all.

Quote
It is not insane for recording engineers and audio enthusiasts to pay attention to details several orders of magnitude below what you would consider to be the primary imperfections.


I don't get that at all.

Quote
One man's imperfection is another man's character.


I'm not buying any of that, either.

What is true is that 40% nonlinearity in an organ pipe is different than 40% nonlinearity in a woofer, because an organ pipe makes only one tone at a time, while a woofer makes multiple tones at the same time. Single tone = no IM. Multiple tones = IM.

How nonlinearity in an organ pipe is different from nonlinearity in a guitar amp is demonstrated when one plays multiple notes at the same time. In the organ, multiple tones means multiple pipes, one tone per pipe. In an electric guitar, it all goes through the same woofer, and reducing nonlinearity in that woofer is of the essence. Playing just one string at a time is not uncommon on a bass guitar, which has the practical effect of reducing IM. When multiple tones are played on a bass guitar their frequencies are often far enough apart that only one of them is actually in the range of greatest nonlinearity, which also reduces IM. In your hi-fi, you can't count on any of those things happening, so having a reasonably linear woofer can be very important.
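The single-tone-vs-multiple-tone point is easy to demonstrate numerically. The Python sketch below (all values invented for the demo) pushes one tone, then two, through the same mild second-order nonlinearity: the lone 1 kHz tone produces no energy at 100 Hz, but adding a 1.1 kHz tone creates a 100 Hz difference product, i.e. IM:

```python
import math

def dft_bin_mag(samples, freq, rate):
    """Magnitude of one DFT bin via direct correlation (naive but clear)."""
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * freq * i / rate) for i, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * freq * i / rate) for i, s in enumerate(samples))
    return 2.0 * math.hypot(re, im) / n

rate, n = 48000, 4800  # 0.1 s window; 100, 1000 and 1100 Hz all fit integer cycles

def nonlinear(x):
    # a mild second-order nonlinearity: 10% squared term
    return x + 0.1 * x * x

one_tone = [nonlinear(math.sin(2 * math.pi * 1000 * i / rate)) for i in range(n)]
two_tones = [nonlinear(math.sin(2 * math.pi * 1000 * i / rate)
                       + math.sin(2 * math.pi * 1100 * i / rate)) for i in range(n)]

# energy at the 100 Hz difference frequency: essentially zero for the single
# tone (its distortion lands only on DC and 2 kHz), amplitude ~0.1 for the pair
im_single = dft_bin_mag(one_tone, 100.0, rate)
im_pair = dft_bin_mag(two_tones, 100.0, rate)
```

The squared term turns the product of the two sines into cos(a-b) and cos(a+b) components, which is exactly where the 100 Hz and 2.1 kHz IM products come from.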

As far as so-called euphony in tubed hi-fi amps goes, it turns out that linear distortion, due to interactions between the amp's high source impedance and speaker impedance variations, is likely the most obvious audible effect.

Quote
Remember that there's art and science in what we do. Those who have an appreciation of both are going to be the most successful.


I agree with the idea that recording is both art and science, but the room for selling distortion as art goes downhill very fast on the reproduction side.

I record and listen all of the time. I'm constantly changing the linear distortion I add on the record side, but I rarely have the need to change it on the playback side.
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-22 23:35:00
Quote
Second you're assuming that all production examples of a specific part will always adhere strictly to their published spec.


Not really.

1. Many important properties of much gear are simply not fully specified.
2. Specified performance is often far better than the minimum required to be effective.
3. Much equipment performs far better than specified, in many ways.
4. The performance of much digital equipment is unbelievably consistent. For example, noise floors are often created by digital means and are therefore identical for every piece of equipment that works at all.



That's a pretty bold statement, pardner!



Umm, may I humbly point out that it is actually 4 statements?

Quote
If I was to post this, you would jump on me and ride me like a circus pony, for offering a bunch of unsubstantiated conjecture.


I acquired that information the old-fashioned way: I listened and measured until I had it.

Quote
Would you like to offer a bit of supporting documentation or evidence for this rather bold assertion?


From maybe 1996 till about 2007 I ran a web site named www.pcavtech.com that showed detailed technical measurements of over 100 pieces of pro and consumer gear.  I posted a link to it here just lately. Seems like another waste of my time. Check out www.goback.com

Quote
I will offer a bit of my own, to start the ball rolling:  The Creative Labs soundblaster card has recently-published specs that state that the card's A/D and D/A produce THD + IMD + Noise of 0.002%.  By anyone's standards, that's pretty spankin' good!  TWO THOUSANDTHS of ONE PERCENT. 

Wow.

HOWEVER....you get down into the "mice type" at the bottom of the spec sheet, buried in the legal stuff and sales contact information, is a little "qualifier":  the "reference signal" for the specification, is a 1kHz sine wave at full input range level.


That is pretty much a standard way to do things.

Quote
In other words, if you want to reproduce 1kHz sines, then BUY THIS CARD.  It will excel.  But if you attempt to put anything remotely resembling a music program through it, well, it's caveat emptor.


You are being paranoid.

Remember:

1. Many important properties of much gear are simply not fully specified.
2. Specified performance is often far better than the minimum required to be effective.
3. Much equipment performs far better than specified, in many ways.
4. The performance of much digital equipment is unbelievably consistent. For example, noise floors are often created by digital means and are therefore identical for every piece of equipment that works at all.

Quote
Your thoughts?  I'd like to see your examples too!


One reason why I dropped PCAVTech is that so many other people were doing a better job.  There is a little freeware program called the "Audio Rightmark" that runs a pretty complete set of tech tests on sound cards. This means that anybody who is barely competent with a computer and has an appropriate set of audio interconnects can test whatever audio interface is at hand. People are doing this all over the world, and with a little searching you can come up with the results of Audio Rightmark tests that were performed on just about anything that has been sold.

So google up the Audio Rightmark test(s) that have been run on this card and you will no doubt find out far more information about its performance.
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-23 00:27:36
I guess the answer is that I "look" with my ears.  If I am standing in the room with the instrument, and hear that...and move to the control room, and audition the mic on the monitors, and I hear "that"


Now I know for sure that you are not an experienced recordist at all.

If you have any hearing acuity at all, the live sound in the studio is *never* anything like what you'll hear in the control room. I've been in some pretty wonderful studios, with incredible mics, consoles, and monitors, and no, it wasn't at all the same.

Ditto for live recording outside the studio.

Quote
....and then I audition the return from the converter and it's "that"...then we're good, right?  If there's a difference anywhere, I have to ponder why, and figure out how to fix it.


You've obviously never done this either, especially level-matched. The input and output from any halfway-decent converter pair are as alike as the proverbial peas in the pod, only closer.

BTW, listening to the before and after of an ADC/DAC pair was the essence of the JAES article that debunked hi-resolution recordings for many. The so-called hi-rez recordings that were being auditioned were made at clock rates up to 192K, but the ADC/DAC pair were running at 44.1K. Nobody heard no differences, no how. Compared to your average recording conference eggspurt, the JAES listening comparison was level-matched and double blind.  Much of what you read in those places is about bias, not reliable listening.
Title: AES 2009 Audio Myths Workshop
Post by: Woodinville on 2010-03-23 09:11:44
I guess the answer is that I "look" with my ears.  If I am standing in the room with the instrument, and hear that...and move to the control room, and audition the mic on the monitors, and I hear "that"


Now I know for sure that you are not an experienced recordist at all.



Um.

Arny.

It is possible, perhaps, is it not, to have a good idea what a mic will do if you have a lot of experience, yes?
Title: AES 2009 Audio Myths Workshop
Post by: 2Bdecided on 2010-03-23 10:15:48
As more and more audio gear gets digital inputs and outputs, we just keep more and more of the processing solidly in the digital domain, where there is inherently zero added distortion and zero added noise, unless adding them is exactly what we want to do.
You're never going to get away with saying that (since any digital processing inherently adds noise - albeit below the original noise floor, given enough bits).

Cheers,
David.

Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-23 12:15:58
As more and more audio gear gets digital inputs and outputs, we just keep more and more of the processing solidly in the digital domain where there is inherently zero added distortion and zero added noise, unless adding them is exactly what we want to do.

You're never going to get away with saying that (since any digital processing inherently adds noise - albeit below the original noise floor, given enough bits).


The first problem here is that the baseline for evaluating digital processing is not zero noise. The processing *must* happen; it's either going to happen in the analog domain or the digital domain. The question is not whether or not digital processing adds noise, but whether it adds more or less noise than the analog alternative. Because of the inherent noise and distortion that analog circuitry adds, it never has infinite resolution. That noise floor is set by things like thermal noise.

If you put a signal into the digital domain, it need not have any noise added to it while it is there. You are just significantly constrained as to what processing you do to it while it is there. You can do a few useful things, but not everything you might want to do.

If you want freedom of choice as to what processing you do, then you still have the option of doing that processing with arbitrary levels of precision. The only inherent limit to precision in the digital domain is that the precision must be finite.  You can have as much precision as you can line up digital hardware to implement it. The price/performance of digital processing is great and continues to improve at a rapid rate.  This compares with the analog domain, where thermal noise is always down there smiling up at you from the bottom of an irreducible well that isn't all that deep.

Let me put this into a real world perspective. With a little care in gain staging, one can mix signals in an analog console and maintain more than 90 dB dynamic range. Any outboard processing you do has similar limits, somewhere between 80 and 100 dB.  With a digital console that moves up to well beyond 120 dB. Digital outboard processing has similar limits.

Frankly, if dynamic range were the only issue there would be no problem with analog consoles. With all their inherent and likely faults they can still be used effectively. The first 3 reasons why I favor digital consoles have nothing to do with sound quality for simply mixing signals.



Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-23 12:20:31
It is possible, perhaps, is it not, to have a good idea what a mic will do if you have a lot of experience, yes?


Of course. The difference between what you hear in the studio and the control room need not be surprising or unexpected. It will be a surprise to anybody who doesn't expect it. It will be a frustration to anybody who thinks it should not be there. But it is always there, and the mic(s) is not the only reason. That's why it is good to work in familiar circumstances. You know what differences to expect and accept, and which differences represent problems that you need to address.
Title: AES 2009 Audio Myths Workshop
Post by: 2Bdecided on 2010-03-23 13:43:52
The first problem here is that the baseline for evaluating digital processing is not zero noise.
But you said there was zero noise, which is untrue. Then you posted five paragraphs to try to dig yourself out.

I know we both understand the issues very well indeed - you probably even better than me - but if you're going to say things that are simply untrue, and then write five paragraphs which don't include the words "I wrote the wrong thing" (because you're incapable of ever being wrong, even when you are), you're going to give these audiofools a field day. And probably turn HA into r.a.o in the process.

btw, I'm still hoping for a discussion of the "four measurements" proposed by Ethan, carried out here at HA under HA rules.

Cheers,
David.
Title: AES 2009 Audio Myths Workshop
Post by: Tahnru on 2010-03-23 14:48:16
btw, I'm still hoping for a discussion of the "four measurements" proposed by Ethan, carried out here at HA under HA rules.


I believe they can be found in this video (http://www.youtube.com/watch?v=BYTlN6wjcvQ) at 21:34

They are:
  • Frequency Response
  • Distortion - THD, IMD, aliasing "birdies"
  • Noise - hiss, hum & buzz, vinyl crackles
  • Time-based Errors - wow, flutter, jitter


[EDIT] Corrected the time to be more accurate [/EDIT]
Title: AES 2009 Audio Myths Workshop
Post by: 2Bdecided on 2010-03-23 16:47:34
OK, I'll bite.

It says "the four audio parameters", not "measurements".

I think Frequency response needs to explicitly include amplitude and phase.

I think it's probably true to say that any audio "fault" can be characterised as falling within one of these categories, or even more accurately, that any underlying fault will cause an effect that falls into one of those categories. Even so, I'm not sure where reverb would fall into this. It might show up on a frequency response measurement. We could argue about frequency response vs impulse response - but that's a pointless argument.

Far more important IMO is that this list implies an oversimplification that doesn't hold in the real world - just because the effect of any "fault" falls into one of these four categories doesn't mean there are four measurements that can catch any fault. Ethan doesn't say this of course - there are two specific measurements listed under the single category of distortion, for example.

My point is this: we generally use measurements tailored to the specific faults we expect to find - "tailored" both in terms of revealing them, and in terms of giving us data in a domain and form that makes some sense, or reveals something useful. I really wonder if we can define a set of measurements which would catch every possible fault - both now, and in the future. I doubt it, but I'm fairly sure that any comprehensive attempt to get close will need more than four.


Here's a practical example: put lossyWAV into a stand alone box, including a slight delay which is itself very slightly varying in a random way (i.e. an inaudible amount of flutter). What measurements will characterise that black box properly?

If we can leave the "I must be right / you must be wrong" level of argument at the door, it would be much appreciated. This genuinely interests me, and it's more of a challenge than people like to admit - especially when they're arguing with audiofools who want to turn it all into black magic (which it isn't). But let's have a grown-up discussion please.

Cheers,
David.
Title: AES 2009 Audio Myths Workshop
Post by: Ethan Winer on 2010-03-23 17:44:41
I'm still hoping for a discussion of the "four measurements" proposed by Ethan, carried out here at HA under HA rules.

Me too!
Title: AES 2009 Audio Myths Workshop
Post by: Ethan Winer on 2010-03-23 17:57:28
I think it's probably true to say that any audio "fault" can be characterised as falling within one of these categories, or even more accurately, that any underlying fault will cause an effect that falls into one of those categories.

Yes, that's a good way to put it.

Quote
I'm not sure where reverb would fall into this. It might show up on a frequency response measurement. We could argue about frequency response vs impulse response - but that's a pointless argument.

I don't consider reverb effects in my four parameters because, at heart, reverb is an "external effect" that happens acoustically in enclosed spaces. Yes, it can be emulated by hardware and software devices, so you can still assess frequency response and distortion.

Quote
put lossyWAV into a stand alone box, including a slight delay which is itself very slightly varying in a random way (i.e. an inaudible amount of flutter). What measurements will characterise that black box properly?

I now realize I should have added a disclaimer in my video about lossy compression. My Audiophoolery (http://www.ethanwiner.com/audiophoolery.html) and Audiophile beliefs (http://www.ethanwiner.com/believe.html) articles, on which my video is based, mention excluding lossy compression:

Quote
tests have shown repeatedly that most modern gear has a frequency response that's acceptably flat (within a fraction of a dB) over the entire audible range, with noise, distortion, and all other artifacts well below the known threshold of audibility. (This excludes products based on lossy compression such as MP3 players and satellite radio receivers.)

I'll let others who are more expert than me explain what is "left out" in lossy files. (I'll guess it's frequency response that changes dynamically.) But clearly, a delay of any type will fall under time-based errors.

--Ethan
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-23 18:14:34
D) pretty much all modern equipment has published specs that are testably well-below audibility for distortion, euphonic or otherwise;
This is the falsest part of your logic.


I guess if you can't support your unsupported assertion with any relevant facts, then I don't have to take the time to debunk it.
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-23 18:17:57
The first problem here is that the baseline for evaluating digital processing is not zero noise.


but you said there was zero noise. Which is untrue. Then posted five paragraphs to try to dig yourself out.


I said that there was zero added noise.  I can't believe you're holding me responsible for noise that came in the input terminals.
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-23 18:26:15
btw, I'm still hoping for a discussion of the "four measurements" proposed by Ethan, carried out here at HA under HA rules.


I believe they can be found in this video (http://www.youtube.com/watch?v=BYTlN6wjcvQ) at 21:34

They are:
  • Frequency Response
  • Distortion - THD, IMD, aliasing "birdies"
  • Noise - hiss, hum & buzz, vinyl crackles
  • Time-based Errors - wow, flutter, jitter


[EDIT] Corrected the time to be more accurate [/EDIT]



Well Ethan, since your list of 4 is different from my list of 4, we could discuss *that*.

My opening shot would be that your list is incomplete, since it doesn't explicitly mention static phase shift, and it splits up nonlinear distortion into two separate categories.
Title: AES 2009 Audio Myths Workshop
Post by: googlebot on 2010-03-23 18:32:24
I said that there was zero added noise.


That's also not true. Almost every modification (except volume changes by powers of 2) of digital samples adds noise. The amount is very low, but not zero. There was a thread recently about exactly how low.
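It is easy to put a number on "very low". A hypothetical fixed-point sketch in Python (word length, gain, and test tone all chosen arbitrarily for the demo): re-rounding after a gain that is not a power of two adds at most half an LSB of error per sample, a noise floor far below what any analog stage achieves:

```python
import math

BITS = 24
SCALE = 2 ** (BITS - 1)  # 24-bit signed full scale

def apply_gain(samples, gain):
    """Scale integer samples and round back to the integer grid."""
    return [round(s * gain) for s in samples]

# a half-scale 997 Hz sine at 24 bits, 48 kHz, 0.1 s
sig = [round(0.5 * SCALE * math.sin(2 * math.pi * 997 * i / 48000))
       for i in range(4800)]
out = apply_gain(sig, 0.7)  # a non-power-of-two gain, so rounding occurs

# per-sample rounding error is bounded by half an LSB
errs = [abs(o - s * 0.7) for s, o in zip(sig, out)]
rms_err = math.sqrt(sum(e * e for e in errs) / len(errs))
rms_sig = math.sqrt(sum((s * 0.7) ** 2 for s in sig) / len(sig))
snr_db = 20 * math.log10(rms_sig / rms_err)  # well beyond 130 dB here
```

A gain of exactly 0.5 or 2.0, by contrast, maps the integer grid onto itself (a bit shift), so no rounding error is introduced at all.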
Title: AES 2009 Audio Myths Workshop
Post by: Ethan Winer on 2010-03-23 18:54:56
your list is incomplete since it doesn't explicitly mention static phase shift

Understand that my "list" is meant mainly as broad categories of the parameters that affect audio reproduction. Static phase shift would fall under time-based errors, since some frequencies exit the output terminals at a different time than other frequencies.

Quote
it splits up nonlinear distortion into two separate categories.

There are even more than that if we include aliasing and jitter and truncation distortion, in addition to IM and THD. And of course there's overlap, since wherever you have THD you also have IMD (except maybe in contrived cases). But all of these still fall broadly under "distortion" IMO, unless you can think of a better classification.

The next step is probably to devise a list of all the subsets of each parameter. I listed a lot in my video and in my Audiophoolery (http://www.ethanwiner.com/audiophoolery.html) article, but there are certainly more. For example, I omitted crosstalk as a subset of noise in the original article, so I added that just yesterday. But I'm glad to discuss the broad category headings if you can think of any I missed. Or we can discuss other ways to think of this. If not four broad categories with subsets, what makes more sense? Again, my point is not to list all the specific measurements needed for audio gear, but just to define what parameters affect the sound.

--Ethan
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-23 19:02:13
I think it's probably true to say that any audio "fault" can be characterised as falling within one of these categories, or even more accurately, that any underlying fault will cause an effect that falls into one of those categories.


I agree with that in principle. We can call them categories of faults, or categories of errors.

One of the mistakes made by people who misunderstand Ethan's list is to equate categories of fault with measurements. The basic misapprehension these critics have is that there need only be one measurement to fully characterize a given kind of fault, which is not exactly true.

In fact you can measure common instances of all four kinds of faults with just one measurement (e.g. multitone). The reverse is also true - it can take more than one measurement to characterize complex faults.

Quote
Even so, I'm not sure where reverb would fall into this.


If reverb is due to a linear process, and it usually is, then it is a form of linear distortion. Reverb is usually the result of delaying the signal, possibly filtering it with a linear filter, and then linearly adding it back to itself. The delay is a special case of phase shift.

Quote
It might show up on a frequency response measurement. We could argue about frequency response vs impulse response - but that's a pointless argument.


Reverb does show up in a FR measurement, usually as some kind of comb filtering effect.
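That delay-and-add picture can be sketched directly; a minimal illustration, where the sample rate, delay, and gain are arbitrary choices, not numbers from the thread:

```python
import numpy as np

# A single echo: y[n] = x[n] + g * x[n - d]. Delay, scale, and add are all
# linear operations, so the echo is fully described by a frequency
# response - which turns out to be a comb.
fs = 48000   # sample rate, Hz (illustrative)
d = 48       # delay of 1 ms
g = 0.5      # echo gain

def H(f):
    """Frequency response of 1 + g*z^-d evaluated at f Hz."""
    return 1 + g * np.exp(-2j * np.pi * f * d / fs)

# The echo adds in phase at multiples of fs/d (every 1 kHz here) and
# cancels halfway between, producing the comb-filter ripple:
print(abs(H(1000)))   # peak:  |1 + g| = 1.5
print(abs(H(1500)))   # notch: |1 - g| = 0.5
```

Sweeping f over the audio band traces out the comb that a frequency-response measurement of a reverberant path reveals.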

Quote
Far more important IMO is that this list implies an oversimplification that doesn't hold in the real world - just because the effect of any "fault" falls into one of these four categories doesn't mean there are four measurements that can catch any fault.



Quote
Ethan doesn't say this of course - there are two specific measurements listed under the single category of distortion, for example.


This is why I list 2 different kinds of distortion, linear and nonlinear.  I further gave examples of both kinds of distortion.  There are actually two kinds of linear distortion - amplitude modulation distortion and frequency modulation distortion. THD and IM measure amplitude modulation nonlinear distortion, while jitter, flutter and wow measure frequency modulation nonlinear distortion.

Quote
My point is this: we generally use measurements tailored to the specific faults we expect to find - "tailored" both in terms of revealing them, and in terms of giving us data in a domain and form that makes some sense, or reveals something useful.


That is more habit and custom than necessity.  Our ability to analyze signals shot up rapidly when we started doing the analysis with computers.  If you study the more recent literature of audio measurements there have been a number of papers discussing newer approaches. Papers by Gene Czerwinski and Richard Cabot come quickly to mind.

Quote
I really wonder if we can define a set of measurements which would catch every possible fault - both now, and in the future.


The answer is generally yes.  Old relics like THD and IM are artifacts of the days when only very simple equipment was available to generate test signals and analyze them.

A great deal can be determined with no specific test signal at all - there is readily available software that analyzes both linear and nonlinear distortion by automatically developing linear and nonlinear models of the system under test. Mathematically, this is called identification.

Several programs, including SMAART, measure linear transfer functions.

The essence of Klippel's speaker distortion measurement system is the mathematical process of parameter identification: comparing observations of the real system's operation against a model's, under various test signals.  The software just tunes the parameters of the model until it works like the real system.
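The parameter-identification idea can be sketched in miniature; this is a toy model, not Klippel's actual system, and the cubic coefficient and stimulus are invented for illustration:

```python
import numpy as np

# Parameter identification in miniature: the "unknown" system applies a
# small cubic nonlinearity, and we recover its coefficients from
# input/output observations alone by least-squares fitting a polynomial model.
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 5000)             # test stimulus

def unknown_system(x):
    return 1.0 * x + 0.02 * x ** 3       # unit gain + slight cubic distortion

y = unknown_system(x)

# Model: y ~ a1*x + a2*x^2 + a3*x^3. Solve for the parameters.
A = np.column_stack([x, x ** 2, x ** 3])
a1, a2, a3 = np.linalg.lstsq(A, y, rcond=None)[0]
print(a1, a2, a3)   # a1 ~ 1.0, a2 ~ 0, a3 ~ 0.02: the hidden parameters
```

Once the model's parameters reproduce the observations, the model itself tells you how much distortion the real system produces.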

Quote
Here's a practical example: put lossyWAV into a stand alone box, including a slight delay which is itself very slightly varying in a random way (i.e. an inaudible amount of flutter). What measurements will characterise that black box properly?


The random delay can be measured by the usual means for measuring FM or phase distortion. I am unfamiliar with lossyWAV.  However, I reject this line of argumentation because it is an intellectual game that sheds little light on the problems we need to solve in the real world.

Quote
If we can leave the "I must be right / you must be wrong" level of argument at the door, it would be much appreciated.


Well, Doctor, cure yourself. You played that game a number of times in just this post. You made unfounded assertions.

Quote
This genuinely interests me, and it's more of a challenge than people like to admit - especially when they're arguing with audiofools who want to turn it all into black magic (which it isn't). But let's have a grown up discussion please.


Well then leave the tricks, riddles, and unfounded assertions at the door.
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-23 19:20:57
your list is incomplete since it doesn't explicitly mention static phase shift

Understand that my "list" is meant mainly as broad categories of the parameters that affect audio reproduction. Static phase shift would fall under time-based errors, since some frequencies exit the output terminals at a different time than other frequencies.


The obvious cleaving point for distortion is linear distortion versus nonlinear distortion.  Linearity is well-defined and understood by many. One way to look at the situation is to say that distortion is anything that changes the shape of a wave, as opposed to simply changing the size of the wave, which is called amplification and is not distortion. Linear distortion is any distortion that obeys the rules of linear functions. Nonlinear distortion is any distortion that does not obey the rules of linear functions. Short, sweet, and intuitive.  It turns out that frequency response curves describe what happens when there is linear distortion. There is always a change to the shape of a wave when it passes through something with nonflat frequency response or that has any phase shift. However, if you have something that lacks nonlinear distortion, then if you put in twice as much, you get twice as much out.

Another way to divide up linear and nonlinear distortion is to observe that applying linear distortion to a signal never ever adds any new frequencies to the signal. Linear distortion only changes the amplitude and phase of the signals that are already there.  Conversely, nonlinear distortion always adds new frequencies - we often call them sidebands or sum or difference tones.
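That frequency-counting test is easy to demonstrate numerically; a minimal sketch, where the tone frequency, clip level, and detection threshold are arbitrary illustrative choices:

```python
import numpy as np

# Feed a pure 1 kHz tone through a linear operation (a gain change) and a
# nonlinear one (hard clipping), then list which frequencies carry energy.
fs = 48000
n = np.arange(fs)                        # one second of samples
tone = np.sin(2 * np.pi * 1000 * n / fs)

linear = 0.5 * tone                      # linear: just a volume change
nonlinear = np.clip(tone, -0.5, 0.5)     # nonlinear: symmetric clipping

def active_bins(x, thresh=1e-6):
    """Spectral bins (in Hz, since the signal is 1 s long) whose
    normalized magnitude exceeds a small threshold."""
    spec = np.abs(np.fft.rfft(x)) / len(x)
    return set(int(k) for k in np.nonzero(spec > thresh)[0])

print(active_bins(linear))                 # only {1000}: no new frequencies
print(sorted(active_bins(nonlinear))[:4])  # 1000 plus new odd harmonics
```

The gain change leaves only the original 1 kHz component; the clipper sprouts odd harmonics (3 kHz, 5 kHz, ...) that were not in the input, which is exactly the dividing line described above.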

Quote
Quote
it splits up nonlinear distortion into two separate categories.

There are even more than that if we include aliasing and jitter and truncation distortion, in addition to IM and THD. And of course there's overlap, since wherever you have THD you also have IMD (except maybe in contrived cases). But all of these still fall broadly under "distortion" IMO, unless you can think of a better classification.


No, jitter is just a subset of nonlinear distortion.  It follows the rule of adding new frequencies to the source signal.


Quote
The next step is probably to devise a list of all the subsets of each parameter. I listed a lot in my video and in my Audiophoolery (http://www.ethanwiner.com/audiophoolery.html) article, but there are certainly more. For example, I omitted crosstalk as a subset of noise in the original article, so I added that just yesterday.


From the standpoint of the signal that is actually contaminated by crosstalk, crosstalk is an interfering signal. It is something like noise, only it is deterministic.

Quote
But I'm glad to discuss the broad category headings if you can think of any I missed. Or we can discuss other ways to think of this. If not four broad categories with subsets, what makes more sense? Again, my point is not to list all the specific measurements needed for audio gear, but just to define what parameters affect the sound.


It's really a matter of picking the sets and subsets and giving them logical names. In fact this has been going on for decades. If we wanted to, we could just inform ourselves about the existing literature of audio and raise ourselves up on the shoulders of giants.

For example, the difference between linear and nonlinear distortion was described in detail in a paper by Preis, back in 1976.

D. Preis, "Linear Distortion," J. Audio Eng. Soc., vol. 24, pp. 346-367, June 1976.

In turn, this paper has a bibliography going back years and years.

And for a more modern discussion of the same general topic:

Earl R. Geddes and Lidia W. Lee, "Audibility of Linear Distortion with Variations in Sound Pressure Level and Group Delay," AES Convention 121 (October 2006).
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-23 19:49:20
I said that there was zero added noise.


That's also not true.


It is true for a useful but not totally complete range of operations.  This includes some immensely valuable biggies such as storage and transmission of data.

Furthermore, you can do whatever you want and add as little noise as you want by simply increasing the width of the data path for the calculations. 

Quote
Almost every modification (ex volume changes by multiples of 2) of digital samples add noise. The amount is very low, but not zero. There was a thread recently about exactly how low.


Not true for a range of operations including the types of editing that were all that we had with magnetic tape.

Why obsess over 0.00001 dB increases in a noise floor?

Title: AES 2009 Audio Myths Workshop
Post by: Woodinville on 2010-03-23 20:11:51
I propose that we lump all 'distortions, noise, etc' under one roof, that of "phase uncompensated, frequency shaping uncompensated, mean-square-error".

And if that's good to 120dB, then I really don't give two (*&(&*.
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-23 20:26:50
I said that there was zero added noise.

Why obsess over 0.00001 dB increases in a noise floor?


Arny.....have you been doing 32 bit floating point math on an Intel chip again?

Title: AES 2009 Audio Myths Workshop
Post by: googlebot on 2010-03-23 20:43:22
Furthermore, you can do whatever you want and add as little noise as you want by simply increasing the width of the data path for the calculations.


That's true, but it's not zero, and zero is what you said.

Not true for a range of operations including the types of editing that were all that we had with magnetic tape.


Simple volume changes and mixing already add more than zero noise.

Why obsess over 0.00001 dB increases in a noise floor?


Nobody is obsessed with negligible amounts of added noise. 2bdecided just made a good point: don't feed the trolls by making false generalizations. Small is not equal to zero, and there is no need to call it zero when you are trying to fight off other people for making false generalizations.

I know we both understand the issues very well indeed - you probably even better than me  - but if you're going to say things that are simply untrue, and then write five paragraphs which don't include the words "I wrote the wrong thing" (because you're incapable of ever being wrong, even when you are), you're going to give these audiofools a field day. And probably turn HA into r.a.o in the process.
Title: AES 2009 Audio Myths Workshop
Post by: krabapple on 2010-03-23 21:24:16
dwoz, you seem very eager to take generalizations we are making and try to turn them into absolutes. Of course there's crappy hardware you can find where noise floors are going to be audible, or THD is audible, or fault X is present. The big problem is that a lot of the alleged differences between hardware evaporate when people actually test it. Furthermore, just because a difference is inaudible doesn't mean it's not measurably different either.



Thank you, Canar.  Whew, that was a lot of work!  I appreciate your patience with me.  This stuff can really get you wrapped around the axle if you're not careful!


Really, is this all you were waiting for? 

That modern hardware within certain functional classes needn't sound different, but might sound different, depending on quality of implementation? 

That it's best to determine such difference with controlled listening tests, rather than assume X and Y sound different?

That measured differences aren't always audible?





Title: AES 2009 Audio Myths Workshop
Post by: John Eppstein on 2010-03-23 22:21:55
You're making some dangerous assumptions here.

First, you're assuming that all components will always be operated strictly within their linear region which in pro audio is not always true.


Let's take a specific example - op-amps in an analog mixer.  When they are operated out of their linear region, in clipping, the result is very bad sound.  Op-amps have very low distortion up to clipping because of high feedback.  But as soon as clipping occurs, all bets are off because the clipping is so abrupt.  So clipping an op-amp is a pretty terrible error, but may go unnoticed if it only occurs for a brief instant.

Second you're assuming that all production examples of a specific part will always adhere strictly to their published spec. In reality this is NEVER the case - there is always a tolerance range. NO GIVEN PART EVER EXACTLY MATCHES THE SPEC SHEET.


Not sure where you got that one.  It wasn't from my post, as I said nothing even resembling that.  Spec sheets give a range of values, so there is literally no concept of "EXACTLY MATCHES THE SPEC SHEET".  In the absence of a failed part, parts should fall within the tolerances specified by the spec sheet, provided they are tested in the same way as the spec sheet.

Not all analog mixers are op-amp based. Many of the better ones use discrete circuitry. In fact, one of the primary reasons for the popularity of much "vintage" gear is that it does NOT contain opamps. Any gear that operates in Class A does not, by definition, use opamps because there ain't no such thing as a "Class A opamp".
Title: AES 2009 Audio Myths Workshop
Post by: John Eppstein on 2010-03-23 23:02:41
So, basically, when Arnold says "linear distortion" then says "non-linear distortion", he means the same thing?

Are you being purposely obtuse or are you really not that smart?

This isn't exactly rocket science you know!

Linear distortion is any change in the signal that is not level dependent.

Non-linear distortion IS level dependent.


Wait a minute.

Distortion is, by definition, non-linearity.

So you've got "linear non-linearity" and "non-linear non-linearity"?

If the non-linearity does not change regardless of level it's linear? Isn't that an oxymoron?

Methinks that what Arnold said was probably not exactly what Arnold meant to say.
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-23 23:26:54
I propose that we lump all 'distortions, noise, etc' under one roof, that of "phase uncompensated, frequency shaping uncompensated, mean-square-error".

And if that's good to 120dB, then I really don't give two (*&(&*.


Keeping frequency response variations 120 dB down is pretty much impossible in the analog domain.
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-23 23:43:23
I propose that we lump all 'distortions, noise, etc' under one roof, that of "phase uncompensated, frequency shaping uncompensated, mean-square-error".

And if that's good to 120dB, then I really don't give two (*&(&*.


Keeping frequency response variations 120 dB down is pretty much impossible in the analog domain.



I was way more bothered by the way he ascribed a quasi-normal distribution to deterministic and periodic causation.  my leg is sore!  let go!
Title: AES 2009 Audio Myths Workshop
Post by: googlebot on 2010-03-23 23:55:30
I'm so impressed by the words you are using. You must be very smart.
Title: AES 2009 Audio Myths Workshop
Post by: John Eppstein on 2010-03-23 23:56:27
First, you're assuming that all components will always be operated strictly within their linear region which in pro audio is not always true.


Not really. The actual assumption is that if someone wants to operate all components in a useful signal chain in their linear region, that is generally practical and feasible.

Everybody who understands gain staging understands that merely operating all components in a useful signal chain within their linear regions can take a little planning and skill.  The ability of unskilled or careless people to make big messes can't be overstated.

That's not what I was referring to. In professional audio production certain types of gear are frequently run outside their linear operating area to produce certain effects. This is most common, but by no means limited to, systems that contain electromagnetic components such as transformers and audio tape, which are frequently operated outside their linear area to produce saturation effects that include compression and euphonious harmonic distortion.

So, in fact "The actual assumption is that if someone wants to operate all components in a useful signal chain in their linear region, that is generally practical and feasible." is erroneous in the context of real world audio production - that may not give the desired effect.

Audio production is a very different matter than audio reproduction - it is best to bear the differences in mind.
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-24 00:12:15
I'm so impressed by the words you are using. You must be very smart.



nah. not even close.


If I was smart, I'd be filthy rich, and you'd be flaming one of my personal assistants instead of me.
Title: AES 2009 Audio Myths Workshop
Post by: Woodinville on 2010-03-24 00:15:54
I propose that we lump all 'distortions, noise, etc' under one roof, that of "phase uncompensated, frequency shaping uncompensated, mean-square-error".

And if that's good to 120dB, then I really don't give two (*&(&*.


Keeping frequency response variations 120 dB down is pretty much impossible in the analog domain.


True.
So, let's even give you +- .1 dB from the original. How's that?
Title: AES 2009 Audio Myths Workshop
Post by: John Eppstein on 2010-03-24 01:00:12
As I've said before, I think it is a fundamental mistake to conflate the professional audio production market with the audiophile market...the needs and goals are entirely orthogonal to each other.

Your thoughts?

If by orthogonal you mean that professional audio equipment must work well and audiophile equipment is used to pick people's pockets then I quite agree. Other than that there should NOT be any great difference between them.


Not exactly.

Consumer audio equipment ("audiophile" or not) is intended for the accurate reproduction of a prerecorded work. Well, hopefully accurate, anyway.

Professional recording equipment is intended for the euphonious creation of an audio work of art, which is not at all the same thing and may, in fact, entail different things in different circumstances.

It's rather like the difference between a camera and slide projector and an artist's palette and set of paint brushes.
Title: AES 2009 Audio Myths Workshop
Post by: John Eppstein on 2010-03-24 01:04:26
But I'm wondering.  You seem to be making a very sweeping and absolute statement here.  The way I'm reading this, you aren't saying anything about audibility or "as far as is reasonably necessary", you're saying flat-out that they are linear, period.  Is that a correct reading of your intent?

Yes. Mathematicians are in it for the sweeping and absolute statements. The math doesn't lie (I've shown you the proof) and the processor does the math correctly. The only place audibility is involved is in choosing sample rate (bandwidth) and bit resolution (S/N ratio).

But errors can creep into the math because computational systems are not perfect, and additional errors can be introduced in the conversion process, which is also not perfect.

So while you may be correct in theory, reality may differ. This is an ongoing problem when trying to discuss audio with mathematicians...
Title: AES 2009 Audio Myths Workshop
Post by: ExUser on 2010-03-24 01:09:13
Consumer audio equipment ("audiophile" or not) is intended for the accurate reproduction of a prerecorded work. Well, hopefully accurate, anyway.

Professional recording equipment is intended for the euphonious creation of an audio work of art, which is not at all the same thing and may, in fact, entail different things in different circumstances.
This is a wonderful, romantic perspective on recording equipment. If only it were true...
Title: AES 2009 Audio Myths Workshop
Post by: Light-Fire on 2010-03-24 01:16:31
...Consumer audio equipment ("audiophile" or not) is intended for the accurate reproduction of a prerecorded work. Well, hopefully accurate, anyway.

Professional recording equipment is intended for the euphonious creation of an audio work of art, which is not at all the same thing and may, in fact, entail different things in different circumstances.

It's rather like the difference between a camera and slide projector and an artist's palette and set of paint brushes.


It is more like the opposite of what you said.
Title: AES 2009 Audio Myths Workshop
Post by: John Eppstein on 2010-03-24 01:36:12
...Consumer audio equipment ("audiophile" or not) is intended for the accurate reproduction of a prerecorded work. Well, hopefully accurate, anyway.

Professional recording equipment is intended for the euphonious creation of an audio work of art, which is not at all the same thing and may, in fact, entail different things in different circumstances.

It's rather like the difference between a camera and slide projector and an artist's palette and set of paint brushes.


It is more like the opposite of what you said.

Really?

So the creation of an audio work, assembled from a number of different sources - some generated by a microphone (which does not pick up sound in an identical way to the human auditory system) capturing an acoustic source in ways that are drastically affected by microphone type and placement (no human listener listens with his ear one inch away from a spot on a speaker or drumhead, and sound quality is greatly affected by the mic's location relative to stringed or wind instruments at a distance), and some of which may be purely electronically generated - all of which are then subjected to various forms of electronic processing before being mixed together and re-recorded according to the aesthetic judgment of the engineer and producer: in what way does this have anything to do with the accurate reproduction of an original audio source? It's a TOTALLY ARTIFICIAL PERFORMANCE - there is no original source.

If you're recording a classical orchestra, yes, you have an acoustic original, which is why I said audio production entails different things in different circumstances. But even in that case, what you get recorded is not identical to the original performance and never will be, it is still an artistic representation. In this case, perhaps closer to a photograph than an oil painting, but a representation nonetheless.
Title: AES 2009 Audio Myths Workshop
Post by: Woodinville on 2010-03-24 02:17:31
Consumer audio equipment ("audiophile" or not) is intended for the accurate reproduction of a prerecorded work. Well, hopefully accurate, anyway.

Professional recording equipment is intended for the euphonious creation of an audio work of art, which is not at all the same thing and may, in fact, entail different things in different circumstances.
This is a wonderful, romantic perspective on recording equipment. If only it were true...


Uh, that's what it is SUPPOSED to be, Canar.

Not all horses win the race, of course.
Title: AES 2009 Audio Myths Workshop
Post by: andy_c on 2010-03-24 04:10:16
Not all analog mixers are op-amp based. Many of the better ones use discrete circuitry. In fact, one of the primary reasons for the popularity of much "vintage" gear is that it does NOT contain opamps. Any gear that operates in Class A does not, by definition, use opamps because there ain't no such thing as a "Class A opamp".


An op-amp is a design topology and does not in any way imply an IC.  There are discrete op-amps as well as IC ones.  Solid-state feedback power amplifiers use the op-amp topology.  Nobody makes a class A IC op-amp because of power dissipation considerations.  For the discrete case, many class A op-amps exist.  And by putting a DC constant-current load at an IC op-amp output, one can easily make it class A up to a current limit.
Title: AES 2009 Audio Myths Workshop
Post by: andy_c on 2010-03-24 04:23:08
That's not what I was referring to. In professional audio production certain types of gear are frequently run outside their linear operating area to produce certain effects. This is most common, but by no means limited to, systems that contain electromagnetic components such as transformers and audio tape, which are frequently operated outside their linear area to produce saturation effects that include compression and euphonious harmonic distortion.


The discussion here is supposedly about the "stacking" portion of Ethan's video, in which he's referring to mixers only.  Clipping the summing portion of a mixer should not be a normal state of affairs.
Title: AES 2009 Audio Myths Workshop
Post by: John Eppstein on 2010-03-24 05:52:40
I don't understand what you're after, which kind of devices? Especially in the pro audio field, an all digital data path is nothing extra-ordinary.

If by "pro audio" you mean professional recording of commercial releases, I would beg to differ with you - it is absolutely extra-ordinary. Virtually all productions of this category have at least some analog devices in the signal chain. The only ones that don't are semi-professional dance music productions and soundtracks for commercials that use exclusively virtual instruments and do not feature vocal performances.

In fact, in any case where part or all of the original performance is acoustical in nature (vocals, non-virtual instruments, etc.) an all digital data path is an impossibility.
Title: AES 2009 Audio Myths Workshop
Post by: John Eppstein on 2010-03-24 06:23:41
Thus, while the idealized console/DAW itself is "perfect", there is NO perfect idealized model of one of the primary, key components, the A/D converter.
There are several. The big question is: which one? http://en.wikipedia.org/wiki/Analog-to-dig...#ADC_structures (http://en.wikipedia.org/wiki/Analog-to-digital_converter#ADC_structures)

Ok, I read over your Wiki layman's reference, but I don't see anything there that would lead one to believe that any existing ADC is, in fact, perfect. In fact, I saw things that would lead me to believe the opposite. ADCs incorporate matrices of resistors and/or capacitors to produce the conversion; they are also driven by some sort of physical clock. Given that no resistor, capacitor, or electronic clock known to man is in fact perfect, it would follow that devices incorporating these components are also not perfect. If this is not the case, please enlighten me as to why?

As far as I know, nothing is perfect - and it's the only thing that is!
Title: AES 2009 Audio Myths Workshop
Post by: John Eppstein on 2010-03-24 06:36:01
Thus, while the idealized console/DAW itself is "perfect", there is NO perfect idealized model of one of the primary, key components, the A/D converter.
There are several. The big question is: which one? http://en.wikipedia.org/wiki/Analog-to-dig...#ADC_structures (http://en.wikipedia.org/wiki/Analog-to-digital_converter#ADC_structures)


Every contemporary high-performance audio converter chip that I know of is Sigma-Delta.  Several variations on the basic theme exist.


As far as I know, DSD converters are not Sigma-Delta. Somebody correct me if I'm wrong........
Title: AES 2009 Audio Myths Workshop
Post by: John Eppstein on 2010-03-24 06:46:00
I think I may now understand the context that you are discussing this in.  It doesn't happen to match my personal context, but I think I understand the mapping a bit better!

Hopefully your personal context doesn't stray too far away from TOS #8 (http://www.hydrogenaudio.org/forums/index.php?showtopic=3974), otherwise this forum requires that you keep it squarely to yourself.

...and yes, there are things in the Audio Myths Workshop video that do not fulfill TOS #8.


Which is why it is so utterly, infuriatingly frustrating to attempt to carry on any rational discussion of this topic on this site, and why the discussion is per se biased in Ethan's favor - you allow him to present HIS "illegal" arguments, but the opposition is not allowed to reply in kind. Not fair. No disrespect intended, but I'm tearing my hair out here!
Title: AES 2009 Audio Myths Workshop
Post by: John Eppstein on 2010-03-24 07:06:30
D) pretty much all modern equipment has published specs that are testably well-below audibility for distortion, euphonic or otherwise;
This is the falsest part of your logic.

How so? Please elucidate.
Title: AES 2009 Audio Myths Workshop
Post by: John Eppstein on 2010-03-24 07:17:36
first...how can I tell what the snare "should" look like?  Because I can monitor the snare both out in the room and through console monitoring before it hits the converters.  I can evaluate whether what came from the mic is usable and desirable, and gauge its general fidelity and/or acceptability (two different things!).


You did not answer how you know what the waveform should look like except from what you have heard. You have either overdriven your ADC, the ADC sucks badly, or you are one of a kind. Level-matched, double-blind comparisons of halfway decent ADC/DAC combos vs. straight wires usually yield only one result: inability to differentiate.

We are both just two random, anonymous guys on the internet. If I knew you in person I would challenge you for $1000 that you're not that one of a kind. I know, you are probably sure that everything is decent and set up correctly. But I have been there, too - and I swallowed the pill. The brain is a master at changing actual perceptions by context.


Ah, the key word - USUALLY. If what you're attempting to claim were in fact true, the word would not be "usually", it would be "ALWAYS".

And it isn't.

Because some listeners can in fact differentiate some equipment, and it only takes one listener who can consistently differentiate to prove the point that there is an audible difference.

It's not a question of statistics.
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-24 11:35:09



Linear distortion is any change in the signal that is not level dependent.




This disagrees with the formal definition of linear distortion.

Linear distortion is distortion that does not add any new frequencies to the signal. FM distortion is nonlinear distortion, but its effects on the signal can be level-independent.

Quote
Non-linear distortion IS level dependent.


Again false, for the reason I just gave.

Quote
Distortion is, by definition, non-linearity.


False again.

As Wikipedia says: "A distortion is the alteration of the original shape (or other characteristic) of an object, image, sound, waveform or other form of information or representation."  This is as opposed to simply making it larger. I guess it is ironic or maybe a truism that most distortion is an undesirable by-product of changing the size of signals.

Thus there is properly such a thing as linear distortion. A linear distortion changes a signal's shape, but does not add any new frequency components to it.

The difference between linear signal processing and nonlinear signal processing is whether or not new frequencies are added to the signal.

Of course, to understand the implications of adding new frequencies to signals, it greatly helps to understand that signals themselves are composed of one or more frequencies. I think many regulars here get this, but some of our visitors don't.

Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-24 11:48:57

Consumer audio equipment ("audiophile" or not) is intended for the accurate reproduction of a prerecorded work. Well, hopefully accurate, anyway.

Professional recording equipment is intended for the euphonious creation of an audio work of art, which is not at all the same thing and may, in fact, entail different things in different circumstances.
This is a wonderful, romantic perspective on recording equipment. If only it were true...


Uh, that's what it is SUPPOSED to be, Canar.

Not all horses win the race, of course.


I agree with the spirit of JE's comment, but things can get a little complex in the actual execution.

For example, a somewhat knowledgeable but still naive person might think that an accurate loudspeaker is one that is not only free of audible nonlinear distortion, but has flat response, linear phase, and omnidirectional dispersion.

In fact, we know enough about speakers that approach this purported *ideal* - free of audible nonlinear distortion, with flat response, linear phase, and omnidirectional dispersion - to know that such a speaker is actually pretty nasty to listen to in virtually any real-world listening room.

We furthermore know that freedom from audible nonlinear distortion and linear phase are very hard to achieve, but are generally really good ideas. We also know that, within bounds, linear phase is of far lesser importance.

The observed nasty sound from the purported ideal speaker is due to the way that flat response and omnidirectional dispersion interact with just about any room but an anechoic chamber.

In fact, the easiest speakers to get along with have carefully shaped dispersion. Furthermore, as you move out into the reverberant field, flat response is also pretty nasty sounding. It is especially bad in very large rooms.

Earl Geddes says that we are going to be forced to listen to speakers with audible nonlinear distortion for quite some time, so it is of the essence to learn how to manage it.

The bottom line is that simple definitions of accuracy may themselves be inaccurate.



Title: AES 2009 Audio Myths Workshop
Post by: ExUser on 2010-03-24 11:50:28
Uh, that's what it is SUPPOSED to be, Canar.

Not all horses win the race, of course.
Pfeh, I misread it.
Title: AES 2009 Audio Myths Workshop
Post by: ExUser on 2010-03-24 11:53:05
Thus, while the idealized console/DAW itself is "perfect", there is NO perfect idealized model of one of the primary, key components, the A/D converter.
There are several. The big question is: which one? http://en.wikipedia.org/wiki/Analog-to-dig...#ADC_structures (http://en.wikipedia.org/wiki/Analog-to-digital_converter#ADC_structures)
Ok, I read over your Wiki layman's reference but I don't see anything there that would lead one to believe that any existing ADC is, in fact, perfect.
A thing does not have to be perfect to form a perfect idealized model of it. Science is noisy and full of error. However, generally the error behaves according to some model. It is not difficult, for example, to measure the spectrum of the noise floor of a given ADC.
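As a sketch of that last point: given a captured test signal, you can estimate a converter's noise-floor spectrum by averaging windowed periodograms (Welch's method). Everything below is made up for illustration - a simulated "capture" with an assumed -100 dBFS white noise floor, not a measurement of any real ADC.

```python
import numpy as np

fs = 48000
rng = np.random.default_rng(0)
t = np.arange(4 * fs) / fs
noise_rms = 10 ** (-100 / 20)  # assumed -100 dBFS converter self-noise
capture = 0.5 * np.sin(2 * np.pi * 997 * t) + rng.normal(0.0, noise_rms, t.size)

def welch_psd(x, fs, nperseg=8192):
    """One-sided PSD by averaging Hann-windowed periodograms (no overlap, for brevity)."""
    win = np.hanning(nperseg)
    scale = fs * (win ** 2).sum()                 # one-sided density normalization
    segs = x[: len(x) // nperseg * nperseg].reshape(-1, nperseg)
    psd = (np.abs(np.fft.rfft(segs * win, axis=1)) ** 2).mean(axis=0) / scale
    psd[1:-1] *= 2                                # fold in the negative frequencies
    return np.fft.rfftfreq(nperseg, 1 / fs), psd

f, psd = welch_psd(capture, fs)
band = (f > 2000) & (f < 20000)                   # a band well away from the test tone
floor_db = 10 * np.log10(psd[band].mean() * fs / 2)  # integrated noise estimate, dBFS
```

For white noise, the band-averaged density times the Nyquist bandwidth recovers the total noise power, so `floor_db` lands near the assumed -100 dBFS.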
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-24 12:04:03
Not all analog mixers are op-amp based.


Agreed. However some very highly respected mixers (e.g. classic Neve) have made very heavy use of op amps.

IME, aversion to op amps traces back to the little dust-up we had in the 70s about an obsolete concept called "slew rate distortion". The goal posts have moved since then, and we now throw and kick very different balls.

Quote
Many of the better ones use discrete circuitry.


Again, that's probably far more style than substance. The lowest-distortion op amps around are probably ICs. In fact, purveyors of discrete op amp replacements are not always forthcoming about how their products perform vis-a-vis the best chips.

Quote
In fact, one of the primary reasons for the popularity of much "vintage" gear is that it does NOT contain opamps.


Again, there's no logical reason for the obsession with class A amplifiers with regard to signal handling. 

We still use discrete op amps for high power levels. Most if not all modern linear (as opposed to switch-mode) power amps are basically just really big op amps.

Quote
Any gear that operates in Class A does not, by definition, use opamps because there ain't no such thing as a "Class A opamp".


Many op amps are class AB, which means that they are class A when driving high-impedance loads. Also, connecting a resistor from the output of an op amp to one of the power supply rails will force the op amp to run class A over a wider range of loads and signals.

I attribute the fascination with class A amplifiers to a linguistic oddity - class A also means "of the highest caliber" in American English.

As an aside, for a few months this fall I was the unintentional owner of an essentially new (NOS) Pass SA4e which is allegedly a class A power amp. I've listened to it and I've had it on my test bench. I compared it to a Behringer A500 which is in many ways a pretty close comparison. I kept the A500 and sold the SA4e.
Title: AES 2009 Audio Myths Workshop
Post by: 2Bdecided on 2010-03-24 12:17:59
This disagrees with the formal definition of linear distortion.
We've done this one before, haven't we?

Creating new frequencies is an effect of non-linear processing - it's not the definition.

The definition is really simple: a linear system is one that can be described by a linear equation.

A non-linear system is one that isn't linear.

http://en.wikipedia.org/wiki/Nonlinear_system (http://en.wikipedia.org/wiki/Nonlinear_system)

In audio, we're conventionally generous in the definition - the addition of uncorrelated noise doesn't count as non-linear.

We're also conventionally rather non-rigorous - there are many things you can do which are tricky or virtually impossible to write as an equation - we don't demand that someone actually writes the equation - we just accept that, whatever it is, it won't be a linear equation, so therefore the system won't be linear.
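That definition is directly testable: a system f is linear iff f(a·x1 + b·x2) = a·f(x1) + b·f(x2) for all inputs and scalars. A minimal numerical sketch (the FIR filter and the clipper are stand-ins I picked purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.normal(size=1000)
x2 = rng.normal(size=1000)
a, b = 0.7, -1.3

def superposition_error(f):
    """Worst-case deviation from f(a*x1 + b*x2) == a*f(x1) + b*f(x2)."""
    return np.abs(f(a * x1 + b * x2) - (a * f(x1) + b * f(x2))).max()

fir = lambda x: np.convolve(x, [0.25, 0.5, 0.25], mode="same")  # a linear system
clipper = lambda x: np.clip(x, -1.0, 1.0)                       # a nonlinear system

err_fir = superposition_error(fir)          # essentially zero (float rounding only)
err_clipper = superposition_error(clipper)  # clearly nonzero
```

Of course, passing the check at a handful of inputs is necessary but not sufficient - which is exactly the point above about accepting, without writing the equation, that a system won't be linear.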

Cheers,
David.
Title: AES 2009 Audio Myths Workshop
Post by: 2Bdecided on 2010-03-24 12:38:58
I'm not sure where reverb would fall into this. It might show up on a frequency response measurement. We could argue about frequency response vs impulse response - but that's a pointless argument.

I don't consider reverb effects in my four parameters because, at heart, reverb is an "external effect" that happens acoustically in enclosed spaces. Yes, it can be emulated by hardware and software devices, so you can still assess frequency response and distortion.
I was thinking about the effect you sometimes get with valve amplifiers, where the metal in the valves sings along with the music. Replace the speakers with an 8 ohm resistor, crank up the volume, put your ears (not too) close to the valves, and you can hear it. Also, if you use the amplifier normally, and tap the valves, you can hear the tapping through the speaker. These two effects together suggest to me that, under certain circumstances, some valve amps will act as little reverb chambers. I've no idea if it's audible. I suspect it could be caught by a frequency response plot, but might just fall within what most people would judge to be an "acceptable deviation" - while in practice it might not be "acceptable" because the "deviation" occurs so long after the original sound.

I wonder too if the effect mightn't be slightly more widespread. Certainly the resonant effects of various speaker materials, fed a signal via digital correction for frequency and phase response, still leave more temporal smearing at "resonant" frequencies than "dead" frequencies. You can see it on the waterfall plot. I'm not sure which category this falls into.

Quote
Quote
put lossyWAV into a stand alone box, including a slight delay which is itself very slightly varying in a random way (i.e. an inaudible amount of flutter). What measurements will characterise that black box properly?

I now realize I should have added a disclaimer in my video about lossy compression. My Audiophoolery (http://www.ethanwiner.com/audiophoolery.html) and Audiophile beliefs (http://www.ethanwiner.com/believe.html) articles, on which my video is based, mention excluding lossy compression:
Thank you for the links. I shall try to take the time to read them before responding again, but...

I hope you don't think I'm being too harsh, but this renders the whole exercise a bit meaningless for me. It's turning from "this characterises any audio component" to "this characterises any audio component, except the ones it doesn't". There's a problem: who is to decide which ones it doesn't characterise?

Or to put it another way, I spot a circular argument looming. But I need to read what you've said properly to be sure.

Cheers,
David.
Title: AES 2009 Audio Myths Workshop
Post by: pdq on 2010-03-24 12:53:17



Linear distortion is any change in the signal that is not level dependent.




This disagrees with the formal definition of linear distortion.

Linear distortion is distortion that does not add any new frequencies to the signal. FM distortion is nonlinear distortion, but its effects on the signal can be level-independent.

It seems to me that the classification of FM distortion as nonlinear is rather arbitrary.

Your other exception, half-wave rectification, is a special case where the non-linearity occurs only at zero signal level, making it a linear distortion.
Title: AES 2009 Audio Myths Workshop
Post by: 2Bdecided on 2010-03-24 12:59:25
This is why I list 2 different kinds of distortion, linear and nonlinear.
Can you post your list here textually for clarity please?

I really wonder if we can define a set of measurements which would catch every possible fault - both now, and in the future.
The answer is generally yes.
OK - noting that you said "generally", so there must be exceptions, what is this list of measurements?


Here's a practical example: put lossyWAV into a stand alone box, including a slight delay which is itself very slightly varying in a random way (i.e. an inaudible amount of flutter). What measurements will characterise that black box properly?


The random delay can be measured by the usual means for measuring FM or phase distortion. I am unfamiliar with lossyWAV. However, I reject this line of argumentation because it is an intellectual game that sheds little light on the problems we need to solve in the real world.

Quote
If we can leave the "I must be right / you must be wrong" level of argument at the door, it would be much appreciated.


Well, Doctor, cure yourself. You played that game a number of times in just this post. You made unfounded assertions.

Quote
This genuinely interests me, and it's more of a challenge than people like to admit - especially when they're arguing with audiofools who want to turn it all into black magic (which it isn't). But let's have a grown-up discussion please.


Well then leave the tricks, riddles, and unfounded assertions at the door.
You are again adopting an unhelpful and unwarranted style of discussion.

When I wrote my post, I hadn't seen that Ethan had excluded lossy codecs. Given the nature of this discussion forum, and the topic we're now discussing, it was natural to assume that these "audio parameters" should go some way to describing lossy codecs (and anyway, they do!).

My question about lossyWAV isn't "an intellectual game" - it is an audio component that needs to be quantified. In its current design (i.e. without noise shaping; without spectral processing) it is likely that any objective measurements are more "useful" than those you get from, say, mp3. Not entirely useful, but somewhat useful.

In any case, in my mind, what we're trying to get to (you may disagree) is some kind of set of measurements which allow us to say an audio component is "transparent". Do X Y Z measurements, check the results lie within such-and-such a range, and if so, the audio component is "transparent" to human ears under normal (e.g. non-deafening!) use.

When psychoacoustics are involved, an audio component may fail this test, and yet still be transparent. It would be useful (not essential, but useful) if, in the absence of psychoacoustics, audio components that fail the test aren't transparent (in the manner revealed by the tests). It is essential that any audio component which passes the tests is transparent.

Therefore, in answer to the question "is this audio component transparent", false negatives are OK*; false positives are not.

Further, we can only answer the question at all in-as-much as we can make all these measurements.

So it's essential to discover exactly what these measurements are, in what circumstances can we make them, and in what circumstances can we not.

The only other confounding factor I can think of (there may be others) is that there may be so many false negatives (even with traditional equipment) that such tests aren't that useful without human interpretation - the useful pass/fail answers only coming from more complex analysis.

With loudspeakers (which I fear we must exclude completely), we'll probably just find that none are transparent, and it's a tricky judgement call to know which one is "best" - especially between two models which achieve a reasonable frequency response. (I won't even dare to mention the polar response - transducers really are a special case!)

Cheers,
David.

P.S. * =  inevitable with psychoacoustics, unless you use human ear models in the tests - and then people can question the model.
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-24 13:11:31
This disagrees with the formal definition of linear distortion.
We've done this one before, haven't we?

Creating new frequencies is an effect of non-linear processing - it's not the definition.



The creation or non-creation of new frequencies is the biggest part of the definition of linear and/or nonlinear in a number of audio texts, some of which I've already given proper footnotes for.

Quote
The definition is really simple: a linear system is one that can be described by a linear equation.

A non-linear system is one that isn't linear.


The above are known as circular definitions.

A circular definition is defined as a definition that uses the word being defined as part of the definition.

Title: AES 2009 Audio Myths Workshop
Post by: googlebot on 2010-03-24 13:31:15
A definition doesn't become circular just because two sentences contain the same term.

The definition was indeed very precise. Deriving a to-be-defined term from an already well-defined one, as linear equations are, is not circular.
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-24 13:32:00
When I wrote my post, I hadn't seen that Ethan had excluded lossy codecs. Given the nature of this discussion forum, and the topic we're now discussing, it was natural to assume that these "audio parameters" should go some way to describing lossy codecs (and anyway, they do!).

My question about lossyWAV isn't "an intellectual game" - it is an audio component that needs to be quantified. In its current design (i.e. without noise shaping; without spectral processing) it is likely that any objective measurements are more "useful" than those you get from, say, mp3. Not entirely useful, but somewhat useful.

In any case, in my mind, what we're trying to get to (you may disagree) is some kind of set of measurements which allow us to say an audio component is "transparent". Do X Y Z measurements, check the results lie within such-and-such a range, and if so, the audio component is "transparent" to human ears under normal (e.g. non-deafening!) use.


That's all fine and good. However, the SOTA of audio measurements is that we have a pretty good understanding about how to characterize the sound quality of a wide range of more traditional audio components, while another range of newer kinds of audio components are far more difficult to deal with. IOW, anybody who wants to conflate lossy encoders and power amplifiers is ignoring this well-known fact.

Anybody who wants to break into every discussion of traditional audio components and burden it with the new (actually now about 20 years old) problems related to lossy perceptual coding components does so at their own risk. If they make their problems into problems for everybody, then guess what the people who at least have part of their lives in some kind of order are going to do?

Quote
When psychoacoustics are involved, an audio component may fail this test, and yet still be transparent.


The most common situation is the reverse. A component that does perceptually-justified lossy coding will often measure well in accordance with traditional measurements. It may still sound pretty bad.

One of the big problems is that many people don't know what the actual performance requirements are for traditional components. And they don't know how actual commercial products stack up by those measures. HA has been swimming in that in depth for several days.

One rule of thumb is that perceptual components have to first pass the usual bank of traditional measurements.

We have had similar but lesser problems with loudspeakers. For years people said "How can we stand to listen to loudspeakers that measure so badly, when we have amplifiers we hate that measure so much better?" Then I invented ABX and we found out how much prejudice and misinformation were affecting the general perceptions of audio amplifier performance.

Of course, when people don't keep their audio knowledge up to date and say all sorts of unusual things about simple stuff like linear and non-linear, we aren't going to get anywhere. That was all settled over 30 years ago. I documented it here in the past day. Who read my references?

Title: AES 2009 Audio Myths Workshop
Post by: Notat on 2010-03-24 13:35:04
The above are known as circular definitions.

A circular definition is defined as a definition that uses the word being defined as part of the definition.

It is only circular if tracking it down leads you in a circle. In this case, it does not. If you look up "linear equation" you get a mathematical definition (http://en.wikipedia.org/wiki/Linear_equation). If it were circular, you'd get referred back to "linear system".
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-24 13:46:10
This disagrees with the formal definition of linear distortion.
We've done this one before, haven't we?

Creating new frequencies is an effect of non-linear processing - it's not the definition.

The definition is really simple: a linear system is one that can be described by a linear equation.

A non-linear system is one that isn't linear.

http://en.wikipedia.org/wiki/Nonlinear_system (http://en.wikipedia.org/wiki/Nonlinear_system)


The actual source says far more than has been quoted above:

"In mathematics, a nonlinear system is a system which is not linear, that is, a system which does not satisfy the superposition principle, or whose output is not proportional to its input."

Quoting the above sentence in just 7 words, some paraphrased, does not seem to correspond to the highest standards of intellectual practice. :-(

The definition offered as an alternative, being that a nonlinear system is one that creates new frequencies, is at least consistent with the complete sentence from the allegedly cited source.
Title: AES 2009 Audio Myths Workshop
Post by: googlebot on 2010-03-24 13:54:57
I hope this does not get deleted for being off-topic or ad hominem. That is not my intention. I think it is justified, when a thread gets flooded with massive amounts of partly very well-founded (which is probably why they aren't deleted) but increasingly rhetorical posts, to comment on that.

Arnold, I have unidirectionally known you for a couple of years. And I always enjoyed how you were able to fight for objectivity even when surrounded by the worst nuts one can think of. And often you have won, and it was great. But I'm really somewhat saddened to see what you are doing here. You don't bring the issue forward anymore, but post large amounts of word play, nit-picking, and rhetorical generalizations. That's totally unnecessary and doesn't present you in the light you should be showing in.

Your experience and merit for the cause are unquestioned. I would love to see you list the parameters you think are relevant to transparency, each with a short description, in a compact post; then we can discuss that and leave the rest aside.
Title: AES 2009 Audio Myths Workshop
Post by: 2Bdecided on 2010-03-24 13:56:58
That's all fine and good. However, the SOTA of audio measurements is that we have a pretty good understanding about how to characterize the sound quality of a wide range of more traditional audio components, while another range of newer kinds of audio components are far more difficult to deal with. IOW, anybody who wants to conflate lossy encoders and power amplifiers is ignoring this well-known fact.
Either these measurements can guarantee an audio component is transparent, or they can't. You don't get to choose which components they apply to, and which they don't - otherwise you're saying "I know component X doesn't have any distortion of type Y, so I'm not even going to measure it" - I can think of examples where you'd be on fairly safe ground, but as a general principle, can't you see how ridiculous this is?

Quote
Anybody who wants to break into every discussion of traditional audio components and burden it with the new (actually now about 20 years old) problems related to lossy perceptual coding components does so at their own risk. If they make their problems into problems for everybody, then guess what the people who at least have part of their lives in some kind of order are going to do?
I don't know, Arny - what are you going to do? Keep pretending you have a set of measurements up your sleeve which will guarantee transparency exactly and only when you say they do?

Quote
Quote
When psychoacoustics are involved, an audio component may fail this test, and yet still be transparent.
The most common situation is the reverse. A component that does perceptually-justified lossy coding will often measure well in accordance with traditional measurements. It may still sound pretty bad.
The frequency response should be fine, but the Belcher intermodulation test will usually reveal how much junk it's adding about 30dB down - even though it's inaudible junk.

Quote
One rule of thumb is that perceptual components have to first pass the usual bank of traditional measurements.
You can't say that - you haven't defined this "usual bank" of measurements yet.

I can't believe you have the fundamental misunderstanding that psychoacoustic based codecs pass all traditional objective measurements. They add bucket loads of noise/distortion, and fairly basic tests (e.g. subtracting the input from the output and looking at the residual) can reveal this.

Cheers,
David.
Title: AES 2009 Audio Myths Workshop
Post by: 2Bdecided on 2010-03-24 13:58:08
I would love to see you list the parameters you think are relevant to transparency, each with a short description, in a compact post; then we can discuss that and leave the rest aside.
+1
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-24 13:58:15
It seems to me that the classification of FM distortion as nonlinear is rather arbitrary.


Prove that it is linear.

Using the Wikipedia definition:

"In mathematics, a nonlinear system is a system which is not linear, that is, a system which does not satisfy the superposition principle, or whose output is not proportional to its input."

FM distortion does not satisfy the superposition principle, and the output of an FM modulation process is not proportional to its input. For example, if you double the amplitude of the modulating waveform, the output waveform does not double in amplitude. The same is true for amplitude modulation distortion.

Quote
Your other exception, half-wave rectification, is a special case where the non-linearity occurs only at zero signal level, making it a linear distortion.


Same 2 problems. Half-wave rectification does not satisfy the superposition principle, and its output is not proportional to its input for negative inputs, that is, inputs that drop below the zero line.

Title: AES 2009 Audio Myths Workshop
Post by: Speedskater on 2010-03-24 14:19:32
Bob Pease sometimes writes about the vacuum tube operational amplifiers (the term "op-amp" came later) at Philbrick, and how in 1958 they designed a transistor operational amplifier.
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-24 14:27:13
That's all fine and good. However, the SOTA of audio measurements is that we have a pretty good understanding about how to characterize the sound quality of a wide range of more traditional audio components, while another range of newer kinds of audio components are far more difficult to deal with. IOW, anybody who wants to conflate lossy encoders and power amplifiers is ignoring this well-known fact.


Either these measurements can guarantee an audio component is transparent, or they can't.


The range of audio components has gotten to be too large for it to be practical to demand that degree of universality at this time. Perceptual coders are orders of magnitude more complex than power amplifiers. There didn't used to be any perceptual coders at all.

Quote
You don't get to choose which components they apply to, and which they don't


I foresee a day when we have one-size-fits-all audio measurements that are so easy to use that we just use something incredibly complex (by 2010 standards) for even simple tasks. That day is still coming. We've got work to do today!

Quote
otherwise you're saying "I know component X doesn't have any distortion of type Y, so I'm not even going to measure it"


Yes I am, but only justified by practical limitations that are likely to change, just not real soon.

Quote
Quote

Anybody who wants to break into every discussion of traditional audio components and burden it with the new (actually now about 20 years old) problems related to lossy perceptual coding components does so at their own risk. If they make their problems into problems for everybody, then guess what the people who at least have part of their lives in some kind of order are going to do?


I don't know, Arny - what are you going to do? Keep pretending you have a set of measurements up your sleeve which will guarantee transparency exactly and only when you say they do?


They are not just up my sleeve. I used to have them posted on my web site, but someone rolled them up into a little program that just about anybody can freely download and use.  I'm trying to tell you what they are and what their domain is.

Quote
Quote

Quote

Quote

When psychoacoustics are involved, an audio component may fail this test, and yet still be transparent.

The most common situation is the reverse. A component that does perceptually-justified lossy coding will often measure well in accordance with traditional measurements. It may still sound pretty bad.


One rule of thumb is that perceptual components have to first pass the usual bank of traditional measurements.


You can't say that - you haven't defined this "usual bank" of measurements yet.


I have. Two words: Audio Rightmark.


<remainder of post unanswered due to arbitrary conference limits on quoting>

Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-24 14:34:08
Bob Pease sometimes writes about the vacuum tube operational amplifiers (the term "op-amp" came later) at Philbrick, and how in 1958 they designed a transistor operational amplifier.


I've seen that Philbrick tubed equipment in actual use. I maintained military radars that were based on electromechanical and tubed analog computers, but the parts were custom made for the purpose.

The first book I ever read about analog computers was full of all sorts of information about Philbrick tubed op amps and how to use them in analog computers. There are/have been schematics of them on the web.

Like many things, the SS upgrades were far more useful. Incredibly higher performance and far more reliable. I cut my teeth on programming analog computers that were the first and second generation discrete SS versions.

A typical discrete SS op amp was a circuit board about 5 inches wide and maybe 10 inches long.
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-24 14:37:08
I would love to see you list the parameters you think are relevant to transparency, each with a short description, in a compact post; then we can discuss that and leave the rest aside.
+1



I did it in the past few days and even added detailed descriptions for them:

Starting over again...

(1) Linear distortion

(2) Nonlinear distortion

(3) Random noise

(4) Deterministic interfering signals
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-24 14:59:57
I'm pretty much with 2BDecided here.

In an absolute sense a component or protocol has a particular transfer function (or set of transfer functions) between input and output.  There exists an objective measure of what that is.  There's no reason to qualify that, beyond perhaps applying a proper scale to the measurement suite, to appropriately show relevant detail.

Naturally, things like lossy perceptual encoding will drop into their own class, compared to things like mic preamps or line amps or turntables or ADCs. 

The measurement doesn't DEFINE the audibility, it provides a means to QUANTIFY audibility. Just because we need to apply a different interpretation for things like perceptual encoding doesn't mean we don't use the same measurements.

For example, the lossy encoder will "measure" poorly, but audition very well. That tells me something very interesting, which is that the lossy-encoded program is appropriate for end-listener distribution, but perhaps less appropriate as the intermediate artifact in a music master production (on a case-by-case basis). At the very least, using an mp3 in a mix will require some compensations compared to its lossless counterpart.

Of course, this COMPLETELY ignores the "needs" of the marketing and sales department.
Title: AES 2009 Audio Myths Workshop
Post by: 2Bdecided on 2010-03-24 16:19:21
I've snipped the deeply embedded quotes - you'll have to read back to get context. Basically Arny claiming that lossy codecs pass all traditional measurements, me saying they don't, but it's a silly argument because Arny hasn't defined what "traditional measurements" he's talking about, then...

I have. Two words: Audio Rightmark.

Audio Rightmark certainly does include measurements that reveal the noise/distortion introduced by mp3 encoders.

See here for mp3 vs wav:

http://www.jensign.com/RMAA/ZenXtra/Comparison.htm (http://www.jensign.com/RMAA/ZenXtra/Comparison.htm)

Scroll down to THD and IMD graphs - quite revealing.

Cheers,
David.
Title: AES 2009 Audio Myths Workshop
Post by: 2Bdecided on 2010-03-24 16:31:36
Either these measurements can guarantee an audio component is transparent, or they can't.
The range of audio components has gotten to be too large for it to be practical to demand that degree of universality at this time. Perceptual coders are orders of magnitude more complex than power amplifiers. There didn't used to be any perceptual coders at all.

Quote
You don't get to choose which components they apply to, and which they don't
I foresee a day when we have one-size-fits all audio measurements that are so easy to use that we just use something incredibly complex (by 2010 standards) for even simple tasks. That day is still coming. We've got work to do today!

Quote
otherwise you're saying "I know component X doesn't have any of distortion type Y, so I'm not even going to measure it"
Yes I am, but only justified by practical limitations that are likely to change, just not real soon.

OK, but lossy codecs don't fall into this category - it's far too easy to measure their faults, not too difficult. See my previous post - in answer to the "is it transparent?" question, false negatives are OK - it's false positives that, I would hope, you can avoid.

Are you telling me that you can't propose a set of measurements, together with a set of "pass/fail" criteria, that guarantee transparency for any arbitrary audio component that can be measured?

You see, I'd have thought that was possible.

In fact, I'd have thought that it was trivial. Let me help:

Take the time domain signal. Subtract the input from the output. The result must be less than -120dB wrt full signal level.

There you go. No false positives. A few false negatives.
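David's proposed pass/fail criterion is easy to state in code. A minimal sketch (the helper names are mine, full scale is assumed to be 1.0, and a real measurement would band-limit and time-align the signals first):

```python
import math

def residual_db(reference, output):
    """Null test: subtract input from output and report the peak residual
    in dB relative to full scale (assumed to be 1.0 here)."""
    peak = max(abs(r - o) for r, o in zip(reference, output))
    return 20 * math.log10(peak) if peak > 0 else float("-inf")

def is_transparent(reference, output, threshold_db=-120.0):
    return residual_db(reference, output) <= threshold_db

# A device that attenuates by one part in ten million passes (~ -140 dB);
# one with a mere 0.1 dB level error fails -- a false negative, as noted.
ref = [math.sin(2 * math.pi * 440 * n / 48000) for n in range(4800)]
print(is_transparent(ref, [s * (1 - 1e-7) for s in ref]))  # True
print(is_transparent(ref, [s * 0.9886 for s in ref]))      # False
```

The 0.1 dB gain error is inaudible yet fails the test, which illustrates why the criterion produces false negatives but no false positives.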

Once we start down this route, we can figure out how far we can relax the requirements, which other measurements are needed if we do, which ones interact (i.e. each individual measurement might not have a binary pass/fail, but some may work in combination), etc etc etc.

This, to my mind, is a useful approach. You might actually get somewhere.

Cheers,
David.
Title: AES 2009 Audio Myths Workshop
Post by: Ethan Winer on 2010-03-24 17:13:17
the metal in the valves sings along with the music ... tap the valves, you can hear the tapping through the speaker.

I suppose you could call that reverb, but I call it ringing because it has a single dominant frequency. Either way, it's one very good reason to avoid tubes in all audio gear.

Quote
I hope you don't think I'm being too harsh, but this renders the whole exercise a bit meaningless for me. It's turning from "this characterises any audio component" to "this characterises any audio component, except the ones it doesn't". There's a problem: who is to decide which ones it doesn't characterise?

It's not meaningless IMO. The whole point of the "four parameters" is to define what affects audio reproduction. This word is in the title of my AES Workshop (http://www.aes.org/events/127/workshops/session.cfm?ID=2127), and it's also clear in the script which I uploaded the other day and linked to in an earlier post in this thread. The script is HERE (http://www.ethanwiner.com/aes/), and the exact wording is:

Quote
The following four parameters define everything needed to assess high quality audio reproduction:

Defining what affects audio reproduction has always been the entire point of my four parameters. I go out of my way to explain in forums (again and again and again) that I don't include intentional "euphonic" distortion in the list because that's a creative tool. As is reverb. This is why some people get so upset when I claim that a $25 SoundBlaster card has higher fidelity than the finest analog tape recorder. They immediately see red, and go on about how people prefer the sound of analog tape. And tubes. And hardware having transformers. And all the rest. But subjective preference was never my point or my intent.

--Ethan
Title: AES 2009 Audio Myths Workshop
Post by: Notat on 2010-03-24 17:13:22
Arnold, I have unidirectionally known you for a couple of years. And I always enjoyed how you were able to fight for objectivity even when surrounded by the worst nuts one can think of. Often you have won, and it was great. But I'm really somewhat saddened to see what you are doing here. You don't bring the issue forward anymore, but post large amounts of word play, nit-picking, and rhetorical generalizations. That's totally unnecessary and doesn't present you in the light you should be showing in.

This should not be about winning; it should be about learning. You don't learn much if you'd rather argue than listen. I don't learn much reading tit-for-tat posts.
Title: AES 2009 Audio Myths Workshop
Post by: Notat on 2010-03-24 17:24:27
Defining what affects audio reproduction has always been the entire point of my four parameters. I go out of my way to explain in forums (again and again and again) that I don't include intentional "euphonic" distortion in the list because that's a creative tool. As is reverb. This is why some people get so upset when I claim that a $25 SoundBlaster card has higher fidelity than the finest analog tape recorder. They immediately see red, and go on about how people prefer the sound of analog tape. And tubes. And hardware having transformers. And all the rest. But subjective preference was never my point or my intent.

The merit to this approach is that it is objective and quantifiable. The weakness is that it does not take into account how people hear. By the measurements a soundblaster is better than tape. You seem to accept this result. But, by the same measurements, 256-kbit MP3 is about as bad as 12-bit audio. (http://www.jensign.com/RMAA/ZenXtra/Comparison.htm) Do you accept this result?
Title: AES 2009 Audio Myths Workshop
Post by: googlebot on 2010-03-24 17:31:22
Take the time domain signal. Subtract the input from the output. The result must be less than -120dB wrt full signal level.

There you go. No false positives. A few false negatives .


That sounds like a good approach. For every parameter you can begin by asking: what would, for example, my frequency response have to look like to accomplish that goal? It gets a little tricky once you get to the details. A flat FR within +/- n dB could do the job - but so could a much larger n, if the dip is limited to a very narrow band. So what kind of scale would be most applicable? Speaking of bands: the -120 dB criterion would have to be limited to an agreed-upon bandwidth.
Title: AES 2009 Audio Myths Workshop
Post by: Ethan Winer on 2010-03-24 18:04:17
The weakness is that it does not take into account how people hear.

That's a separate issue, and is the reason I invited JJ and Poppy Crum onto my panel. It is knowable at what level artifacts such as noise and distortion can be heard, and how much frequency response error is needed to notice. So my four parameters concept attempts to 1) catalog everything that affects reproduction, and 2) identify the amounts needed for a device to be considered transparent.

Quote
by the same measurements, 256-kbit MP3 is about as bad as 12-bit audio. (http://www.jensign.com/RMAA/ZenXtra/Comparison.htm) Do you accept this result?

I honestly don't know enough about lossy compression to have an opinion. I do know that "static" tests for the four parameters are not applicable to lossy compression because the compression changes how it behaves constantly as the music changes.

--Ethan
Title: AES 2009 Audio Myths Workshop
Post by: greynol on 2010-03-24 19:09:06
This is an ongoing problem when trying to discuss audio with mathemeticians.......
What a tired argument.  Care to substantiate with something other than an anecdote?

As far as I know, DSD converters are not Sigma-Delta. Somebody correct me if I'm wrong........
Other than the fact that DSD is a marketing term, yes, DSD is based on sigma-delta; you are wrong.

...and yes, there are things in the Audio Myths Workshop video that do not fulfill TOS #8.

Which is why it is so utterly infuriatingly frustrating to attempt to carry on any rational discussion of this topic on this site, and why the discussion is per se biased in Ethan's favor - you allow him to present HIS "illegal" arguments, but the opposition is not allowed to reply in kind. Not fair. No disrespect intended, but I'm tearing my hair out here!
Please quote one single thing on this forum that Ethan has posted *here* that violates TOS #8.

Because some listeners can in fact differentiate some equipment
Really?  Show me!
Title: AES 2009 Audio Myths Workshop
Post by: Woodinville on 2010-03-24 19:09:53
I honestly don't know enough about lossy compression to have an opinion. I do know that "static" tests for the four parameters are not applicable to lossy compression because the compression changes how it behaves constantly as the music changes.

--Ethan


Static tests that don't take into account both the short-term signal and error spectra don't mean squat, of course.
Title: AES 2009 Audio Myths Workshop
Post by: Ethan Winer on 2010-03-24 20:07:01
Which is why it is so utterly infuriatingly frustrating ... I'm tearing my hair out here!

I think the level of frustration in JE's post is very telling. I honestly don't understand why people get so emotional about this stuff. It's just audio!

I hope the above doesn't violate a TOS. It certainly seems relevant to me.

--Ethan
Title: AES 2009 Audio Myths Workshop
Post by: aclo on 2010-03-24 20:11:14
A definition doesn't become circular just because two sentences contain the same term.

The definition was indeed very precise. Deriving a to be defined term from an already well defined one, as linear equations are, is not circular.


"Linear phenomena are those described by linear equations" is indeed precise. However, as a definition it is useless by itself: whether a description of a phenomenon is linear or not is a property of the description, not the phenomenon.

There are lots of examples. A random one is Burgers' equation, a nonlinear partial differential equation for a simple model of shock waves in viscous flows. Since the equation is nonlinear, according to this definition the phenomenon is nonlinear. But a simple transformation of the variables gives a linear equation (which can immediately be solved, thus giving the solution to the original equation); so is the phenomenon, occurring somewhere in the world, changing its properties because I decided to transform a variable?

And the opposite is of course trivial to do: if you have some linear equation for some variable p, say K*p=0 (with K some integral, differential or whatever operator not depending on p), I can obviously transform to eg t=exp(p) whence my equation for t is immediately nonlinear!
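For the record, the transformation alluded to for Burgers' equation is the Cole-Hopf substitution, a standard textbook result, sketched here for reference:

```latex
% Viscous Burgers' equation -- nonlinear in u:
u_t + u\,u_x = \nu\,u_{xx}
% The Cole-Hopf substitution
u = -2\nu\,\frac{\phi_x}{\phi}
% turns it into the linear heat equation for \phi, solvable directly:
\phi_t = \nu\,\phi_{xx}
```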

I realise that this is pedantic, but so is most of the rest of this thread (and dominated by egos, too). So hey!

But it's obvious that everybody is saying the same thing, ie, linear in terms of the input and output signals, and not some arbitrary functions of them. I was merely pointing out, in my humble way, that trying to sound sophisticated can backfire if one doesn't know what one is talking about (consider also the quasinormal distribution mentioned earlier!)

Actually the whole discussion is a series of pissing matches and word games. Fun to watch! Keep it up!
Title: AES 2009 Audio Myths Workshop
Post by: Woodinville on 2010-03-24 20:13:29
I honestly don't understand why people get so emotional about this stuff. It's just audio!


Because, right or wrong, they do it for a living and they care?
Title: AES 2009 Audio Myths Workshop
Post by: aclo on 2010-03-24 20:22:03
Take the time domain signal. Subtract the input from the output. The result must be less than -120dB wrt full signal level.

There you go. No false positives. A few false negatives .


That sounds like a good approach. For every parameter you can begin by asking: what would, for example, my frequency response have to look like to accomplish that goal? It gets a little tricky once you get to the details. A flat FR within +/- n dB could do the job - but so could a much larger n, if the dip is limited to a very narrow band. So what kind of scale would be most applicable? Speaking of bands: the -120 dB criterion would have to be limited to an agreed-upon bandwidth.


But (and I think that is his point) the interesting part of doing this (ie, starting with a measurement that guarantees transparency and removing things while keeping it transparent) is to determine which new measurements, apart from things like FR etc, will help. Because, as has been mentioned, things like perceptual encoders measure much worse than they sound (even though I understand this intellectually, the first time I subtracted a 256kbit mp3 from the original and listened to the difference my jaw dropped).

Title: AES 2009 Audio Myths Workshop
Post by: Notat on 2010-03-24 20:30:46
The weakness is that it does not take into account how people hear.

That's a separate issue, and is the reason I invited JJ and Poppy Crum onto my panel. It is knowable at what level artifacts such as noise and distortion can be heard, and how much frequency response error is needed to notice. So my four parameters concept attempts to 1) catalog everything that affects reproduction, and 2) identify the amounts needed for a device to be considered transparent.

That's a reasonable approach to take but you need to realize the implications of the limitation. Using this approach, it is definitely not fair to say that a soundblaster "sounds better" than analog tape. You can say that it is "more accurate" or that it "measures better" or is more "transparent". You'll have to agree on a definition of "high-fidelity" before you can make any claims around that or chastise others for their claims. You have to be careful with your language, because, as I'm sure you're aware, and despite what they may tell you, accuracy is not what everyone considers to be the most important characteristic in a reproduction system.
Title: AES 2009 Audio Myths Workshop
Post by: aclo on 2010-03-24 20:31:20
I honestly don't understand why people get so emotional about this stuff. It's just audio!


Because, right or wrong, they do it for a living and they care?


Yes, but look at this, that was mentioned earlier (not by you!):
Quote
But errors can creep into the math because computational systems are not perfect and additional errors can be intriduced in the conversion process, which is also not perfect.

So while you may be correct in theory reality may differ. This is an ongoing problem when trying to discuss audio with mathemeticians.......


OK. Now I'm sort of a mathematician. This here basically is a statement that "we" have never actually thought about this kind of thing (hello? my main problem with writing programs to do numerics is that I have to wade through chapter upon chapter of this stuff in any book I open).

So shall I get emotional about this and start nitpicking what a "mathematician" is, what percentage of them have thought about these things, what percentage could have but did not, whether this is "ongoing" or not, then move on to what I think of mixing engineers' opinion of the stuff I do 16 hours a day etc? Because that is the level at which this discussion is being conducted: silly and pointless arguments about things being "linear" or not, word games as to whether or not one can perform n measurements that completely characterise x device, claims that y is instantly distinguishable from z without even concentrating etc etc.

It's a shame because it actually is interesting and nontrivial (and this is the direction Robinson is trying to take it, but I fear it's not going to happen now).
Title: AES 2009 Audio Myths Workshop
Post by: greynol on 2010-03-24 20:42:56
Actually the whole discussion is a series of pissing matches and word games. Fun to watch! Keep it up!

Uh, no.

EDIT: For those who might not understand what I'm getting at, pissing matches and word games, while maybe fun to watch, are not to be kept up.
Title: AES 2009 Audio Myths Workshop
Post by: Axon on 2010-03-24 20:52:27
Blah. Still haven't read any of this. Trying to catch up in my copious free time.
Title: AES 2009 Audio Myths Workshop
Post by: Ethan Winer on 2010-03-24 21:35:52
Using this approach, it is definitely not fair to say that a soundblaster "sounds better" than analog tape.

Of course, and I never say that! My standard comment is along the lines of: A $25 SoundBlaster card has higher fidelity than the finest analog recorder in every way one could possibly assess "fidelity."

And when those on the other side question how "fidelity" is defined, wishing it were how they want it to be, I send them to Wikipedia which explains it very nicely:

Quote
Similarly in electronics, fidelity refers to the correspondence of the output signal to the input signal, rather than sound.


--Ethan
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-24 22:21:40
I've snipped the deeply embedded quotes - you'll have to read back to get context. Basically Arny claiming that lossy codecs pass all traditional measurements, me saying they don't, but it's a silly argument because Arny hasn't defined what "traditional measurements" he's talking about, then...

I have. Two words: Audio Rightmark.

Audio Rightmark certainly does include measurements that reveal the noise/distortion introduced by mp3 encoders.

See here for mp3 vs wav:

http://www.jensign.com/RMAA/ZenXtra/Comparison.htm (http://www.jensign.com/RMAA/ZenXtra/Comparison.htm)

Scroll down to THD and IMD graphs - quite revealing.


Revealing of what?

I see nothing that worries me.
Title: AES 2009 Audio Myths Workshop
Post by: John Eppstein on 2010-03-24 22:56:01
For the most part, high fidelity is currently pretty much all about rooms and transducers. For recording the transducers are of course microphones, and for playback the transducers are loudspeakers, headphones, and earphones, with the latter of course obviating concerns about rooms.

Through measurement, you do find that the majority of the infidelity is where you say, in transducers and acoustics.


So far so good.

Quote
To say that problems in the electronics are insignificant in comparison is like saying a 3 kHz tone is insignificant in comparison to higher level background noise.


I don't get that at all. The harmonics and IM products that are created by transducers are basically the same as those generated by electronics except that the transducers make far more of them and start creating them at far lower levels.

Quote
We are are very good at hearing past acoustics


So far so good.

Quote
and through transducer imperfections.


My experiences say not at all.

Quote
In many cases these effects/imperfections are euphonic.


My experiences say not at all.

Quote
It is not insane for recording engineers and audio enthusiasts to pay attention to details several orders of magnitude below what you would consider to be the primary imperfections.


I don't get that at all.

Quote
One man's imperfection is another man's character.


I'm not buying any of that, either.

What is true is that 40% nonlinearity in an organ pipe is different than 40% nonlinearity in a woofer, because an organ pipe makes only one tone at a time, while a woofer makes multiple tones at the same time. Single tone = no IM. Multiple tones = IM.

How nonlinearity in an organ pipe is different from nonlinearity in a guitar amp is demonstrated when one plays multiple notes at the same time. In the organ, multiple tones means multiple pipes, one tone per pipe. In an electric guitar, it all goes through the same woofer, and reducing nonlinearity in that woofer is of the essence. Playing just one string at a time is not uncommon on a bass guitar, which has the practical effect of reducing IM. When multiple tones are played on a bass guitar their frequencies are often far enough apart that only one of them is actually in the range of greatest nonlinearity, which also reduces IM. In your hi-fi, you can't count on any of those things happening, so having a reasonably linear woofer can be very important.
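The single-tone-versus-multiple-tones point can be shown numerically. A rough sketch (the names are mine; a memoryless second-order nonlinearity stands in for a misbehaving driver):

```python
import cmath
import math

def spectrum_bins(signal):
    """Magnitude of each DFT bin (direct O(N^2) DFT; fine for a demo)."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) / n
            for k in range(n // 2)]

def nonlinear(samples, a2=0.1):
    """Memoryless second-order nonlinearity: y = x + a2 * x^2."""
    return [s + a2 * s * s for s in samples]

N = 64
one_tone  = [math.sin(2 * math.pi * 5 * t / N) for t in range(N)]
two_tones = [math.sin(2 * math.pi * 5 * t / N) +
             math.sin(2 * math.pi * 7 * t / N) for t in range(N)]

solo = spectrum_bins(nonlinear(one_tone))
duet = spectrum_bins(nonlinear(two_tones))
# One tone in: distortion shows up only as a harmonic (bin 10).
# Two tones in: the same nonlinearity also makes intermodulation
# products at the difference (7 - 5 = 2) and sum (7 + 5 = 12) bins.
```

With one tone the nonlinearity produces only harmonics; add a second tone and the cross term creates sum and difference products that weren't in either input, which is the whole IM argument above.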

As far as so-called euphony in tubed hi-fi amps goes, it turns out that linear distortion, due to interactions between the amp's high source impedance and speaker impedance variations, is likely the most obvious audible effect.

Quote
Remember that there's art and science in what we do. Those who have an appreciation of both are going to be the most successful.


I agree with the idea that recording is both art and science, but the room for selling distortion as art goes downhill very fast on the reproduction side.

I record and listen all of the time. I'm constantly changing the linear distortion I add on the record side, but I rarely have the need to change it on the playback side.


Ah, but in the case of a pipe organ you're looking at the wrong thing. (D**n, it's a pain to watch my wording so my semantics don't get dinged by TOS 8....) In the case of a pipe organ the pipes do not function as single units; they are part of an array inside a tone cabinet. When the waveforms of the various pipes are emitted they mix in the air inside the tone cabinet (which in the case of naked pipe arrays would be the room - it's whatever acoustic space the pipes are mounted in) and the summed tones do, in fact, produce IM which is quite audible. So what happens in the speaker of a guitar amp also happens in the air of the organ tone cabinet - which, BTW, is considered (by the designer) to be part of the organ, just as the speaker is part of the guitar amp.

To look at only a single organ pipe would be similar to looking at only a single guitar string - you have to compare the entire systems.
Title: AES 2009 Audio Myths Workshop
Post by: John Eppstein on 2010-03-24 23:12:15
I'll let others who are more expert than me explain what is "left out" in lossy files. (I'll guess it's frequency response that changes dynamically.) But clearly, a delay of any type will fall under time-based errors.

--Ethan


Well, it's pretty easy to do a null test and listen to the leftover difference products. To my ear it sounds like a good part of the residue is transient information, which would jibe with the subjective reports that lossy files tend to lack depth/dimensionality or sound somehow "flatter".
Title: AES 2009 Audio Myths Workshop
Post by: John Eppstein on 2010-03-24 23:14:21
The first problem here is that the baseline for evaluating digital processing is not zero noise.


but you said there was zero noise. Which is untrue. Then posted five paragraphs to try to dig yourself out.


I said that there was zero added noise.  I can't believe you're holding me responsible for noise that came in the input terminals.


How can you say there is "zero added noise" when the dithering process deliberately adds a specific amount of noise?
Title: AES 2009 Audio Myths Workshop
Post by: John Eppstein on 2010-03-24 23:32:08
I agree with that in principle. We can call them categories of faults, or categories of errors.

One of the mistakes made by people who misunderstand Ethan's list is to equate categories of fault with measurements. The basic misapprehension these critics have is that there need only be one measurement to fully characterize a given kind of fault, which is not exactly true.

In fact you can measure common instances of all four kinds of faults with just one measurement (e.g. multitone).  The reverse is also true - it can take more than one measurement to characterize complex faults.

Quote
Even so, I'm not sure where reverb would fall into this.


If reverb is due to a linear process, and it usually is, then it is a form of linear distortion.  Reverb is usually the result of delaying the signal, possibly filtering it with a linear filter, and then linearly adding it back to itself. The delay is a special case of phase shift.

Quote
It might show up on a frequency response measurement. We could argue about frequency response vs impulse response - but that's a pointless argument.


Reverb does show up in a FR measurement, usually as some kind of comb filtering effect.
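The comb effect falls straight out of the transfer function of a single delay-and-add. A sketch (names are mine; real reverb stacks many such delays):

```python
import cmath
import math

def comb_response_db(delay_samples, gain, freq_hz, rate=48000):
    """Magnitude response of y[n] = x[n] + gain * x[n - delay]:
    H(f) = 1 + g * exp(-j*2*pi*f*D/Fs). Notches appear wherever the
    delayed copy arrives out of phase -- the comb-filter signature."""
    h = 1 + gain * cmath.exp(-2j * math.pi * freq_hz * delay_samples / rate)
    return 20 * math.log10(abs(h))

# With a 1 ms delay (48 samples at 48 kHz) the first notch sits at
# Fs / (2 * D) = 500 Hz, with peaks at the even multiples:
print(round(comb_response_db(48, 0.9, 500), 1))   # -20.0 (deep notch)
print(round(comb_response_db(48, 0.9, 1000), 1))  # 5.6 (peak)
```

The regularly spaced notches are exactly what a swept FR measurement of a reverberant path shows.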

Quote
Far more important IMO is that this list implies an oversimplification that doesn't hold in the real world - just because the effect of any "fault" falls into one of these four categories doesn't mean there are four measurements that can catch any fault.



Quote
Ethan doesn't say this of course - there are two specific measurements listed under the single category of distortion, for example.


This is why I list two different kinds of distortion, linear and nonlinear, and I further gave examples of both kinds.  There are also two kinds of nonlinear distortion - amplitude modulation distortion and frequency modulation distortion. THD and IM measure amplitude modulation nonlinear distortion, while jitter, flutter and wow measure frequency modulation nonlinear distortion.

Quote
My point is this: we generally use measurements tailored to the specific faults we expect to find - "tailored" both in terms of revealing them, and in terms of giving us data in a domain and form that makes some sense, or reveals something useful.


That is more habit and custom than necessity.  Our ability to analyze signals shot up rapidly when we started doing the analysis with computers.  If you study the more recent literature of audio measruements there have been a number of papers discussing nwere approaches. Papers by Gene Czerwinwski and Richard Cabot come quickly to mind.

Quote
I really wonder if we can define a set of measurements which would catch every possible fault - both now, and in the future.


The answer is generally yes.  Old relics like THD and IM are artifacts of the days when only very simple equipment was available to generate test signals and analyze them.

A great deal can be determined with no specific test signal at all - there is readily available software that analyzes both linear and nonlinear distortion by automatically developing linear and nonlinear models of the system under test. Mathematically, this is called identification.

Several programs measure linear transfer functions including SMAART.

The essence of Klippel's speaker distortion measurement system is the mathematical process of parameter identification: comparing observations of the operation of the real system and of a model under various test signals.  The software just tunes the parameters of the model until it works like the real system.
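To illustrate the identification idea in its simplest possible form (a noiseless FIR toy of my own devising, not Klippel's actual method): feed the unknown system a test signal, observe the output, and solve for the model parameters.

```python
def identify_fir(x, y, taps):
    """Recover an FIR model h (length `taps`) of an unknown linear system
    from its input x and output y. Assumes a noiseless system and
    x[0] != 0, then solves y[n] = sum_k h[k] * x[n-k] recursively."""
    h = []
    for n in range(taps):
        acc = sum(h[k] * x[n - k] for k in range(len(h)))
        h.append((y[n] - acc) / x[0])
    return h

# The "device under test": an unknown mild low-pass FIR filter.
true_h = [1.0, 0.5, 0.25]
x = [1.0, -0.3, 0.8, 0.2, -0.6, 0.4]  # any test signal with x[0] != 0
y = [sum(true_h[k] * x[n - k] for k in range(len(true_h)) if 0 <= n - k)
     for n in range(len(x))]

model = identify_fir(x, y, 3)  # recovers [1.0, 0.5, 0.25] up to rounding
```

Real identification systems do the same thing with noise present, nonlinear model terms, and least-squares fitting instead of exact recursion.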

Quote
Here's a practical example: put lossyWAV into a stand alone box, including a slight delay which is itself very slightly varying in a random way (i.e. an inaudible amount of flutter). What measurements will characterise that black box properly?


The random delay can be measured by the usual means for measuring FM or phase distortion. I am unfamiliar with lossyWAV.  However, I reject this line of argumentation because it is an intellectual game that sheds little light on the problems we need to solve in the real world.

Quote
If we can leave the "I must be right / you must be wrong" level of argument at the door, it would be much appreciated.


Well, Doctor, cure yourself. You played that game a number of times in just this post. You made unfounded assertions.

Quote
This genuinely interests me, and it's more of a challenge than people like to admit - especially when they're arguing with audiofools who want to turn it all into black magic (which it isn't). But let's have a grown up discussion please.


Well then leave the tricks, riddles, and unfounded assertions at the door.


A couple of questions:

1) Why are we even discussing reverb, which is an acoustic effect that only occurs electronically as a deliberately simulated special effect? I thought we were discussing parameters of gear measurement, not room acoustics?

2) What is this "nwere", as in the phrase "If you study the more recent literature of audio measruements there have been a number of papers discussing nwere approaches. Papers by Gene Czerwinwski and Richard Cabot come quickly to mind."? I was unable to find any references to this combination of letters in any scientific or acoustical context in Google, I've never heard the term before, and frankly the quoted sentences don't make any sense as written. Can we PLEASE (and this is meant with all due respect for everyone) employ our spell checkers when making technical statements? Otherwise the discussion becomes unintelligible.

If "nwere" is a legit term and not a typo can somebody provide a link?
Title: AES 2009 Audio Myths Workshop
Post by: greynol on 2010-03-24 23:36:39
The word is "newer".  With your suggestion that we check our spelling, can you please more selective in your quoting.  We don't need to re-read Arny's entire post for you to get to your point.
Title: AES 2009 Audio Myths Workshop
Post by: John Eppstein on 2010-03-24 23:47:13
The bottom line is that simple definitions of accuracy may themselves be inaccurate.

That is an extremely interesting statement. But where does it leave us?
Title: AES 2009 Audio Myths Workshop
Post by: Woodinville on 2010-03-24 23:49:06
The bottom line is that simple definitions of accuracy may themselves be inaccurate.

That is an extremely interesting statement. But where does it leave us?


Going around in circles?
Title: AES 2009 Audio Myths Workshop
Post by: John Eppstein on 2010-03-24 23:50:37
Thus, while the idealized console/DAW itself is "perfect", there is NO perfect idealized model of one of the primary, key components, the A/D converter.
There are several. The big question is: which one? http://en.wikipedia.org/wiki/Analog-to-dig...#ADC_structures (http://en.wikipedia.org/wiki/Analog-to-digital_converter#ADC_structures)
Ok, I read over your Wiki layman's reference but I don't see anything there that would lead one to believe that any existing ADC is, in fact, perfect.
A thing does not have to be perfect to form a perfect idealized model of it. Science is noisy and full of error. However, generally the error behaves according to some model. It is not difficult, for example, to measure the spectrum of the noise floor of a given ADC.


True. But such "perfect, idealized models" although they may be handy for ivory tower types and internet discussions, have no actual relevance in the real world where nothing (and only nothing) is perfect.
Title: AES 2009 Audio Myths Workshop
Post by: greynol on 2010-03-24 23:53:36
<Yawn>

It is painfully obvious that you do not have a deep enough level of understanding of this subject to provide meaningful commentary.  Shall I gag you with a sock again?
Title: AES 2009 Audio Myths Workshop
Post by: Notat on 2010-03-24 23:55:48
My standard comment is along the lines of: A $25 SoundBlaster card has higher fidelity than the finest analog recorder in every way one could possibly assess "fidelity."

And when those on the other side question how "fidelity" is defined, wishing it were how they want it to be, I send them to Wikipedia which explains it very nicely:

Nothing technically wrong with any of this but, in my opinion, you're baiting people with this approach.

Surely you recognize that one way people may want to "assess fidelity" is by how something subjectively sounds to them.

And why not just short circuit the whole trip to Wikipedia (and subsequent arguments about whether Wikipedia is an authoritative source) and use more concise terminology (e.g. "transparency", "accuracy", "correspondence of the output signal to the input signal") from the get go?
Title: AES 2009 Audio Myths Workshop
Post by: Woodinville on 2010-03-25 00:00:00
True. But such "perfect, idealized models" although they may be handy for ivory tower types and internet discussions, have no actual relevance in the real world where nothing (and only nothing) is perfect.


Well, I must point out that one can also model noise and inaccuracy.

It's not like math types haven't seen this. I mean, Werner Heisenberg did a splendid job of it, and the Schroedinger wave equation is nothing but a model of the complex function of probability, and one that works frighteningly well to the present.
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-25 00:19:08
The first problem here is that the baseline for evaluating digital processing is not zero noise.


but you said there was zero noise. Which is untrue. Then posted five paragraphs to try to dig yourself out.


I said that there was zero added noise.  I can't believe you're holding me responsible for noise that came in the input terminals.


How can you say there is "zero added noise" when the dithering process deliberately adds a specific amount of noise?


How much dithering is added once the signal is digitized? I didn't say A/D system, I said digital system. Yes, there are types of discretionary processing that, when done in the digital domain, may require additional randomization of quantization errors, but for many very useful situations, such as transmission and storage of data, no dither is added in the digital domain.
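The classic demonstration of why requantization steps do need dither can be sketched in a few lines (the names are mine; TPDF dither as JJ recommends in the video):

```python
import random

STEP = 1.0  # one LSB of the target word length

def quantize(x, rng=None):
    """Round x to the nearest LSB. With TPDF dither (difference of two
    uniform variables, peak +/- 1 LSB) added before rounding, detail below
    the LSB survives as noise instead of vanishing or becoming distortion."""
    d = (rng.random() - rng.random()) * STEP if rng else 0.0
    return round((x + d) / STEP) * STEP

rng = random.Random(42)
tiny = 0.3 * STEP  # a steady signal 0.3 LSB above zero

undithered = quantize(tiny)  # rounds to 0.0 every time: the signal is gone
dithered_avg = sum(quantize(tiny, rng) for _ in range(100_000)) / 100_000
# averages back to ~0.3: the dithered quantizer is unbiased
```

Without dither the quantizer's error is locked to the signal (here it simply deletes it); with TPDF dither the error becomes benign noise, which is the distortion-reduction argument from the linked excerpt.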
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-25 00:21:17
The bottom line is that simple definitions of accuracy may themselves be inaccurate.

That is an extremely interesting statement. But where does it leave us?


In the midst of an exciting, evolving technology.
Title: AES 2009 Audio Myths Workshop
Post by: John Eppstein on 2010-03-25 00:22:58
Not all analog mixers are op-amp based.


Agreed. However some very highly respected mixers (e.g. classic Neve) have made very heavy use of op amps.

IME, aversion to op amps traces back to the little dust-up we had in the 70s about an obsolete concept called "slew rate distortion".  The goal posts have moved since then, and we now throw and kick very different balls.

Quote
Many of the better ones use discrete circuitry.


Again, that's probably far more style than substance. The lowest-distortion op amps around are probably ICs.  In fact, purveyors of discrete op amp replacements are not always forthcoming about how their products perform vis-a-vis the best chips.

Quote
In fact, one of the primary reasons for the popularity of much "vintage" gear is that it does NOT contain opamps.


Again, there's no logical reason for the obsession with class A amplifiers with regard to signal handling. 

We still use discrete op amps for high power levels. Most if not all modern linear (as opposed to switchmode) power amps are basically just really big op amps.

Quote
Any gear that operates in Class A does not, by definition, use opamps because there ain't no such thing as a "Class A opamp".


Many op amps are class AB, which means that they are class A when driving high-impedance loads. Also, connecting a resistor from the output of an op amp to one of the power supply rails will force the op amp to run class A over a wider range of loads and signals.
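The resistor trick above can be sketched with a back-of-the-envelope calculation (hypothetical numbers, not from the post): the pull-down resistor to the negative rail must sink more current than the load can push back into the output at the most negative signal peak, so the sourcing half of the output stage never turns off.

```python
# Hypothetical example: op amp on +/-15 V rails driving a 600 ohm load
# to a 5 V peak, with a resistor from the output to the negative rail.
v_rail = 15.0      # negative rail magnitude, volts
v_peak = 5.0       # peak output swing, volts
r_load = 600.0     # load impedance, ohms

# Worst case: output at -v_peak, with the load pushing current into the
# output node.
i_sink_max = v_peak / r_load                  # ~8.3 mA from the load

# The pull-down resistor (sitting across v_rail - v_peak at that instant)
# must draw at least that much, giving the largest usable resistor value:
r_pull_max = (v_rail - v_peak) / i_sink_max
print(round(r_pull_max))                      # -> 1200 ohms
```

Any resistor at or below that value (at the cost of extra dissipation) keeps the output stage sourcing current over the whole swing, which is the sense in which it "runs class A" into that load.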

I attribute the fascination with class A amplifiers to a linguistic oddity - class A also means "of the highest caliber" in American English.

As an aside, for a few months this fall I was the unintentional owner of an essentially new (NOS) Pass SA4e which is allegedly a class A power amp. I've listened to it and I've had it on my test bench. I compared it to a Behringer A500 which is in many ways a pretty close comparison. I kept the A500 and sold the SA4e.


Actually there's an electronic reason, at least historically. Class AB and B amplifiers frequently exhibited some distortion around the crossover point in the waveform. This could be anywhere from a distinct "crossover notch" at zero crossing found in nearly all Class B designs to some very, very slight non-linearity around zero crossing in modern class AB amps. The thing is that this particular type of distortion is highly audible (being a form of nonharmonic distortion) and is much worse on low-level signals than on high-level signals, due to the fact that the amplitude of the distortion products is constant regardless of the amplitude of the signal.

Now in modern, well designed Class AB equipment this really isn't a problem any more, at least for most folks, but some people still believe that there's a slight, but audible difference in the distortion products of the different design classes. Running various examples of gear on a high quality distortion analyzer yields different distortion spectra. Whether any of the differences are attributable to this particular design factor? Well, some say yes, some say no, I say use what sounds best to you.

As to your amp, well, based on my experience with the B brand you may regret the decision when it goes up in smoke after the warranty expires. Or not. As an old service tech I don't like their build quality much. But that's just my opinion.

BTW, did you recap the power supply and do the bias adjustment on the Threshold? Because if you didn't you didn't give the amp a fair evaluation.

Other than that, I'll keep my opinions to myself on this one....
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-25 00:37:54


Ethan, I am "glad of heart" to see you pulling back your statements into more tightly scoped contexts, where they belong.  Thank you!

When we are SPECIFIC that we are talking about (primarily home) reproduction systems, not PRODUCTION systems, then I feel rather an order of magnitude more comfortable with your statements.  I think it is a great sign of humility and honesty that you have done this, and I applaud you!

Arny, I have a new opinion of you as well...I originally found you, speaking frankly, to be a pandering, equivocating sophist.  But I realize that is an entirely uninformed and ignorant position, which I resoundingly refute and disclaim.  You're the kind of guy that goes into a sword fight with a smile on your face, and an eye to who is to buy the first round of pints.  Nevermind that your opponents often bear marks and bruises below the belt, you're a scrappy one, and you KNOW how to turn the language in your favor.

I am personally a pedant, language-wise, so this skill doesn't go unnoticed.  I applaud your ability, even when I disagree with your position.

Now, my friend Greynol...you I have no use for.  You're drunk on moderating, and are a heavy-handed, imaginationless troll accusing others of membership in your clan.  If you were in my forum, you'd lose your first-lieutenant stripes right quick.  You violate your own terms of service merely by showing up in a thread, and on the internet, hypocrites are seldom rewarded.

This is not a troll post and I am not baiting.  Thank you Ethan for moderating your position.  I didn't expect you would, but I'm happily pleased that you have.

dwoz
Title: AES 2009 Audio Myths Workshop
Post by: John Eppstein on 2010-03-25 00:42:35
I'm not sure where reverb would fall into this. It might show up on a frequency response measurement. We could argue about frequency response vs impulse response - but that's a pointless argument.

I don't consider reverb effects in my four parameters because, at heart, reverb is an "external effect" that happens acoustically in enclosed spaces. Yes, it can be emulated by hardware and software devices, so you can still assess frequency response and distortion.
I was thinking about the effect you sometimes get with valve amplifiers, where the metal in the valves sings along with the music. Replace the speakers with an 8 ohm resistor, crank up the volume, put your ears (not too) close to the valves, and you can hear it. Also, if you use the amplifier normally, and tap the valves, you can hear the tapping through the speaker. These two effects together suggest to me that, under certain circumstances, some valve amps will act as little reverb chambers. I've no idea if it's audible. I suspect it could be caught by a frequency response plot, but might just fall within what most people would judge to be an "acceptable deviation" - while in practice it might not be "acceptable" because the "deviation" occurs so long after the original sound.


The phenomenon you are referring to is known as tube microphonics and is caused by defective tubes (the structural elements inside the tube are loose and able to vibrate). Many brand new tubes exhibit this defect; in some models and brands of tube, as many as 80-90% of tubes off the production line will be microphonic to some degree. Whether this presents a problem depends on the application - many tubes that function perfectly well in the driver stage of a hi-fi power amp will be totally unusable in the front end of a high-gain guitar amp such as a Mesa Boogie. This is why tubes should always be hand-selected in the equipment they will be used in for audio applications. It should also be noted that this defect is undetectable on any type of conventional tube tester - a screamingly microphonic tube will still test good. (I am not considering a cranked Mesa Boogie to be a conventional tube tester in this case.) It should also be noted that some of the most highly respected tube brands in terms of sound quality also have some of the highest rates of microphonic tubes, which is why not many people run Telefunken tubes in their high-gain guitar amps.

What I'm getting at is that the phenomenon you're describing is caused by defective parts and if your tubes produce ringing in the speakers when you tap on them they should be replaced with ones that don't, or at least wrapped with rubber bands to damp out the ringing.

And yes, the effects of microphonic tubes are definitely audible - really bad ones will actually cause acoustic feedback.
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-25 00:45:35
Actually there's an electronic reason, at least historically. Class AB and B amplifiers frequently exhibited some distortion around the crossover point in the waveform. This could be anywhere from a distinct "crossover notch" at zero crossing found in nearly all Class B designs to some very, very slight non-linearity around zero crossing in modern class AB amps. The thing is that this particular type of distortion is highly audible (being a form of nonharmonic distortion) and is much worse on low-level signals than on high-level signals, due to the fact that the amplitude of the distortion products is constant regardless of the amplitude of the signal.


(1) Crossover notches are not necessary. Very many class AB power amps have been built that lacked them. Audible crossover distortion was pretty well solved by all competent amp designers by the mid-late 1960s.

Quote
Now in modern, well designed Class AB equipment this really isn't a problem any more, at least for most folks, but some people still believe that there's a slight, but audible difference in the distortion products of the different design classes.


Some people believe in all sorts of imaginary things. So what?

Quote
Running various examples of gear on a high quality distortion analyzer yields different distortion spectra.


If the nonlinear distortion products are each, say, 100 or more dB down, their weighting isn't a problem for sure.  If they are all less than 40 dB down, then weighting can matter. Please see:

"Auditory Perception of Nonlinear Distortion" Authors: Geddes, Earl R.; Lee, Lidia W.
AES Convention:115 (October 2003) Paper Number:5891

Quote
Whether any of the differences are attributable to this particular design factor? Well, some say yes, some say no, I say use what sounds best to you.


Of course, what sounds best to you will be determined in accordance with TOS 8, right?

Quote
As to your amp, well, based on my experience with the B brand you may regret the decision when it goes up in smoke after the warranty expires.


The warranty expired over two years back, and it still keeps pumping out clean power. It's good fly paper for catching snobs. ;-)

Quote
BTW, did you recap the power supply and do the bias adjustment on the Threshold? Because if you didn't you didn't give the amp a fair evaluation.


That would probably be a TOS 8 violation.

Title: AES 2009 Audio Myths Workshop
Post by: aclo on 2010-03-25 00:46:51
True. But such "perfect, idealized models" although they may be handy for ivory tower types and internet discussions, have no actual relevance in the real world where nothing (and only nothing) is perfect.


Well, I must point out that one can also model noise and inaccuracy.

It's not like math types haven't seen this. I mean, Werner Heisenberg did a splendid job of it, and the Schroedinger Wave Equation is nothing but a model of the complex function of probability, and one that works frighteningly well to the present.


More to the point, there are tens of books on how one would go about modelling noisy phenomena, processing analog and digital signals etc.

However, it seems you're missing the point. It's this: if anybody turns up and actually knows what they are talking about on some technical issue that our guest experts have been recently lecturing on (nonlinear sound waves in pipe organs, numerical analysis etc), well, most likely that someone doesn't mix tracks for a living. Thus, he/she has no real practical experience, and it is not worth the expert's time to attempt to educate them (and never mind the fact that we're just discussing electronics at the end of the day).

Incidentally, it was only after I started reading online forums that I understood exactly what Douglas Adams' point about the spaceship full of "hairdressers, account executives, film makers, security guards, telephone sanitisers, and the like" (from Wikipedia: I don't remember the whole list!) was.
Title: AES 2009 Audio Myths Workshop
Post by: greynol on 2010-03-25 01:56:06
Now, my friend Greynol...you I have no use for.  You're drunk on moderating, and are a heavy-handed, imaginationless troll accusing others of membership in your clan.  If you were in my forum, you'd lose your first-lieutenant stripes right quick.  You violate your own terms of service merely by showing up in a thread, and on the internet, hypocrites are seldom rewarded.

Is someone upset that his analogy about lossy encoding got binned because it didn't leave any room for transparency?  Or is it that someone is mad that I won't let posts stray off-topic or allow people to wave their hands because they cannot adequately discuss the technical details that are presented on their merits?

This is not a troll post and I am not baiting.

Did you somehow expect this "troll" to ignore your post or something?  As one of your members who lost his ability to post here suggested, I'm not all that great at ignoring things.

Coming from the guy who claims the copyright on this (http://thewombforums.com/images/heading_new/area2.jpg) no less.
Title: AES 2009 Audio Myths Workshop
Post by: John Eppstein on 2010-03-25 02:14:44
Defining what affects audio reproduction has always been the entire point of my four parameters. I go out of my way to explain in forums (again and again and again) that I don't include intentional "euphonic" distortion in the list because that's a creative tool. As is reverb. This is why some people get so upset when I claim that a $25 SoundBlaster card has higher fidelity than the finest analog tape recorder. They immediately see red, and go on about how people prefer the sound of analog tape. And tubes. And hardware having transformers. And all the rest. But subjective preference was never my point or my intent.

--Ethan


Ethan, how can you say that a $25 converter card that only publishes specs at 1kHz (presumably because the response at other frequencies is all over the map) is better quality than my $100,000 Studer that has very tight published (and verified) specs from 20 Hz to 20kHz?
Title: AES 2009 Audio Myths Workshop
Post by: googlebot on 2010-03-25 02:29:34
Ethan, how can you say that a $25 converter card that only publishes specs at 1kHz (presumably because the response at other frequencies is all over the map) is better quality than my $100,000 Studer that has very tight published (and verified) specs from 20 Hz to 20kHz?


Luckily, very many people out there prefer acting on their own authority over that of a price tag. There are over 20,000 hits (http://www.google.de/search?q=rmaa+soundblaster) that might hint that $99,975 just doesn't make that much of a difference when you don't know how to spend it wisely.
Title: AES 2009 Audio Myths Workshop
Post by: John Eppstein on 2010-03-25 03:39:10
The bottom line is that simple definitions of accuracy may themselves be inaccurate.

That is an extremely interesting statement. But where does it leave us?


Going around in circles?



In the midst of an exciting, evolving technology.


Two very good answers - which brings us to my point.

It is my contention that the state of the art of audio measurement and the state of the science of human audio perception at this time are not accurate enough to really adequately quantify what we're discussing.

I'm not saying that the electronic measurement equipment isn't good enough - I'm sure it can measure the signal quite well. The problem is that we don't know how to interpret the measurements properly and in some cases we may not understand what needs measuring.

In terms of audio perception some aspects are fairly well understood, but other aspects - specifically how the brain handles perceptual information and how the perceptual systems encode information for transmission from the primary sensory organs to the brain are currently the subject of some very interesting research.

I think that in some ways we're attempting to do the equivalent of brain surgery with a dull pocketknife. When people using a conventional testing methodology such as ABX say that they can find no perceptual difference between an inexpensive consumer quality converter and a $10,000 mastering converter but an overwhelming majority of professional engineers can pick out work done with each (and unanimously prefer the professional unit) that tells me that there's something wrong with the testing methodology. (I'm not saying that ABX is an invalid or unuseful tool, I'm just saying that it has limitations.)
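For what it's worth, the statistics that sit behind an ABX verdict are simple enough to sketch (assumed Python, not from the post): the question a test answers is how likely the listener's score would be if they were only guessing.

```python
from math import comb

# Probability of scoring `correct`-or-more out of `trials` ABX trials by
# pure guessing (p = 0.5 per trial). A small value supports the claim
# that a difference was actually heard; a large one does not.
def abx_p_value(correct, trials):
    tail = sum(comb(trials, k) for k in range(correct, trials + 1))
    return tail / 2 ** trials

print(round(abx_p_value(12, 16), 4))   # 12/16 correct -> ~0.0384
```

Note what this does and does not establish: a null result says only that *this* listener, with *this* material and setup, did not demonstrate a difference, which is exactly the sort of limitation the posters here are arguing over.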

Science is supposed to be based on observation. We observe the world around us. We study our observations. We construct hypotheses explaining our observations and compare them to the behavior of the world; when they appear to fit they become standard (more or less) theory. We conduct tests of the theory and, as our technology progresses enough to provide sufficiently accurate tests, we prove the theory and it becomes law, or, unproven, it remains theory until a better explanation comes along.

We do not throw out observation simply because it doesn't agree with conventional wisdom, especially when conventional wisdom is to a large degree based on simplification. The Catholic Church tried that with Galileo.

A real scientist tries to find out those things that he doesn't know. He doesn't simply point to the established body of knowledge and treat it like scripture. That type of person is a pedant, not a scientist.

(Please note that I am not endorsing ghosties, faeries, wizards, or  expensive sculptures that make your dentist's stereo sound better......)
Title: AES 2009 Audio Myths Workshop
Post by: John Eppstein on 2010-03-25 03:55:23
Ethan, how can you say that a $25 converter card that only publishes specs at 1kHz (presumably because the response at other frequencies is all over the map) is better quality than my $100,000 Studer that has very tight published (and verified) specs from 20 Hz to 20kHz?


Luckily very many people out there prefer acting on their own authority over that of a price tag. There are over 20000 hits (http://www.google.de/search?q=rmaa+soundblaster) that might hint that $99975 just don't make that much of a difference, when you don't know how to spend it wisely.


20,000 hits? Are you saying there have been 20,000 hit records mixed on a Soundblaster? 
Or that there have been 20,000 hits on the website publishing the spec? 

Actually I only paid 5 grand for my Studer, but it did cost 100 grand new.

What's interesting about those specs is what they DON'T say. There is no spec for HD except at 1K. There are no distortion specs at all for any kind of distortion taken at lower levels than -3dBfs. There are other problems, but that's enough for a start.

I will say that those specs do look very good on paper at first glance.
Title: AES 2009 Audio Myths Workshop
Post by: shakey_snake on 2010-03-25 04:48:07
I think that in some ways we're attempting to do the equivalent of brain surgery with a dull pocketknife. When people using a conventional testing methodology such as ABX say that they can find no perceptual difference between an inexpensive consumer quality converter and a $10,000 mastering converter but an overwhelming majority of professional engineers can pick out work done with each (and unanimously prefer the professional unit) that tells me that there's something wrong with the testing methodology. (I'm not saying that ABX is an invalid or unuseful tool, I'm just saying that it has limitations.)
 
Why would you expect ABX testing to give analogous results to situations that don't control any number of a multitude of factors?
Title: AES 2009 Audio Myths Workshop
Post by: Light-Fire on 2010-03-25 05:43:05
...When people using a conventional testing methodology such as ABX say that they can find no perceptual difference between an inexpensive consumer quality converter and a $10,000 mastering converter but an overwhelming majority of professional engineers can pick out work done with each (and unanimously prefer the professional unit)

Those sound "engineers" are not real engineers. They are not likely to have engineering training, otherwise they would trust ABX instead of their gut feelings.

...that tells me that there's something wrong with the testing methodology. (I'm not saying that ABX is an invalid or unuseful tool, I'm just saying that it has limitations...

That is the same as saying the world is flat.

Title: AES 2009 Audio Myths Workshop
Post by: stephanV on 2010-03-25 07:55:15
I think that in some ways we're attempting to do the equivalent of brain surgery with a dull pocketknife. When people using a conventional testing methodology such as ABX say that they can find no perceptual difference between an inexpensive consumer quality converter and a $10,000 mastering converter but an overwhelming majority of professional engineers can pick out work done with each (and unanimously prefer the professional unit) that tells me that there's something wrong with the testing methodology. (I'm not saying that ABX is an invalid or unuseful tool, I'm just saying that it has limitations.)

When they make such claims without an ABX test, to me it would seem the "engineers" are talking out of their arse. If the difference is so obvious, it would be no problem to do it double-blinded, no? The limitation of ABX seems to be just that it doesn't give "engineers" and their following their special feeling of being better than the rest.

Quote
We do not throw out observation simply because it doesn't agree with conventional wisdom, especially when conventional wisdom is to a large degree based on simplification.

We do this all the time; it's called the placebo effect. Not all observations are equal.

Quote
The Catholic Church tried that with Galileo.

That's funny from someone who falsely started to preach about the limitations about ABX. Make no mistake about it, you are the church here.

Quote
A real scientist tries to find out those things that he doesn't know. He doesn't simply point to the establish body of knowledge and treat it like scripture. That type of person is a pedant, not a scientist.

I think you have shown here you don't know the first thing about scientific method.
Title: AES 2009 Audio Myths Workshop
Post by: hellokeith on 2010-03-25 08:00:55
Ethan, I am "glad of heart" to see you pulling back your statements into more tightly scoped contexts, where they belong.  Thank you!

Thank you Ethan for moderating your position.  I didn't expect you would, but I"m happily pleased that you have.


dwoz, thank you for complimenting Ethan on something he did not do (nor need to do).  <-- See how a not-so-cleverly-disguised compliment works?


JJ,

Reading over David's posts, I just thought of something.  I have been in the small backroom of a music store testing out a speaker system where, at 2 on the amp, I could get hearing damage.  But when using that system outdoors as the PA for a band, 10 on the amp was nowhere near loud enough - simply not enough amp + speaker power for outdoors.  However, would the distortions introduced by lossy codecs be more evident in this type of situation? Play a 128 kbps MP3 file through that system turned up to 10 outdoors - would you hear the lossy distortions at a reasonable distance from the speakers?
Title: AES 2009 Audio Myths Workshop
Post by: Kees de Visser on 2010-03-25 08:18:43
Quote
BTW, did you recap the power supply and do the bias adjustment on the Threshold? Because if you didn't you didn't give the amp a fair evaluation.
That would probably be a TOS 8 violation.
Perhaps not if the measured difference signal >-120dBFS ?
Title: AES 2009 Audio Myths Workshop
Post by: Kees de Visser on 2010-03-25 09:15:26
However, would the distortions introduced in lossy codecs be more evident in this type of situation? Play a 128kbps mp3 file through that system turned up to 10 outdoors, would you hear the lossy distortions at a reasonable distance from the speakers?
In the end it's the SPL at the ear that could make the difference, not necessarily the power of the PA system. You can also go pretty loud with headphones.
I expect more effect from the room acoustics (reverberation) on the masking properties of the codec. It should be possible to simulate that at home with a reverberation (e.g. convolution) plugin.
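The home-simulation idea can be sketched as follows (assumed Python; a real convolution reverb plugin would use a measured room impulse response rather than this synthetic one):

```python
import numpy as np

# Hypothetical sketch: simulate a room by convolving a dry signal with a
# synthetic, exponentially decaying noise-like impulse response - the
# same operation a convolution reverb performs with a measured IR.
fs = 44100
dry = np.zeros(fs // 2)
dry[0] = 1.0                                  # a click as the dry signal

rt60 = 0.3                                    # assumed decay time, seconds
n_ir = int(fs * rt60)
decay = 10 ** (-3 * np.arange(n_ir) / n_ir)   # -60 dB over rt60
ir = np.random.randn(n_ir) * decay            # synthetic impulse response
ir /= np.max(np.abs(ir))

wet = np.convolve(dry, ir)                    # the reverberant result
```

One could then run an ABX comparison of the codec on `dry` versus `wet` material to see whether the added reverberation changes how well the coding noise is masked.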
Title: AES 2009 Audio Myths Workshop
Post by: 2Bdecided on 2010-03-25 09:17:25
I've snipped the deeply embedded quotes - you'll have to read back to get context. Basically Arny claiming that lossy codecs pass all traditional measurements, me saying they don't, but it's a silly argument because Arny hasn't defined what "traditional measurements" he's talking about, then...

I have. Two words: Audio Rightmark.

Audio Rightmark certainly does include measurements that reveal the noise/distortion introduced by mp3 encoders.

See here for mp3 vs wav:

http://www.jensign.com/RMAA/ZenXtra/Comparison.htm (http://www.jensign.com/RMAA/ZenXtra/Comparison.htm)

Scroll down to THD and IMD graphs - quite revealing.


Revealing of what?

I see nothing that worries me.

You frequently claim we can safely ignore anything that's more than 100dB down. Those graphs show junk that's a mere 50dB down.

Now, I know that junk 50dB down can be masked, but that's not the point. In your statement, either you are defining a new threshold - i.e. that we can safely ignore anything that's more than 50dB down - or what you're saying is "based on my knowledge of psychoacoustics, the junk I see 50dB down is probably inaudible". So you're not relying solely on Rightmark - you're relying on Rightmark plus the new patent Arnold Krueger psychoacoustic model.

tl;dr - your claim doesn't add up.

Cheers,
David.
Title: AES 2009 Audio Myths Workshop
Post by: 2Bdecided on 2010-03-25 09:22:30
How can you say there is "zero added noise" when the dithering process deliberately adds a specific amount of noise?
How much dither is added once the signal is digitized?
1-bit RMS after most signal transforms - I'm sure you don't need to ask this.

Quote
I didn't say A/D system, I said digital system. Yes there are types of discretionary processing that when done in the digital domain may require additional randomization of quantization errors, but for many very useful situations such as transmission and storage of data, no dither is added in the digital domain.
Transmission and storage? No digital volume control then. Or EQ. Or even mixing.
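The point is easy to demonstrate (assumed Python, not from the post): even a bare digital volume change produces samples that no longer sit on the original 16-bit grid, so the result has to be requantized - and that requantization is exactly the step where dither belongs.

```python
import numpy as np

# Hypothetical example: apply a digital gain to 16-bit samples.
x = np.array([1000, -2000, 32767, -32768], dtype=np.int64)
gain = 0.8                        # roughly a -1.9 dB volume change

y = x * gain                      # 800.0, -1600.0, 26213.6, -26214.4
assert not np.all(y == np.round(y))   # some results fall off the grid

# Plain rounding back to integers re-introduces signal-correlated
# quantization error - the case where dither is called for.
y_q = np.round(y).astype(np.int64)
```

The same applies to EQ and mixing: any multiply or sum of scaled samples grows the word length past what the storage format carries.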

Really - you make some very silly arguments. All to avoid admitting you were wrong.

Though sometimes I feel sure you must know you're posting something wrong, or at least something technically correct but intentionally misleading or incomplete, and do it anyway - just to have a nice argument? I don't know - but it's not helpful.

Cheers,
David.
Title: AES 2009 Audio Myths Workshop
Post by: 2Bdecided on 2010-03-25 09:25:43
...snip stuff about valves/tubes...

The phenomenon you are referring to is known as tube microphonics and is caused by defective tubes...

Thanks John.

My point was that these measurements or characteristics are supposed to be defining audio equipment - or at least defining a threshold beyond which we can be sure it's transparent.

For the purposes of this discussion, I don't care whether it's because the tubes are broken, or they all do that - I care about whether the defined measurements catch this fault.

Since no one has been brave enough to properly define the measurements yet, we can't be sure.

Cheers,
David.
Title: AES 2009 Audio Myths Workshop
Post by: 2Bdecided on 2010-03-25 09:42:07
the metal in the valves sings along with the music ... tap the valves, you can hear the tapping through the speaker.

I suppose you could call that reverb, but I call it ringing because it has a single dominant frequency. Either way, it's one very good reason to avoid tubes in all audio gear.

Quote
I hope you don't think I'm being too harsh, but this renders the whole exercise a bit meaningless for me. It's turning from "this characterises any audio component" to "this characterises any audio component, except the ones it doesn't". There's a problem: who is to decide which ones it doesn't characterise?

It's not meaningless IMO. The whole point of the "four parameters" is to define what affects audio reproduction. This word is in the title of my AES Workshop (http://www.aes.org/events/127/workshops/session.cfm?ID=2127), and it's also clear in the script which I uploaded the other day and linked to in an earlier post in this thread. The script is HERE (http://www.ethanwiner.com/aes/), and the exact wording is:

Quote
The following four parameters define everything needed to assess high quality audio reproduction:

Defining what affects audio reproduction has always been the entire point of my four parameters. I go out of my way to explain in forums (again and again and again) that I don't include intentional "euphonic" distortion in the list because that's a creative tool. As is reverb.

This is all fine and good - I have no problem with this (I think maybe other people do).

You are looking at equipment which aims to be transparent.

Equipment which aims to change the signal is outside the scope of the discussion.

Fine.

My point is really simple:

For your measurements (and the associated pass/fail thresholds) to be believable and useful, they need to be able to raise a red flag which says "this is not transparent" if something isn't transparent. In this context, we should be able to measure anything - and always get that red flag if it's appropriate.

If that's not the case, your measurements don't define transparency in the way that you claim.

If there's a class of audio component - and I mean anything - which your measurements say is "transparent", but which can be ABX'd, then your measurement suite is incomplete and/or you've got the wrong measurements. IMO!



Quote
This is why some people get so upset when I claim that a $25 SoundBlaster card has higher fidelity than the finest analog tape recorder. They immediately see red, and go on about how people prefer the sound of analog tape. And tubes. And hardware having transformers. And all the rest. But subjective preference was never my point or my intent.

Even without the possible subjective preference for distortion, I think it's a harder problem to take two sets of measurements and say definitively "X sounds better than Y" - unless you have the simple case where X has identical faults to Y, but at half the magnitude (for example). In the general case, where you have different measurements, it's a multi-dimensional problem, and predicting whether a certain amount of fault type A is more objectionable to human ears than a different amount of unrelated fault type B is a really hard problem.*

So let's not even try to solve that "which is better" problem just yet - let's take baby steps first: define a test which can be applied to any piece of audio equipment, whereby if it passes that test, it's transparent. If it fails the test, it may be non-transparent.**

I initially thought that's what you were doing - I now realise it's not. But I think it would be a great thing to do.

Cheers,
David.

P.S.

* - Put psychoacoustics into the equipment you're measuring, and it simply becomes a battle of which has the more human-like model: the equipment under test, or the measuring equipment. To get a trustworthy judgement of which is better, you have to resort to human ears.
** - I think (Arny disagrees) that anything psychoacoustics-based will always be judged as "non transparent" by this hypothetical set of measurements - but I think that's OK for now - better to err on the side of caution than to start with something that can produce both false positives and false negatives.
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-25 10:31:25
You are looking at equipment which aims to be transparent.

Equipment which aims to change the signal is outside the scope of the discussion.

Fine.

My point is really simple:

For your measurements (and the associated pass/fail thresholds) to be believable and useful, they need to be able to raise a red flag which says "this is not transparent" if something isn't transparent. In this context, we should be able to measure anything - and always get that red flag if it's appropriate.

If that's not the case, your measurements don't define transparency in the way that you claim.


The key phrase is "in the way that you claim".  Ethan's claims were made in the context of a discussion of conventional audio production components - mic preamps, audio interfaces, etc. Taking claims outside of their stated domain is the *first* of Schopenhauer's 38-odd ways to win arguments by deceptive means - 38 ways to win arguments by cheating. This list is over 100 years old! (http://www.indiauncut.com/iublog/article/38-ways-to-win-an-argument-arthur-schopenhauer/)

If you consider the recent results of Rightmark testing of MP3 coders, you will see how this works in practice.

While I have no problems with the results after additional analysis, there were a few red flags contained therein. There were more than a few things that screamed "Not your father's McIntosh amplifier". ;-)

I gave the test results a "Perceptual coder provisional".

Now some may have not seen the red flags, but then most of you never tested your father's McIntosh amplifier!

The caveat which the knowledgeable people who post here know is that the only reliable way to test coders to this day is reliable subjective testing.

If you throw the Rightmark tests out for that reason, then you are missing the point. *Every* conventional component in a record/play chain is a proper target for Rightmark testing. Some want to say that Rightmark is no good because they can't figure out whether a component is conventional or not. I say that if you can't tell a conventional component (e.g. power amp or mic preamp) from an unconventional component (coder or decoder for a perceptual data stream) then you don't belong in the conversation. If you want to take your ball and bat and go home, then do so. Otherwise, man up!

As far as pass/fail conditions go, it appears that the initial interpretation of the Rightmark results for the coders was "fail".  This seems appropriate. Had it been a McIntosh amplifier or a computer audio interface, it would have been a fail. And the initial domain of Ethan's presentation was conventional audio components.

I think the point is that some people seem to be looking for no-brainer tests. No-brainer tests are for people with no brains, right? ;-)


Title: AES 2009 Audio Myths Workshop
Post by: 2Bdecided on 2010-03-25 10:40:29
I think the point is that some people seem to be looking for no-brainer tests. No-brainer tests are for people with no brains, right? ;-)
No, the "no-brainer" test will, if passed, guarantee that something is transparent.

The RMAA stuff is a very good starting point.

Now, is anyone going to actually write the tests out properly?

i.e. For each one, list the test stimulus, method of analysis, and pass/fail criteria.

I assume they're all there in the program itself, but they're not in the manual - are they somewhere which could be easily copied/pasted into this discussion?

Cheers,
David.
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-25 10:50:07
How can you say there is "zero added noise" when the dithering process deliberately adds a specific amount of noise?
How much dithering is added once the signal is digitized?
1-bit RMS after most signal transforms - I'm sure you don't need to ask this.
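JJ's "you really should always dither" can be illustrated with a small sketch (my own, not from the video): quantizing a tone whose amplitude is below one LSB. Plain rounding erases the tone entirely; TPDF dither trades that correlated error for benign noise and the tone survives. The 997 Hz frequency, step size and seed are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 48000
t = np.arange(fs) / fs

q = 2.0 ** -7                               # quantization step (illustrative "LSB")
x = 0.4 * q * np.sin(2 * np.pi * 997 * t)   # a tone *below* one LSB in amplitude

# Plain rounding: every sample rounds to zero, so the tone is simply erased
undithered = np.round(x / q) * q

# TPDF dither: add triangular-PDF noise of +/- 1 LSB peak before rounding;
# the quantization error decorrelates from the signal and the tone survives
tpdf = (rng.uniform(-0.5, 0.5, fs) + rng.uniform(-0.5, 0.5, fs)) * q
dithered = np.round((x + tpdf) / q) * q

corr = float(np.corrcoef(dithered, x)[0, 1])
print("undithered output is silence:", not undithered.any())
print("dithered output still correlates with the tone:", round(corr, 2))
```

The same mechanism is what turns harmonic distortion into an unobtrusive noise floor on full-scale material.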

Quote
I didn't say A/D system, I said digital system. Yes there are types of discretionary processing that when done in the digital domain may require additional randomization of quantization errors, but for many very useful situations such as transmission and storage of data, no dither is added in the digital domain.


Transmission and storage? No digital volume control then. Or EQ. Or even mixing.


So what?

You can make digital eq and mixing as good as you want by using longer data words. You could never do that with analog.

Furthermore, digital eq and mixing were not practical and generally available until a decade or two *after* there was a CD player on just about every block in the US. We did digital audio on a day-to-day basis and enjoyed great advantages because of it for decades, without either digital mixers or eq. We still did those things in the analog domain and were intensely content.

So you may want to make totems out of digital eq and consoles, but to me they are just frosting on the digital cake. The important sound quality advantages came out of digital transmission and storage. The irony is that advances in digital transmission and storage make perceptual coding almost moot.

Quote
Really - you make some very silly arguments. All to avoid admitting your were wrong.


You are claiming to be able to read my mind.

In the face of a person who is as skilled at reliable mind reading as you seem to want to pretend to be, I wonder why I am so stupid as to even post on HA just once!

The real truth is that I'm biting my tongue red and bloody in the face of some incredibly obtuse talk.

There's an old saying that to a sufficiently uneducated mind, modern technology appears to be magic. A corollary seems to be that to a sufficiently uneducated mind, modern technology appears to be stupid.



Quote
Though sometimes I feel sure you must know you're posting something wrong, or at least something technically correct but intentionally misleading or incomplete, and do it anyway - just to have a nice argument? I don't know - but it's not helpful.


This from someone who posts that every audio component must pass a -120 dB difference test to be sonically transparent?  The Stereophile forum is over there!

LOL!
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-25 10:54:56
Well, it's pretty easy to do a null test and listen to the leftover difference products. To my ear it sounds like a good part of the residue is transient information, which would jibe with the subject reports that lossy files tend to lack depth/dimensionality or sound somehow "flatter".


Looks to me like a potential TOS 8 infraction.
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-25 11:03:31
Ah, but in the case of a pipe organ you're looking at the wrong thing.


Says who?

I was up in some pipe organ chambers last night, and what I saw belied just about everything that you say.


Quote
In the case of a pipe organ the pipes do not function as single units, they are part of an array inside a tone cabinet.


Incorrect. Some pipes are in the open, and some are in cabinets. Furthermore, the cabinets are generally at least partially open in actual use.

Furthermore, my discussion was of bass tones and subwoofers, and the corresponding pipes in a pipe organ are always out in the open.

Finally, the purpose of the cabinets is to be a sort of acoustic EFX box, IOW they are there to intentionally and discretionarily distort the sound. That puts your discussion of them in the same category as someone who complains about the poor frequency response of tone controls when placed off-center.


Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-25 11:12:55
The phenomenon you are referring to is known as tube microphonics and is caused by defective tubes


This is incorrect. Virtually all tubes, such as those commonly used in legacy audio and for EFX, are far, far more microphonic than their SS equivalents. Tubes need not be defective to be microphonic. There's a reason why shock mounts have been commonly used with tubes in critical applications all along.

Take just about any piece of tubed audio equipment, subject it to some vibration, and a good FFT analyzer will show measurable amounts of both AM and FM distortion. A corresponding piece of SS gear will perform several orders of magnitude better.


Title: AES 2009 Audio Myths Workshop
Post by: googlebot on 2010-03-25 11:29:16
This from someone who posts that every audio component must pass a -120 dB difference test to be sonically transparent?  The Stereophile forum is over there!

LOL!


Sorry, I tried to be polite, but this is getting pathetic. Either you are just getting stubborn in your old age or simply lacking even basic reading comprehension or logic skills. Maybe it is true that one becomes what he fights, given enough time.

2bdecided has made it very clear that, since it is hard to define a test suite with fine-grained perceptual thresholds, it could make sense to define a set of safe thresholds. That would possibly include a hefty safety margin compared to common listening environments, but, for example, even -120 dB is realizable with commodity technology today. The benefit of a suite like this would be the ability to declare a component transparent once and for all. As long as nobody was able to prove that the given test suite was inappropriate for declaring 100% transparency, the transparency claim could be upheld for a whole class of components without having to battle over each one separately. Of course, before you try to play word games again, such a test suite would not prove that a component which doesn't meet its criteria is not transparent. But that wasn't the point of the proposal.
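The core measurement behind such a safe-threshold suite is the null test: subtract the device's output from its input and express the residue in dB relative to the reference. A minimal sketch (mine; the -130 dB noise device and -120 dB pass line are illustrative, and real null tests must first solve time alignment and gain matching):

```python
import numpy as np

def residual_db(reference, output):
    """Level of the null (difference) signal relative to the reference, in dB.
    Assumes the two signals are already time-aligned and gain-matched --
    in a real null test that alignment is most of the work."""
    diff = output - reference
    ref_rms = np.sqrt(np.mean(reference ** 2))
    diff_rms = np.sqrt(np.mean(diff ** 2))
    if diff_rms == 0.0:
        return float("-inf")
    return float(20 * np.log10(diff_rms / ref_rms))

# Hypothetical device under test: unity gain plus a -130 dB white noise floor
rng = np.random.default_rng(1)
fs = 48000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 1000 * t)
y = x + rng.normal(0.0, 10 ** (-130 / 20), fs)

r = residual_db(x, y)
print("residual:", round(r, 1), "dB; passes the -120 dB line:", r < -120)
```

A device passing such a line would be declared transparent with margin to spare; failing it, as noted above, proves nothing either way.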
Title: AES 2009 Audio Myths Workshop
Post by: Kees de Visser on 2010-03-25 11:42:07
I think the point is that some people seem to be looking for no-brainer tests. No-brainer tests are for people with no brains, right? ;-)
That's me! And, to make things worse, I don't even have golden ears.
So please help us define a set of objective, repeatable measurements to verify transparency.
(NB: this is not to say that devices that fail the test can't be transparent.)
Title: AES 2009 Audio Myths Workshop
Post by: usernaim on 2010-03-25 12:45:26
It seems to me that for Ethan's claims to be valid, the device under test must be a black box with respect to the test.  Hence, problems in perceptual coding boxes must be captured if we are to accept his claims.

The irony for this discussion is that given the design aims and criteria for coders, for any given level of measurement shortcoming (like the IMD of the MP3 linked above), it should be the LEAST perceptually noticeable effect one could achieve for that given result.  I.e. the 0.2% IMD in the example should be the least bad 0.2% IMD as judged by a wide population, if the coders are finding success.

Of course the codecs are multidimensional so actually my logic only applies to the whole suite of measurements, not a single one taken on its own.

Now that said, if the reproduction system used in the listening tests (and surely this is true for public tests) is deficient in some respect, then that will mask the audibility of errors in that respect.  For instance, the linked results show what is visually pretty massive smearing of a 1 kHz transient.  If the loudspeakers used for the test compress dynamics (and almost all do), then the perceptual test tells us more about the limitations of the loudspeakers than the limitations of human perception.

Very anecdotally, I have had some success in blind testing of codecs by focusing on transients and how "free" they sound to me.
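For reference, distortion percentages map directly onto the dB-down figures used elsewhere in this thread; a one-liner (mine, purely illustrative) makes the conversion explicit:

```python
import math

def percent_to_db(pct):
    """Distortion quoted as a percentage, re-expressed as dB below the signal."""
    return 20 * math.log10(pct / 100.0)

# The 0.2 % IMD figure discussed above sits at roughly -54 dB
print(round(percent_to_db(0.2), 1))  # -> -54.0
```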
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-25 12:54:56
Ethan, how can you say that a $25 converter card that only publishes specs at 1kHz (presumably because the response at other frequencies is all over the map) is better quality than my $100,000 Studer that has very tight published (and verified) specs from 20 Hz to 20kHz?


I see a number of false presumptions above:

(1) The false presumption that published specs somehow always limit the performance of a piece of equipment to be worse than the specified performance under all other conditions.

(2) The false presumption that the price/performance relationship is always the same, regardless of technological developments - i.e., if something costs more, it *has* to be better.

(3) The false presumption that specifications that are more tightly specified are always better than specifications that are loosely specified or not publicly specified.

(4) The false presumption that performance which is specified from 20 Hz to 20 kHz is always better than performance that is specified over slightly narrower ranges.

The reality is that the core technology of a $25 SB card is sigma-delta converters, whose performance is inherently incredibly tightly controlled. They operate almost totally in the digital domain, so much so that even their noise floors are tightly controlled. Typically, they either beat spec or don't work at all.

In contrast, an analog recorder's performance is highly dependent on routine maintenance, media and several large random variables. Its performance often changes measurably and even audibly while it is running.  While it is predictable that its performance will change during a recording session, the degree to which it will change, or in which direction, is not predictable. No analog recorder has been found to be sonically transparent in a sensitive ABX test.

Furthermore, due to wavelength effects, the response of an analog recorder below 80 Hz is rarely anything like flat. The operative phrase is "Head bumps".  In contrast, other than a low end roll-off due to coupling capacitors in the analog domain, the LF performance of a sigma-delta converter is nearly perfect. Finding that a sigma-delta converter is sonically transparent at reasonable quality levels, bit depths and sample rates is commonplace.
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-25 13:04:09
This from someone who posts that every audio component must pass a -120 dB difference test to be sonically transparent?  The Stereophile forum is over there!

LOL!



2bdecided has made it very clear that, since it is hard to define a test suite with fine-grained perceptual thresholds,


The means by which this may have been accomplished for certain naive readers is that the size of the grains was made infinitesimal, and the suite was required to work for everything from a subatomic IC to an audio system with the complexity of a Boulder Dam-sized collection of the most miniaturized microelectronics, designed by Lex Luthor. ;-)

If you define an impossible problem, I can pretty well guarantee that it won't be solved this week.

IMO, everybody who can't tell the difference between a mic preamp and a perceptual coder needs to first learn how to do that.

Some people around here would seem to need to read up on Schopenhauer's 38 stratagems, so that they can at least up the complexity of their pointless rhetoric! ;-)
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-25 13:12:24
You frequently claim we can safely ignore anything that's more than 100dB down. Those graphs show junk that's a mere 50dB down.


The obvious flaw in the statement above is that saying that we can safely ignore anything that's more than 100 dB down does not necessarily preclude saying that in many or at least some cases we can safely ignore things that are as little as 50 dB down.

That doesn't even require masking; often applying Fletcher-Munson is enough.

If I could just get some people to use good logic and rhetoric!

If I could just find a few good people who knew what a masking curve or an audibility curve was and how to apply it to a technical test report!
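The Fletcher-Munson point can be made concrete with Terhardt's well-known approximation of the threshold in quiet; a sketch of mine, where the 80 dB SPL playback level and the -50 dB artifact are illustrative numbers, not anything measured in this thread:

```python
import math

def threshold_in_quiet_db_spl(f_hz):
    """Terhardt's approximation of the absolute threshold of hearing (dB SPL)."""
    f = f_hz / 1000.0
    return (3.64 * f ** -0.8
            - 6.5 * math.exp(-0.6 * (f - 3.3) ** 2)
            + 1e-3 * f ** 4)

# A hypothetical artifact 50 dB below music played back at 80 dB SPL sits at
# 30 dB SPL; even before any masking, it only clears the threshold in quiet
# over part of the spectrum:
artifact_db_spl = 80.0 - 50.0
for f in (50, 1000, 16000):
    audible = artifact_db_spl > threshold_in_quiet_db_spl(f)
    print(f"{f} Hz: threshold {threshold_in_quiet_db_spl(f):5.1f} dB SPL, audible: {audible}")
```

At 50 Hz and 16 kHz the hypothetical -50 dB artifact falls below the threshold in quiet; at 1 kHz it does not, which is why "50 dB down" is sometimes safe and sometimes not.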

Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-25 13:14:59
I think the point is that some people seem to be looking for no-brainer tests. No-brainer tests are for people with no brains, right? ;-)
No, the "no-brainer" test will, if passed, guarantee that something is transparent.

The RMAA stuff is a very good starting point.

Now, is anyone going to actually write the tests out properly?



You're asking for a lot of free consulting at a fairly high level of competence.
Title: AES 2009 Audio Myths Workshop
Post by: aclo on 2010-03-25 13:33:22
You frequently claim we can safely ignore anything that's more than 100dB down. Those graphs show junk that's a mere 50dB down.


The obvious flaw in the statement above is that saying that we can safely ignore anything that's more than 100 dB down does not necessarily preclude saying that in many or at least some cases we can safely ignore things that are as little as 50 dB down.


It's fairly obvious that this is precisely his point and has been from the beginning.
Title: AES 2009 Audio Myths Workshop
Post by: aclo on 2010-03-25 13:36:03
I think the point is that some people seem to be looking for no-brainer tests. No-brainer tests are for people with no brains, right? ;-)
No, the "no-brainer" test will, if passed, guarantee that something is transparent.

The RMAA stuff is a very good starting point.

Now, is anyone going to actually write the tests out properly?



You're asking for a lot of free consulting at a fairly high level of competence.


So what is the point of claiming A and then, when asked to demonstrate it, stating that you will not? Just to add noise to an already practically random "discussion"? (Yes, I know: "define A", "show me where I claimed it", etc.)
Title: AES 2009 Audio Myths Workshop
Post by: Notat on 2010-03-25 13:50:45
The key phrase is "in the way that you claim".  Ethan's claims were made in the context of a discussion of conventional audio production components - mic preamps, audio interfaces, etc. Taking claims outside of their stated domain is the *first* of Schopenhauer's 38-odd ways to win arguments by deceptive means - 38 ways to win arguments by cheating. This list is over 100 years old! (http://www.indiauncut.com/iublog/article/38-ways-to-win-an-argument-arthur-schopenhauer/)

Looks like we've found the blueprints for this thread!
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-25 13:53:23
You frequently claim we can safely ignore anything that's more than 100dB down. Those graphs show junk that's a mere 50dB down.


The obvious flaw in the statement above is that saying that we can safely ignore anything that's more than 100 dB down does not necessarily preclude saying that in many or at least some cases we can safely ignore things that are as little as 50 dB down.


It's fairly obvious that this is precisely his point and has been from the beginning.


Then he's been promoting erroneous thinking all that time.

Too bad. :-(

Waste of time, bandwidth.
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-25 14:05:23
The key phrase is "in the way that you claim".  Ethan's claims were made in the context of a discussion of conventional audio production components - mic preamps, audio interfaces, etc. Taking claims outside of their stated domain is the *first* of Schopenhauer's 38-odd ways to win arguments by deceptive means - 38 ways to win arguments by cheating. This list is over 100 years old! (http://www.indiauncut.com/iublog/article/38-ways-to-win-an-argument-arthur-schopenhauer/)

Looks like we've found the blueprints for this thread!


I guess that not everybody found out about Schopenhauer's list of 38-odd ways to win arguments by deceptive means as many decades back as some of us did.

Now this place is *really* going to go downhill?  ;-)

BTW, the recent history of this for me was that maybe a decade ago, JA mentioned a dinner at which someone suggested that I was a robot that had been programmed with Schopenhauer's list of 38 stratagems.  The context was that many golden ears are surprised by how the pro-ABX people tend to argue the same points. They call it "programming"; we call it orthodox technology.

I first learned of the 38 stratagems when I was an undergraduate engineering student, umm, about 1966.  For the record, I try to avoid them.

Another good source of ways not to argue is any good book about rhetoric. In the first few chapters they usually talk about common fallacious arguments.

If people are intentionally using fallacious arguments then that means I already won, right? ;-)
Title: AES 2009 Audio Myths Workshop
Post by: googlebot on 2010-03-25 14:06:50
For the record, I try to avoid them.

 
Title: AES 2009 Audio Myths Workshop
Post by: aclo on 2010-03-25 14:15:44
I guess that not everybody found out about Schopenhauer's list of 38-odd ways to win arguments by deceptive means as many decades back.


I think that list clearly demonstrates the advantages of modern technology: Schopenhauer worked out, by dint of hard work and thought, a 38-point list of stratagems so long ago. Nowadays, with internet connectivity, one need only peruse a mailing list or online forum for a few moments to directly observe all 38 constantly in action.

Imagine what Schopenhauer would have done with that ability...
Title: AES 2009 Audio Myths Workshop
Post by: googlebot on 2010-03-25 14:23:28
Imagine what Schopenhauer would have done with that ability...


He may have refrained from marrying a poodle.
Title: AES 2009 Audio Myths Workshop
Post by: 2Bdecided on 2010-03-25 14:44:55
@Arnold,

usernaim has written three vital words that I didn't think to, but they are the crux of it: black box testing. That's what we need. If you mistake this for Schopenhauer's 38 stratagems, I really think we're lost.

As for "If I could just find a few good people who knew what a masking curve or an audibility curve was and how to apply it to a technical test report!" - I created an auditory model for the assessment of coded audio for my PhD - yet there are several people here I'm not worthy to wash the feet of  - so I don't think we're lacking in this respect.

"You're asking for a lot of free consulting at a fairly high level of competence." - which is strange, because I thought nailing transparency in audio components was your life's work - certainly the reason for posting so much on the net.

It's not going to benefit me. I don't work in audio any more. It's just an interest for me now.

I suspect (and I'm saying this out of sadness, rather than to bait you) that it falls into the category of "too hard", so you're not willing to attempt it.

Cheers,
David.
Title: AES 2009 Audio Myths Workshop
Post by: Ethan Winer on 2010-03-25 15:31:30
in my opinion, you're baiting people with this approach.

Probably, but it's the truth.

Quote
Surely you recognize that one way people may want to "assess fidelity" is by how it subjectively sounds to them.

But that's not what fidelity is!

--Ethan
Title: AES 2009 Audio Myths Workshop
Post by: Ethan Winer on 2010-03-25 15:35:03
Ethan, I am "glad of heart" to see you pulling back your statements into more tightly scoped contexts, where they belong.  Thank you! ... I think it is a great sign of humility and honesty that you have done this, and I applaud you! ... Thank you Ethan for moderating your position.

I don't see where I've changed either my position or how I present my position. Maybe you're reading now with a less cynical eye? Or just reading what I write more carefully?

--Ethan
Title: AES 2009 Audio Myths Workshop
Post by: Ethan Winer on 2010-03-25 15:51:38
Ethan, how can you say that a $25 converter card that only publishes specs at 1kHz (presumably because the response at other frequencies is all over the map) is better quality than my $100,000 Studer that has very tight published (and verified) specs from 20 Hz to 20kHz?

So you haven't watched my AES Audio Myths Workshop (http://www.youtube.com/watch?v=BYTlN6wjcvQ) video either? At 41:28 it shows specs side by side for a typical consumer sound card versus a Studer A-810 recorder. Guess which wins in every single category? Now, you could argue that the sound card specs are not very complete, and you'd be right. But what part of 109 dB s/n versus 74 dB (best case for the Studer) is confusing? My recent simple test with sine waves also shows a lot of info all at once in just a few FFT graphs, repeated here below for your convenience. The top graph shows the noise for one record/play generation, and the lower series of graphs shows distortion and noise for the original test tones and two sound cards.

--Ethan

(http://www.ethanwiner.com/misc-content/sound_card_sb_noise.gif)

(http://www.ethanwiner.com/misc-content/sound_card_distortion_corrected.gif)
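As a back-of-envelope check on the multi-generation comparison (a sketch of mine, assuming each pass contributes uncorrelated noise at the quoted single-pass floor, so noise powers sum and S/N drops by 10*log10(n)):

```python
import math

def snr_after_generations(snr_db, n):
    """S/N after n record/play generations, assuming each pass adds
    uncorrelated noise at the device's single-pass noise floor
    (noise powers then sum, costing 10*log10(n) dB)."""
    return snr_db - 10 * math.log10(n)

# Using the figures quoted above: ~109 dB for the sound card,
# ~74 dB best case for the Studer, over 20 generations
for name, snr in (("sound card", 109.0), ("Studer A-810", 74.0)):
    print(name, round(snr_after_generations(snr, 20), 1), "dB")
```

Twenty generations cost about 13 dB in either case, leaving roughly 96 dB versus 61 dB under this simple model; it ignores the tape path's wow, flutter and response errors, which only widen the gap.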
Title: AES 2009 Audio Myths Workshop
Post by: Ethan Winer on 2010-03-25 15:54:27
I'm not saying that the electronic measurement equipment isn't good enough - I'm sure it can measure the signal quite well. The problem is that we don't know how to interpret the measurements properly and in some cases we may not understand what needs measuring.

Who is this "we" that you speak of?

--Ethan
Title: AES 2009 Audio Myths Workshop
Post by: Ethan Winer on 2010-03-25 16:08:42
For your measurements (and the associated pass/fail thresholds) to be believable and useful, they need to be able to raise a red flag which says "this is not transparent" if something isn't transparent. In this context, we should be able to measure anything - and always get that red flag if it's appropriate. If that's not the case, your measurements don't define transparency in the way that you claim.

To be clear, I'm no expert in what is audible at what relative levels in the way JJ is an expert. I have a pretty good handle on it! And I think the various demos in my AES Myths video show clearly at what level artifacts are soft enough not to be considered a deal-breaker. But this is why I sometimes hedge when asked at what absolute level distortion and other artifacts are no longer audible. To me, unofficially, once stuff is 60 dB below the music it's not a big problem even if it can be heard. To me, once stuff is 80 dB down it can't be heard at all regardless of masking. So to be safe I often say 100 dB down is enough to be totally transparent. Again, JJ can state specific levels with far more authority than I can! (All of the preceding addresses noise and artifacts only, not frequency response, which I "hedge" to 0.1 dB from 20 Hz through 20 kHz just to be safe.)
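The two numeric criteria in that paragraph reduce to a tiny predicate (my paraphrase, not Ethan's code; the figures are his deliberately safe margins, not audibility limits):

```python
def meets_hedged_criteria(artifact_floor_db, ripple_db):
    """Ethan's conservative rule of thumb as stated above: all noise and
    artifacts at least 100 dB below the signal, and frequency response
    flat within 0.1 dB from 20 Hz to 20 kHz."""
    return artifact_floor_db <= -100.0 and abs(ripple_db) <= 0.1

print(meets_hedged_criteria(-109.0, 0.05))  # -> True
print(meets_hedged_criteria(-61.0, 0.05))   # -> False
```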

Quote
If there's a class of audio component - and I mean anything - which your measurements say is "transparent", but can be ABX'd, then your measurement suite is incomplete and/or you've got the wrong measurements. IMO!

Sure. This is why I ask repeatedly from those who argue against me to show some examples of their own. Like any good scientist, I'm glad to change my opinion when presented compelling evidence.

Quote
Even without the possible subjective preference for distortion, I think it's a harder problem to take two sets of measurements, and say definitely "X sounds better than Y"

If the response is flat within 0.1 dB and the sum of all noise and artifacts is -100 dB, I'm confident calling a device transparent regardless of the nature of the artifacts.

--Ethan
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-25 16:13:32

So what's up with that delta 66?  some missing DC blocking caps or something?


Ethan, yes, you have just recently (in here, anyway) started making a new distinction between reproduction systems and production systems.  You were most assuredly implying no inherent difference previously.  But in any case, I'm glad you ARE disclaiming that implied equivalence now.

So, your interpretation of these graphs, is that they show that a listener should NEVER be able to hear any difference between source and soundblaster?  That's what you're inferring here, by presenting these graphs.  If a user does hear a difference, then where does that leave us?  that the differences in these plots are significant, and/or that there's another measurement that isn't in this group, that would account for the difference?

yes, yes, abx..mumble mumble, abx.
Title: AES 2009 Audio Myths Workshop
Post by: Woodinville on 2010-03-25 16:14:57
Reading over David's posts, I just thought of something.  I have been in the small backroom of a music store testing out a speaker system where at 2 on the amp I could get hearing damage.  But when using that system outdoors as the PA for a band, 10 on the amp was nowhere near loud enough - simply not enough amp + speaker power for outdoors.  However, would the distortions introduced by lossy codecs be more evident in this type of situation? Play a 128 kbps mp3 file through that system turned up to 10 outdoors - would you hear the lossy distortions at a reasonable distance from the speakers?


As level goes up, you are less sensitive to the frequency domain (over 70dB, that is) and MAYBE more sensitive to the time domain. MAYBE more sensitive. More to the point, you're overloading the whole thing.
Title: AES 2009 Audio Myths Workshop
Post by: googlebot on 2010-03-25 16:25:54
Ethan, yes, you have just recently (in here, anyway) started making a new distinction between reproduction systems and production systems.


This distinction was repeatedly made throughout the video.

So, your interpretation of these graphs, is that they show that a listener should NEVER be able to hear any difference between source and soundblaster?  That's what you're inferring here, by presenting these graphs.


That is not inferable from anything posted. Do you even know what the words mean that you are using? It can be inferred that the Delta has a much higher noise floor. The difference is within audible range. We are talking about a recording device here, so why in hell do people insist on damaging a track with noise and distortion already at recording time? Recording with a transparent system and adding noise later, where favored, seems to be a much better practice. A recording device shouldn't have any "sound" of its own at all. Insisting that it should is ignorant.
Title: AES 2009 Audio Myths Workshop
Post by: Ethan Winer on 2010-03-25 16:35:01
So what's up with that delta 66?  some missing DC blocking caps or something?

No idea, but it's 90+ dB down, and only below 2 Hz, so I'm not concerned.

Quote
Ethan, yes, you have just recently (in here, anyway) started making a new distinction between reproduction systems and production systems. You were most assuredly implying no inherent difference, previous.

Okay, then clearly the problem all along has been simple misreading on your part. I hope nobody minds if I link to the Womb to make a point. Back in January in Post #43 (http://thewombforums.com/showpost.php?p=242878&postcount=43) of the now-locked thread I distinguished between fidelity and euphonic qualities. Then later the same day in Post #46 (http://thewombforums.com/showpost.php?p=242906&postcount=46) I was as clear as humanly possible:

Quote
I'm not talking about getting a sound - I'm talking about not degrading the sound you like once you have it. My AES Audio Myths video lets you hear music after 20 passes through a $25 SoundBlaster card. I challenge anyone to do the same with a 1/2" Studer half-track and come out as good after 20 generations.

So dwoz, now that you see these links to what I said very early in the "Pathetic" thread, please tell me if you still think I have changed my stated position only recently.

Quote
So, your interpretation of these graphs, is that they show that a listener should NEVER be able to hear any difference between source and soundblaster?

I mostly avoid interpretation, preferring to present data as the basis for discussion. As I just wrote above, guys like JJ and other folks here are more expert than me at relating measured performance to what is audible. I will say that when my friend Grekim and I recorded his acoustic guitar through an Apogee 8000 and my $25 SB card at the same time, we both thought that the converters sounded basically the same.

Quote
If a user does hear a difference, then where does that leave us?  that the differences in these plots are significant, and/or that there's another measurement that isn't in this group, that would account for the difference?

Well, first we need a blind test to prove that "a user" can truly hear a difference after one SB generation. Then, if even one person can reliably pick out the copy, that means the threshold (for that person, anyway) is lower than the SB's degradation. I'm certain it doesn't mean there are more than four parameters!
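For reference, "reliably pick out the copy" in a blind test is normally judged with a simple binomial calculation. A minimal sketch (mine, not from any post in this thread) of the probability that a given score could come from guessing alone:

```python
from math import comb

def abx_p_value(correct, trials):
    """Probability of scoring at least `correct` out of `trials`
    ABX trials by guessing alone (chance = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 12 correct out of 16 trials is conventionally taken as "reliable"
print(round(abx_p_value(12, 16), 3))  # → 0.038
```

If that p-value is below the chosen significance level (0.05 is typical), the listener's threshold is, for that listener, below the device's degradation.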

--Ethan
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-25 17:54:17


and Ethan, I am MOST ASSUREDLY ALSO not talking about intentional distortion and effects introduced into the system.  I'm talking about the fidelity of the system.  I'm talking about how well sound was captured.

I'm simply NOT talking about intentional distortion, I never was, and you just keep saying it.

I'm pointing to the big gaping hole in the wall where the meteor came through, and you keep talking about whether or not the window is open.

As far as your invocation of the "generations" argument vis-à-vis the SoundBlaster versus the Studer... I think it is a false strawman.  First off, anyone who has ever used tape knows that cascading generations is to be avoided at all costs.  Otherwise, we'd have been doing non-linear editing and mixing a LONG time ago.  But the "soundblaster" problem is a two-fold problem.  The data storage and retrieval aspects are great, but it's the conversion process itself that's damaging.  The analogy is like a train with a tall smokestack entering a tunnel.  The smokestack is sheared off and damaged the first time through, but on all subsequent passes the train fits through the tunnel just fine.
Title: AES 2009 Audio Myths Workshop
Post by: Notat on 2010-03-25 17:54:33
in my opinion, you're baiting people with this approach.

Probably, but it's the truth.
Quote
Surely you recognize that one way people may want to "assess fidelity" is by how it subjectively sounds to them.

But that's not what fidelity is!

I think what I'll do is read this to indicate that you are not interested in adjusting your presentation style. I think your style limits your effectiveness but I think it is also fun for you.
Title: AES 2009 Audio Myths Workshop
Post by: googlebot on 2010-03-25 18:03:07
The data storage and retrieval aspects are great, but it's the conversion process itself that's damaging.


It is much less damaging than an analog Studer's conversion to a magnetic representation on tape. The measurements are clear proof. So what's your point?
Title: AES 2009 Audio Myths Workshop
Post by: greynol on 2010-03-25 18:03:30
it's the conversion process itself that's damaging.  The analogy is like a train with a tall smokestack entering a tunnel.  The smokestack is sheared off and damaged the first time through, but on all subsequent passes the train fits through the tunnel just fine.

And you plan on demonstrating that removal of this smoke is audible, how?

I do see a smokestack around here, and it's not from a train.
Title: AES 2009 Audio Myths Workshop
Post by: Ethan Winer on 2010-03-25 18:23:34
and Ethan, I am MOST ASSUREDLY ALSO not talking about intentional distortion and effects introduced into the system.  I'm talking about the fidelity of the system.  I'm talking about how well sound was captured.

Okay, just to be perfectly clear, you still refuse to acknowledge that I distinguished between "what some people think sounds better" and "what is most accurate" even after showing you links to my posts from 3 months ago? And it's still your position that I have only recently "started making a new distinction between reproduction systems and production systems?" And MM says that I'm the one who needs to post a retraction!

Quote
As far as your invocation of the "generations" argument vis-à-vis the SoundBlaster versus the Studer... I think it is a false strawman.

Really? Why? If a medium sounds fairly clean after one generation, but you need to assess the degradation anyway using only ears, why is a multi-generation test not suitable for both the goose and the gander?

Quote
But the "soundblaster" problem is a two-fold problem.  The data storage and retrieval aspects are great, but it's the conversion process itself that's damaging.

You have said that countless times. Yet now, three months later, Ethan is the only person who has ever shown data. And lots of data at that! Where is the data from dwoz showing the "damage" done by one generation through a SoundBlaster card? Where are dwoz's audio example files proving "The smokestack is sheared off and damaged the first time through?" If you'd spent 1/100th as much time doing some tests as you've spent posting about my AES video in all the forums, you'd be a lot more credible.

--Ethan
Title: AES 2009 Audio Myths Workshop
Post by: 2Bdecided on 2010-03-25 18:44:43
it's the conversion process itself that's damaging.  The analogy is like a train with a tall smokestack entering a tunnel.  The smokestack is sheared off and damaged the first time through, but on all subsequent passes the train fits through the tunnel just fine.

And you plan on demonstrating that removal of this smoke is audible, how?

I do see a smokestack around here, and it's not from a train.

LOL!

I also see the chances of getting anywhere with these "guaranteeing it's transparent" measurement thresholds disappearing down the tunnel too!

Though thank you Ethan for making a start.

Cheers,
David.
Title: AES 2009 Audio Myths Workshop
Post by: 2Bdecided on 2010-03-25 18:49:06
Quote
Quote
Surely you recognize that one way people may want to "assess fidelity" is by how it subjectively sounds to them.

But that's not what fidelity is!

I think what I'll do is read this to indicate that you are not interested in adjusting your presentation style. I think your style limits your effectiveness but I think it is also fun for you.
...but fidelity, as a word, has a meaning.

You can't just change that meaning simply because you want to call something "high fidelity" that clearly doesn't give out what you put in.

I mean, you're not seriously saying people can't ABX tape, are you?

Cheers,
David.
Title: AES 2009 Audio Myths Workshop
Post by: Ethan Winer on 2010-03-25 19:41:42
Schopenhauer's list: 38 ways to win arguments by cheating. The list is over 100 years old! (http://www.indiauncut.com/iublog/article/38-ways-to-win-an-argument-arthur-schopenhauer/)

Here's one strategy that's glaringly missing from the list:

When you are unable to defend your position, say "Go read a book, I don't have time to teach you the basics."

Someone did that to me at Gearslutz just a few minutes ago.

--Ethan
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-25 22:04:57
For the record, I try to avoid them.




Catch me if you can! ;-)
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-25 22:11:34
@Arnold,

usernaim has written three vital words that I didn't think to, but they are the crux of it: black box testing. That's what we need. If you mistake this for Schopenhauer's 38 stratagems, I really think we're lost.

As for "If I could just find a few good people who knew what a masking curve or an audibility curve was and how to apply it to a technical test report!" - I created an auditory model for the assessment of coded audio for my PhD - yet there are several people here I'm not worthy to wash the feet of  - so I don't think we're lacking in this respect.


Lacking in theoretical education, or lacking in application of theory to a practical situation?

What are the results of applying what you know about masking and the variable sensitivity of the ear with frequency to the rightmark curves you made? Presume that FS = 90 dB.

Quote
"You're asking for a lot of free consulting at a fairly high level of competence." - which is strange, because I thought nailing transparency in audio components was your life's work - certainly the reason for posting so much on the net.


My true life's work has nothing to do with any of the above.

Quote
It's not going to benefit me. I don't work in audio any more. It's just an interest for me now.

I suspect (and I'm saying this out of sadness, rather than to bait you) that it falls into the category of "too hard", so you're not willing to attempt it.


Audio is a very large area - just because one works in audio doesn't mean that applying things like masking is in the day's work.
Title: AES 2009 Audio Myths Workshop
Post by: John Eppstein on 2010-03-26 00:30:40
...When people using a conventional testing methodology such as ABX say that they can find no perceptual difference between an inexpensive consumer quality converter and a $10,000 mastering converter but an overwhelming majority of professional engineers can pick out work done with each (and unanimously prefer the professional unit)

Those sound "engineers" are not real engineers. They are not likely to have engineering training; otherwise they would trust ABX instead of their gut feelings.

...that tells me that there's something wrong with the testing methodology. (I'm not saying that ABX is an invalid or unuseful tool, I'm just saying that it has limitations...

That is the same as saying the world is flat.


Well, with all due respect I'd say that you're in the position of the Catholic Church insisting that the sun goes around the earth and telling Galileo that he's full of it.

All tools have limitations. We need to understand what they are.

You know, it's interesting - in our private correspondence J_J pretty much agrees with me on this. Would you impugn HIS qualifications?
Title: AES 2009 Audio Myths Workshop
Post by: googlebot on 2010-03-26 00:32:26
As 2bdecided said, exact masking thresholds wouldn't be needed. Digital audio has become so good that quite hefty safety margins could probably be tolerated without necessarily excluding too much gear. So why not just start and juggle some numbers? 2bdecided started with -120 dB, Ethan could live with -100 dB. What would be (not the maximum possible but just) a safe translation to FR, THD, IMD, IR, etc.?
Title: AES 2009 Audio Myths Workshop
Post by: John Eppstein on 2010-03-26 00:45:45
I think that in some ways we're attempting to do the equivalent of brain surgery with a dull pocketknife. When people using a conventional testing methodology such as ABX say that they can find no perceptual difference between an inexpensive consumer quality converter and a $10,000 mastering converter but an overwhelming majority of professional engineers can pick out work done with each (and unanimously prefer the professional unit) that tells me that there's something wrong with the testing methodology. (I'm not saying that ABX is an invalid or unuseful tool, I'm just saying that it has limitations.)

When they make such claims without an ABX test, it would seem to me the "engineers" are talking out of their arse. If the difference is so obvious, it would be no problem to do it double-blind, no? The only limitation of ABX seems to be that it doesn't give "engineers" and their followers their special feeling of being better than the rest.

Quote
We do not throw out observation simply because it doesn't agree with conventional wisdom, especially when conventional wisdom is to a large degree based on simplification.

We do this all the time; it's called the placebo effect. Not all observations are equal.

Quote
The Catholic Church tried that with Galileo.

That's funny from someone who started falsely preaching about the limitations of ABX. Make no mistake about it, you are the church here.

Quote
A real scientist tries to find out those things that he doesn't know. He doesn't simply point to the established body of knowledge and treat it like scripture. That type of person is a pedant, not a scientist.

I think you have shown here you don't know the first thing about scientific method.


You may think what you want, it's a free country. I'm not denying you the right to practice whatever religion you see fit - just don't try to sell it to me as science. I don't believe in creationism or astrology either. Or the assertion that tying little bags of pretty rocks to your speaker cables will make your stereo sound better. Although if you happen to believe that I've got a lot of scrap opal for sale.

Understand, EVERYTHING has limitations. Electron microscopes have limitations. The Hubble space telescope has limitations. My Tektronix scope has limitations. Why would you possibly believe that ABX is the only measuring tool that doesn't?

A real scientist, such as J_J (whom I have a great deal of respect for) is interested in discovering what the limitations are and devising means of surpassing them. He doesn't go around denying evidence just because it doesn't agree with his preconceptions. He does, however, think I'm kind of silly for discussing it with religious fanatics.
Title: AES 2009 Audio Myths Workshop
Post by: John Eppstein on 2010-03-26 00:59:32
BTW, did you recap the power supply and do the bias adjustment on the Threshold? Because if you didn't you didn't give the amp a fair evaluation.


That would probably be a TOS 8 violation.


Why would applying proper maintenance to the amplifier be a TOS 8 violation? Do you understand what "adjusting the bias" means? Do you understand why replacing electrolytic capacitors every few years is necessary to retain performance?

Do you really think that comparing a high quality device that is old, needs repair, and is operating out of spec to a cheaply built device that is virtually brand new is a fair test?

A new Camry will beat a Ferrari if the Ferrari hasn't had a tune up or oil change in 10 years. In fact, the Ferrari probably won't even start.
Title: AES 2009 Audio Myths Workshop
Post by: John Eppstein on 2010-03-26 01:04:51
However, would the distortions introduced in lossy codecs be more evident in this type of situation? Play a 128kbps mp3 file through that system turned up to 10 outdoors, would you hear the lossy distortions at a reasonable distance from the speakers?
In the end it's the SPL at the ear that could make the difference, not necessarily the power of the PA system. You can also go pretty loud with headphones.
I expect more effect from the room acoustics (reverberation) on the masking properties of the codec. It should be possible to simulate that at home with a reverberation (e.g. convolution) plugin.


Um, not exactly. You'll have far less (if any) reverberation outdoors than you will in any enclosed environment short of an anechoic chamber. Outdoors you MIGHT have an echo if there's a large wall or mountain behind you, but actual reverberation? No. (Reverberation is multiple delays, acoustically multiplexed by multiple reflective paths in an enclosed space. That doesn't exist outdoors.)
Title: AES 2009 Audio Myths Workshop
Post by: John Eppstein on 2010-03-26 01:13:58
...snip stuff about valves/tubes...

The phenomenon you are referring to is known as tube microphonics and is caused by defective tubes...

Thanks John.

My point was that these measurements or characteristics are supposed to be defining audio equipment - or at least defining a threshold beyond which we can be sure it's transparent.

For the purposes of this discussion, I don't care whether it's because the tubes are broken, or they all do that - I care about whether the defined measurements catch this fault.

Since no one has been brave enough to properly define the measurements yet, we can't be sure.

Cheers,
David.


In the case of tube microphonics the effects will show up in measurements of both harmonic distortion and signal to noise. (They won't function as reverb because the effects will center around a fairly sharp peak caused by the mechanical resonance of the faulty tube. It'll be more like exciting a resonance string on a sitar - a sympathetic vibration.)

However I still maintain that it's not a fair test of the equipment, as it's malfunctioning.
Title: AES 2009 Audio Myths Workshop
Post by: John Eppstein on 2010-03-26 01:24:50
the metal in the valves sings along with the music ... tap the valves, you can hear the tapping through the speaker.

I suppose you could call that reverb, but I call it ringing because it has a single dominant frequency. Either way, it's one very good reason to avoid tubes in all audio gear.


--Ethan


I would say that it's a very poor reason to avoid tubes in audio gear.

It is a very good reason to keep your audio gear properly maintained and to avoid defective parts.

If you can't maintain your car (yourself or with the help of a mechanic) you shouldn't own one. Audio equipment is the same way. Particularly if it's professional audio equipment. (although I suspect that for a lot of people on this site that's not really the class of gear we're talking about?)

And no, it's not reverb, which consists of acoustical reflections in a medium. This is resonance. Ethan, of all people I would expect you to understand the difference.
Title: AES 2009 Audio Myths Workshop
Post by: Notat on 2010-03-26 01:35:42
You can't just change that meaning simply because you want to call something "high fidelity" that clearly doesn't give out what you put in.

I mean, you're not seriously saying people can't ABX tape, are you?

Sure, they can tell the difference.

The next question is which would they label high-fidelity? For me "high-fidelity" brings to mind my grandfather's open reel tube rig. That's the sound and era I associate with "high-fidelity".

Why not be more explicit and ask which has more "accurate reproduction"?
Title: AES 2009 Audio Myths Workshop
Post by: ExUser on 2010-03-26 01:39:33
The next question is which would they label high-fidelity? For me "high-fidelity" brings to mind my grandfather's open reel tube rig. That's the sound and era I associate with "high-fidelity".

Why not be more explicit and ask which has more "accurate reproduction"?
Because high fidelity means accurate reproduction.
Title: AES 2009 Audio Myths Workshop
Post by: John Eppstein on 2010-03-26 01:53:30
There's an old saying that to a sufficiently uneducated mind, modern technology appears to be magic. A corollary seems to be that to a sufficiently uneducated mind, modern technology appears to be stupid.

FYI it's not an "old saying" - it's a quote from the great writer and futurist Sir Arthur C. Clarke. And the proper quotation is:
Quote
Any sufficiently advanced technology is indistinguishable from magic.

Please note that Clarke said nothing about "uneducated minds".

http://www.quotationspage.com/quote/776.html (http://www.quotationspage.com/quote/776.html)

Your corollary would appear to be your own nonsense.
Title: AES 2009 Audio Myths Workshop
Post by: John Eppstein on 2010-03-26 02:06:30
Ah, but in the case of a pipe organ you're looking at the wrong thing.


Says who?

I was up in some pipe organ chambers last night, and what I saw belied just about everything that you say.


Quote
In the case of a pipe organ the pipes do not function as single units, they are part of an array inside a tone cabinet.


Incorrect. Some pipes are in the open, and some are in cabinets. Furthermore, the cabinets are generally at least partially open in actual use.

Furthermore, my discussion was of bass tones and subwoofers, and the corresponding pipes in a pipe organ are always out in the open.

Finally, the purpose of the cabinets is to be a sort of acoustic EFX box, IOW they are there to intentionally and discretionarily distort the sound. That puts your discussion of them in the same category as someone who complains about the poor frequency response of tone controls when placed off-center.


Arnold, you have to understand what you're looking at.

As I explained, the "exposed pipes" in an organ are in a chamber that consists of the room itself, as would be required by the long wavelengths involved. The principles I discussed are still valid - IM exists. (Perhaps not if the organ were installed in an open field, but how many such installations do you know of?)

Your problem is that you're defining the system to suit your own purposes, which in this case involves ignoring part of it.

Kind of like trying to define a stringed instrument without the body... Comprenez? Pipes = strings, room = body?

Please reread my previous post.

BTW, the purpose of the cabinets is to provide a means of volume control, as well as a resonant environment for the particular pipe grouping.
Title: AES 2009 Audio Myths Workshop
Post by: John Eppstein on 2010-03-26 02:14:08
The phenomenon you are referring to is known as tube microphonics and is caused by defective tubes


This is incorrect. Virtually all tubes, such as those commonly used in legacy audio and for EFX, are far, far more microphonic than their SS equivalents.  Tubes need not be defective to be microphonic. There's a reason why shock mounts have been commonly used with tubes in critical applications all along.



Wrong. Microphonic tubes are defective. The unfortunate fact is that the vast majority of tubes are defective to some degree. As to whether the defects are significant depends on the application. Since companies do not allow the return of tubes once they've been plugged in we have to make do with what we can get.
Title: AES 2009 Audio Myths Workshop
Post by: Woodinville on 2010-03-26 03:21:24
The next question is which would they label high-fidelity? For me "high-fidelity" brings to mind my grandfather's open reel tube rig. That's the sound and era I associate with "high-fidelity".

Why not be more explicit and ask which has more "accurate reproduction"?
Because high fidelity means accurate reproduction.


Yeah, but accurate to what?  No, that's not an idle question. With electronics, or a codec, the answer is clear. With respect to an entire chain from performer to listening room, not as much.
Title: AES 2009 Audio Myths Workshop
Post by: John Eppstein on 2010-03-26 06:10:32
Ethan, how can you say that a $25 converter card that only publishes specs at 1kHz (presumably because the response at other frequencies is all over the map) is better quality than my $100,000 Studer that has very tight published (and verified) specs from 20 Hz to 20kHz?

So you haven't watched my AES Audio Myths Workshop (http://www.youtube.com/watch?v=BYTlN6wjcvQ) video either? At 41:28 it shows specs side by side for a typical consumer sound card versus a Studer A-810 recorder. Guess which wins in every single category? Now, you could argue that the sound card specs are not very complete, and you'd be right. But what part of 109 dB s/n versus 74 dB (best case for the Studer) is confusing? My recent simple test with sine waves also shows a lot of info all at once in just a few FFT graphs, repeated here below for your convenience. The top graph shows the noise for one record/play generation, and the lower series of graphs shows distortion and noise for the original test tones and two sound cards.

--Ethan

(http://www.ethanwiner.com/misc-content/sound_card_sb_noise.gif)

(http://www.ethanwiner.com/misc-content/sound_card_distortion_corrected.gif)



Which is supposed to prove exactly what?

What do sine waves have to do with real world audio signals?

And frankly, I'm glad to put up with a little bit of noise if the recorded audio sounds better. That's one of the problems with defining THD as a component of S/N, BTW. The S/N spec doesn't actually tell you anything at all about how it SOUNDS.

Measurements are meaningless if you don't know how to interpret them.

And aren't "charts and graphs" a TOS #8 violation?
Title: AES 2009 Audio Myths Workshop
Post by: John Eppstein on 2010-03-26 06:12:53
I'm not saying that the electronic measurement equipment isn't good enough - I'm sure it can measure the signal quite well. The problem is that we don't know how to interpret the measurements properly and in some cases we may not understand what needs measuring.

Who is this "we" that you speak of?

--Ethan


All of us. That includes you.
Title: AES 2009 Audio Myths Workshop
Post by: stephanV on 2010-03-26 06:40:02
You may think what you want, it's a free country. I'm not denying you the right to practice whatever religion you see fit - just don't try to sell it to me as science. I don't believe in creationism or astrology either. Or the assertion that tying little bags of pretty rocks to your speaker cables will make your stereo sound better. Although if you happen to believe that I've got a lot of scrap opal for sale.

Well good for you!

Quote
Understand, EVERYTHING has limitations. Electron microscopes have limitations. The Hubble space telescope has limitations. My Tektronix scope has limitations. Why would you possibly believe that ABX is the only measuring tool that doesn't?

I'm not saying that ABX does not have limitations, but it does not have the limitation that you seem to attribute to it: a clear difference heard in a sighted situation that magically disappears during an ABX test.

Quote
A real scientist, such as J_J (whom I have a great deal of respect for) is interested in discovering what the limitations are and devising means of surpassing them. He doesn't go around denying evidence just because it doesn't agree with his preconceptions. He does, however, think I'm kind of silly for discussing it with religious fanatics.

So when are you going to provide some real arguments that are not ad hominems and data that is not anecdotal? Or are you going to keep behaving like a religious fanatic? I think I already know the answer.
Title: AES 2009 Audio Myths Workshop
Post by: Woodinville on 2010-03-26 07:28:59
Ethan, you need much longer analysis windows on your spectrum plots, especially those at low frequencies. Also, for higher frequencies you might use uniform frequency scales to make looking for harmonics easier.


Also, try this, make a set of tones of 250Hz + n * 500Hz (for n integer).  Use them all at once, do not overload, and plot the output, looking for anything at 500Hz.
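That multitone recipe can be sketched in a few lines of Python (the mild x-squared term stands in for a hypothetical device under test; it is not from the post). The stimulus tones sit at odd multiples of 250 Hz, so any energy appearing at 500 Hz must be distortion, not stimulus:

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs                     # one second -> 1 Hz FFT bins
freqs = [250 + n * 500 for n in range(8)]  # 250, 750, ... 3750 Hz
x = sum(np.sin(2 * np.pi * f * t) for f in freqs)
x /= np.max(np.abs(x))                     # normalize, do not overload

# Hypothetical device under test: a mild even-order nonlinearity
y = x + 0.01 * x ** 2

spec = np.abs(np.fft.rfft(y * np.hanning(len(y)))) / len(y)
db = 20 * np.log10(spec / spec.max() + 1e-12)
print(f"level at 500 Hz: {db[500]:.1f} dB re the stimulus tones")
```

With a clean pass-through the 500 Hz bin stays down at the numerical floor; any nonlinearity in the chain lifts it out of the gap immediately.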
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-26 08:15:38
As far as your invocation of the "generations" argument vis-à-vis the SoundBlaster versus the Studer... I think it is a false strawman.


No it is not. In fact you provide what I find to be convincing evidence to support Ethan's claims right here:

Quote
First off, anyone who has ever used tape knows that cascading generations is to be avoided at all costs.  Otherwise, we'd have been doing non-linear editing and mixing a LONG time ago.


Exactly. That tells me that the Studer analog tape machine is a very troubled piece. It may still sell for over $100,000 but you couldn't get me to use it as compared to my favorite digital audio interfaces that cost pennies on the dollar, no way.

Quote
But the "soundblaster" problem is a two-fold problem.


No. Ethan's demo shows that there is in fact little if any problem with it at all.

If you actually listened to Ethan's generations demos, you'd realize that it can do what the Studer can't do - handle re-recording running into more generations than was ever even dreamed of back in the days when analog tape was all we had.

Quote
The data storage and retrieval aspects are great, but it's the conversion process itself that's damaging.


That would be a TOS 8 violation. Persist in it and I'll cheer while the moderators run you out on a rail. You haven't done your homework. You are just reciting what you've been programmed to believe.

Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-26 08:23:24
Ethan, you need much longer analysis windows on your spectrum plots, especially those at low frequencies. Also, for higher frequencies you might use uniform frequency scales to make looking for harmonics easier.


Which explains why the 20 Hz plots have so much higher skirts than the ones for higher frequencies. I believe that a different choice of windowing could also help. Anything but Hamming!  Hanning, Blackman, Blackman-Harris...
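The effect of the window choice is easy to demonstrate numerically. A sketch (mine, not from the thread) comparing the far-out skirts of three windows on a deliberately off-bin tone:

```python
import numpy as np

fs, n, f0 = 48000, 4096, 1000.3      # off-bin tone exaggerates leakage
t = np.arange(n) / fs
tone = np.sin(2 * np.pi * f0 * t)

results = {}
for name, w in [("rectangular", np.ones(n)),
                ("hamming", np.hamming(n)),
                ("blackman", np.blackman(n))]:
    spec = np.abs(np.fft.rfft(tone * w))
    spec /= spec.max()
    # worst residual "skirt" level well away from the tone (above 3 kHz)
    results[name] = 20 * np.log10(spec[int(3000 * n / fs):].max() + 1e-16)

for name, level in results.items():
    print(f"{name:11s} skirt  {level:7.1f} dB")
```

Hamming's far sidelobes fall off slowly because the window doesn't reach zero at its edges; Blackman's drop like a stone, which is why the skirts on a low-frequency plot clean up so dramatically.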

Quote
Also, try this, make a set of tones of 250Hz + n * 500Hz (for n integer).  Use them all at once, do not overload, and plot the output, looking for anything at 500Hz.


And now folks, we bring out the multitones. ;-)  Useful little buggers, they are!  I literally built www.pcavtech.com on them.
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-26 08:46:57
Ethan, how can you say that a $25 converter card that only publishes specs at 1kHz (presumably because the response at other frequencies is all over the map) is better quality than my $100,000 Studer that has very tight published (and verified) specs from 20 Hz to 20kHz?


<snip technical test results showing how some fairly inexpensive cards perform at frequencies other than 1 kHz>

Which is supposed to prove exactly what?


That despite your unwarranted fears John, the fairly inexpensive cards perform well at frequencies other than 1 kHz. 

As JJ later points out, the 20 Hz plots would be a lot more impressive if Ethan had used a more suitable windowing technique. His software probably supports it.

Quote
What do sine waves have to do with real world audio signals?


That has been explained to you already. All audio is actually composed of sine waves. Fourier proved it nearly two centuries ago.  The basic idea behind using sine waves is that it only takes a few of them, sometimes only one or two, at strategically chosen frequencies, to probe around and find out how the equipment works with many concurrent sine waves, AKA music.  Finding distortion is a little like finding rotten teeth - you don't need to probe every square inch of every tooth. Some strategic probing in the places where the problems tend to accumulate, and you can make reasonable statements about the rest of the teeth.
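Fourier's result is easy to check numerically. A minimal sketch (the square-wave example is mine, not from the post) rebuilding a square wave from a handful of sine partials:

```python
import numpy as np

fs = 8000
t = np.arange(fs) / fs   # one second
# Build a square wave, then rebuild it from its first few sine partials
# (amplitudes 4/(pi*k) for odd k -- the standard Fourier series).
square = np.sign(np.sin(2 * np.pi * 100 * t))
partials = sum(np.sin(2 * np.pi * 100 * k * t) * 4 / (np.pi * k)
               for k in (1, 3, 5, 7, 9))

rms = np.sqrt(np.mean((square - partials) ** 2))
print(f"rms error with 5 partials: {rms:.3f}")
```

Adding more partials drives the RMS error toward zero, which is the whole justification for probing gear with sines: whatever it does to the sines, it does to the music.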

Quote
And frankly, I'm glad to put up with a little bit of noise if the recorded audio sounds better.


Whether or not you know it, that would make you a fan of MP3s. ;-)  Seriously, what the better MP3 encoders do is selectively corrupt music with noise. That shows up in the Rightmark tests of encoders that were recently shown here.

Quote
That's one of the problems with defining THD as a component of S/N, BTW. The S/N spec doesn't actually tell you anything at all about how it SOUNDS.


Except it does. The four parameters, if interpreted wisely, tell you exactly how things sound. The easiest way to interpret them is to use them to determine which components that have no sound - the ones that are sonically transparent.

For example, if the dynamic range is >100 dB, the SNR is >100 dB, the frequency response is within +/- 0.1 dB, the THD and IM for all possible combinations of signals is 100 dB down or more (0.001% or less), and it's a simple component like an amp or audio interface, then the component is sonically transparent - slam dunk!
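Those thresholds are mechanical enough to write down as a checklist. A minimal sketch using exactly the numbers from the paragraph above (the structure and field names are mine):

```python
from dataclasses import dataclass

@dataclass
class Measurements:
    dynamic_range_db: float   # must exceed 100 dB
    snr_db: float             # must exceed 100 dB
    fr_deviation_db: float    # +/- deviation, 20 Hz - 20 kHz, under 0.1 dB
    thd_db: float             # at or below -100 dB (0.001 %)
    imd_db: float             # at or below -100 dB, all signal combinations

def is_transparent(m: Measurements) -> bool:
    """Apply the 'slam dunk' thresholds to a simple component."""
    return (m.dynamic_range_db > 100 and m.snr_db > 100
            and m.fr_deviation_db < 0.1
            and m.thd_db <= -100 and m.imd_db <= -100)

print(is_transparent(Measurements(109, 109, 0.05, -105, -102)))  # → True
```

A device that fails any one line isn't necessarily audible, of course; the point of the thresholds is that passing all of them guarantees inaudibility with margin to spare.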

Quote
Measurements are meaningless if you don't know how to interpret them.


But, some of us know how to interpret them. All one has to do is apply general knowledge about the sensitivity of the human ear (Fletcher-Munson) and masking to the FFT plots and it is pretty obvious.

Quote
And aren't "charts and graphs" a TOS #8 violation?


No. Read TOS 8 - it's about making claims about audible differences. If you want to claim that you hear no differences, no supporting proof is required.

BTW John you should be the last one to complain about TOS 8 - you've been getting a monster free pass despite your many infractions of it.
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-26 08:53:51
The phenomenon you are referring to is known as tube microphonics and is caused by defective tubes


This is incorrect. Virtually all tubes, such as those commonly used in legacy audio and for EFX, are far, far more microphonic than their SS equivalents.  Tubes need not be defective to be microphonic. There's a reason why shock mounts have been commonly used with tubes in critical applications all along.



Wrong. Microphonic tubes are defective.


Another logical flaw. Excessively microphonic tubes are defective. All tubes are microphonic to a greater degree than even mediocre SS.

Actually, just about everything is microphonic if you measure sensitively enough. I've measured the microphonics of wire, for example. Did you know that it is possible to unintentionally make wire that is more microphonic than some tubes?

Quote
The unfortunate fact is that the vast majority of tubes are defective to some degree. As to whether the defects are significant depends on the application. Since companies do not allow the return of tubes once they've been plugged in we have to make do with what we can get.


John, in the midst of this you actually said something right for a change. Whether the defects are significant depends on the application. The only reason why tubes ever were acceptable for audio is that audio is basically meatball surgery.
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-26 09:02:02
I mean, you're not seriously saying people can't ABX tape, are you?


We *know* that people can ABX tape:

A friend of mine ABXs analog tape and wins. Really pretty good analog tape! (http://home.provide.net/~djcarlst/abx_tapg.htm)

The point is that unlike some people, other people have done their homework. I'll bet money that so far, Dwoz and 2Bdecided have *not* actually ABXed Ethan's files.
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-26 09:07:45
If the response is flat within 0.1 dB and the sum of all noise and artifacts is -100 dB, I'm confident calling a device transparent regardless of the nature of the artifacts.


I'll meet your 100 and  0.1, and bid 80 and 0.2. ;-)

If I get to be picky about  some things, even 70 and 3 can work.

You know vinyl generally measures just horrible, and can sound almost OK. Not that it can pass ABX.

That's because people had a century or so to hide a lot of the decay well below the "gum line".
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-26 09:20:54
As 2bdecided said, exact masking thresholds wouldn't be needed. Digital audio has become so good, that quite hefty safety margins could probably be tolerated without necessarily excluding too much gear. So why not just start and juggle some numbers? 2bdecided started with -120dB, Ethan could live with -100dB. What would be (not the maximum possible but just) a safe translation to FR, THD, IMD, IR, etc.?


Here are some reasonable assumptions:

(1) Clipping and broken or otherwise pathological equipment and other situations are avoided.

(2) Equipment tends to slide into the mud, not drop off a cliff. You rarely find 0.01% THD at 1 kHz and 10% THD at 1,001 Hz, all other things being equal, especially if you avoid clipping.

(3) Linear and nonlinear distortion tend to be worse at the frequency extremes, where, fortunately, the ear tends to be a lot less sensitive.

(4) We are actually listening to music for enjoyment, not trying to collect pathological musical selections to make a point.

(5) People tend to operate equipment well below maximum power and SPL capabilities, and in rooms with some background noise and suboptimal acoustics.

Then, relaxed specs like 80 dB dynamic range and +/- 1 to 3 dB frequency response can be sonically transparent.

BTW 80 dB DR inherently presumes that all IM and THD are 80 dB down or better.
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-26 09:39:47
It is my contention that the state of the art of audio measurement and the state of the science of human audio perception at this time are not accurate enough to really adequately quantify what we're discussing.


John, it is easy to show that your ideas about state of the art of audio measurement and the state of the science of human audio perception  are from the stone age.


Quote
I'm not saying that the electronic measurement equipment isn't good enough - I'm sure it can measure the signal quite well.


And in ways that it is quite clear you have no working knowledge of. At least you've done a good job of hiding it and ignoring it when pointed out to you.

Quote
The problem is that we don't know how to interpret the measurements properly and in some cases we may not understand what needs measuring.


Again John, at your apparently highly limited level of understanding, that might be true.

There seem to be a lot of misapprehensions about what we don't know about these things - apparently even from people who should know better, as in they earned PhDs in related areas.

Quote
In terms of audio perception some aspects are fairly well understood, but other aspects - specifically how the brain handles perceptual information and how the perceptual systems encode information for transmission from the primary sensory organs to the brain are currently the subject of some very interesting research.


Research is ongoing, but a lot is already known. If we wait for the research to stop, as you seem to be advising, we'll die without doing the good things we can do now.

Quote
I think that in some ways we're attempting to do the equivalent of brain surgery with a dull pocketknife.


Yes, there are a lot of people massively pontificating here who are the equivalent of dull knives.

Quote
When people using a conventional testing methodology such as ABX say that they can find no perceptual difference between an inexpensive consumer quality converter and a $10,000 mastering converter but an overwhelming majority of professional engineers can pick out work done with each (and unanimously prefer the professional unit) that tells me that there's something wrong with the testing methodology. (I'm not saying that ABX is an invalid or unuseful tool, I'm just saying that it has limitations.)


John, you are picking your pros from a nest of bozos. There are pros who follow ABX very carefully. They just don't necessarily post a lot at places like gearslutz and the womb.  Places like the womb very obviously want to keep inconvenient truths away from its inmates.  They ban people who tell inconvenient truths. Sometimes after only 2 posts.

Quote
Science is supposed to be based on observation. We observe the world around us. We study our observations. We construct hypotheses explaining our observations and compare them to the behavior of the world; when they appear to fit they become standard (more or less) theory. We conduct tests of the theory and, as our technology progresses enough to provide sufficiently accurate tests, we prove the theory and it becomes law, or, unproven, it remains theory until a better explanation comes along.


Right, which was the thinking behind the development of ABX.

Quote
We do not throw out observation simply because it doesn't agree with conventional wisdom, especially when conventional wisdom is to a large degree based on simplification. The Catholic Church tried that with Galileo.


The guys who are still worshipping the $100,000 Studers  and banning people who tell inconvenient truths are the modern day equivalent of the 15th century pope.

Quote
A real scientist tries to find out those things that he doesn't know. He doesn't simply point to the establish body of knowledge and treat it like scripture. That type of person is a pedant, not a scientist.


Even worse are the people who don't know what the body of knowledge is, and treat their ignorance like scripture. Welcome to the audiophile and pro audio forums that either ban discussion of ABX or ridicule it incessantly when it is mentioned.

Quote
(Please note that I am not endorsing ghosties, faeries, wizards, or  expensive sculptures that make your dentist's stereo sound better......)


No, we're talking about expensive sculptures that make some recording engineers' LPs sound better...

You know there is a school of thought that says that Studer puts ridiculous prices on some pieces of audio sculpture that people demand they make, simply because they want to stop the madness, and they *are* a business.
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-26 10:07:45
Schopenhauer's list: 38 ways to win arguments by cheating. This list is over 100 years old! (http://www.indiauncut.com/iublog/article/38-ways-to-win-an-argument-arthur-schopenhauer/)

Here's one strategy that's glaringly missing from the list:

When you are unable to defend your position, say "Go read a book, I don't have time to teach you the basics."



Missing?

Maybe not said explicitly, but there by implication.

14 Try to bluff your opponent.

15 If you wish to advance a proposition that is difficult to prove, put it aside for the moment.

18 If your opponent has taken up a line of argument that will end in your defeat, you must not allow him to carry it to its conclusion.

25 If your opponent is making a generalization, find an instance to the contrary.

28 When the audience consists of individuals (or a person) not expert on the subject, make an invalid objection so that your opponent seems defeated in the eyes of the audience.

29 If you find that you are being beaten, you can create a diversion--that is, you can suddenly begin to talk of something else, as though it had a bearing on the matter in dispute.

30 Make an appeal to authority rather than reason.

Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-26 10:21:38
BTW, did you recap the power supply and do the bias adjustment on the Threshold? Because if you didn't you didn't give the amp a fair evaluation.


That would probably be a TOS 8 violation.


Why would applying proper maintenance to the amplifier be a TOS 8 violation?



Lack of substantiation of a claim of improved sound quality.

Quote
Do you understand what "adjusting the bias" means?


LOL! I was adjusting bias in 1960. I built the amp from scratch from my own design. You really don't know who you are talking to, do you? ;-)

Have you ever done sensitivity tests on bias adjustments? IOW, adjusted and misadjusted bias to see what effect it had on audible and measurable performance?

I have done this, of course.

Believe it or not, changing the oil does not make a difference in how a car runs, if the oil is still OK.

Quote
Do you understand why replacing electrolytic capacitors is necessary every few years to retain performance?


That is a patently false claim. 

I have equipment with electrolytics that is > 20 years old that meets and even exceeds original spec. Sounds good, too. 

However, on a few rare occasions I've also had to recap equipment that grotesquely failed to meet spec after 2 years, and sounded pretty bad. The bad sound put the gear on the test bench and the test bench told all. The caps were at 10% or less of marked capacitance.

Quote
Do you really think that comparing a high quality device that is old, needs repair, and is operating out of spec to a new, cheaply built device that is virtually brand new is a fair test?


Of course not. That is both insulting and also an excluded-middle argument.

Quote
A new Camry will beat a Ferrari if the Ferrari hasn't had a tune up or oil change in 10 years. In fact, the Ferrari probably won't even start.


If you store a car properly, the oil will be just fine after a decade or more. The most common reason why cars don't start easily after long periods of storage is that the fuel drains or evaporates from places like the carb.

Title: AES 2009 Audio Myths Workshop
Post by: 2Bdecided on 2010-03-26 10:22:40
The next question is which would they label high-fidelity? For me "high-fidelity" brings to mind my grandfather's open reel tube rig. That's the sound and era I associate with "high-fidelity".

Why not be more explicit and ask which has more "accurate reproduction"?
Because high fidelity means accurate reproduction.
Exactly.

Whereas Notat seems to be looking for an audio processor that makes things sound like a respected 1960s audio system. That processor could in fact be a respected 1960s audio system.

Point is (and it's been made 20 times already) that's outside the scope of this discussion. We're discussing things that (hopefully!) don't change the sound, not things that do.

If you're claiming your 1960s audio system more accurately reproduces the sound at its input than a digital recorder and solid state amplifier, then that's within this discussion. We should be able to test both, and see whether none, one, or both of them are "transparent". If one or both are transparent, we've answered the question.  In the case that neither are transparent, we may be able to say which is "least bad".

If you're claiming your 1960s audio system sounds nicer to you than a digital recorder and solid state amplifier, then that's outside the discussion.

It's not irrelevant - obviously when you are choosing your home stereo, it's probably highly relevant.

But for sanity, we have (IMO!) to determine whether things change the sound, or not. Then you can pick any sound changing things you like - and when you have the sound you want, you can add other non-sound-changing things without losing the sound you want.

Cheers,
David.
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-26 10:24:52
Let's see - a few posts back you said the Delta card had noise 90dB down below 2K. So let's take 20 iterations of that.
90,87,84,81,78
75,72,69,66,63 (oops, worse than the Studer already)
60,57,54,51,48
45,42,39,36,33
30,27,24,21,18

That's right, after 20 iterations the noise floor of your Delta card is 18dB down below 2kHz. I'd say that's pretty significant, wouldn't you? In fact, I'd say that performance is pretty poor - far from "transparent".


I take it that you still haven't done any ABX listening tests of Ethan's files. Doing file-based ABX on a computer is relatively easy. People who avoid it tell us implicitly about their real-world technical competence and intellectual curiosity. Anybody can pick numbers off of a plot.


Title: AES 2009 Audio Myths Workshop
Post by: 2Bdecided on 2010-03-26 10:25:49
Let's see - a few posts back you said the Delta card had noise 90dB down below 2K. So let's take 20 iterations of that.
90,87,84,81,78
75,72,69,66,63 (oops, worse than the Studer already)
60,57,54,51,48
45,42,39,36,33
30,27,24,21,18

That's right, after 20 iterations the noise floor of your Delta card is 18dB down below 2kHz. I'd say that's pretty significant, wouldn't you? In fact, I'd say that performance is pretty poor - far from "transparent".
Noise doesn't add like that.

If you want to learn how it really works, that's good. I'm sure someone can explain it (and the real-world test itself is pretty trivial, even if you don't want to work through the maths).

If you don't want to understand it properly - i.e. if you want to maintain your current level of ignorance and argue from that position - then you'll disqualify yourself from this discussion.

Cheers,
David.
Title: AES 2009 Audio Myths Workshop
Post by: 2Bdecided on 2010-03-26 10:35:38
<snip soundblaster plots>

Which is supposed to prove exactly what?

What do sine waves have to do with real world audio signals?

And frankly, I'm glad to put up with a little bit of noise if the recorded audio sounds better.
You can't find something to record that doesn't introduce noise or wreck the signal? How sad.

Maybe it sounds better to you because you like noise?

Quote
That's one of the problems with defining THD as a component of S/N, BTW. The S/N spec doesn't actually tell you anything at all about how it SOUNDS.

Measurements are meaningless if you don't know how to interpret them.
If there's an aspect of the sound blaster's performance that causes it to be ABXable, then this will show up in appropriate measurements.

FWIW to get these "decent" results, you have to use the right sound blaster card (quite an early one IIRC), in the right PC set-up, with the right drivers and software, at the "right" sample rate, and still keep away from 0dB FS.

If anyone thinks these measurements suggest that all cards from the Creative Sound Blaster range, in all circumstances, will measure and/or sound this good, they're totally wrong.

Cheers,
David.
Title: AES 2009 Audio Myths Workshop
Post by: 2Bdecided on 2010-03-26 10:43:50
What are the results of applying what you know about masking and the variable sensitivity of the ear with frequency to the rightmark curves you made? Presume that FS = 90 dB.
I didn't make them. I wouldn't assess audio codecs in this way. And now you're making me part of the measuring equipment! How on earth can that work?!


Anyway, I won't duck your question: Looking at the curves, my experience tells me (even without the labels) that it's gone through an MPEG-like filterbank and/or psychoacoustic model based noise addition process. Hence my experience tells me that sine wave tests are pretty much irrelevant in this context (they won't reveal the faults of the codec), therefore these graphs are pretty much irrelevant, and whether the codec under test is any good will have to be determined using another method altogether.


Anyway, googlebot keeps explaining the point very patiently and compactly, so I don't think I need to repeat it.


Some numbers have emerged in this thread. That's a good start.

So let's be more rigorous: what is the test signal, and how is it analysed?

Cheers,
David.
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-26 10:48:35
Let's see - a few posts back you said the Delta card had noise 90dB down below 2K. So let's take 20 iterations of that.
90,87,84,81,78
75,72,69,66,63 (oops, worse than the Studer already)
60,57,54,51,48
45,42,39,36,33
30,27,24,21,18

That's right, after 20 iterations the noise floor of your Delta card is 18dB down below 2kHz. I'd say that's pretty significant, wouldn't you? In fact, I'd say that performance is pretty poor - far from "transparent".


Noise doesn't add like that.

If you want to learn how it really works, that's good. I'm sure someone can explain it (and the real-world test itself is pretty trivial, even if you don't want to work through the maths).

If you don't want to understand it properly - i.e. if you want to maintain your current level of ignorance and argue from that position - then you'll disqualify yourself from this discussion.



Good point, David. I thought that he had picked those numbers off of some plot someplace. It appears that John picked those numbers from where the sun shines not. ;-)

The larger point is important. The SNR doesn't drop 3 dB every iteration, because after the first iteration the accumulated noise and the newly added noise are no longer equal. The 3 dB rule only works for equal-intensity noise sources.

For example, when the noise sources differ by 10 dB, the sum drops only a few tenths below the larger noise.

Here's the sequence from my noise sums spreadsheet, starting at 90 dB:

1 -86.990  (3.01 dB drop)
-85.229
-83.979
-83.010
-82.218
-81.549
-80.969
-80.458
-80.000
-79.586
-79.208
-78.861
-78.539
-78.239
-77.959
-77.696
-77.447
-77.212
-76.990
20 -76.778 dB  (ca. 13 dB drop total, but only a 0.212 dB drop for the final iteration.)
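For anyone who wants to check the table, a short Python sketch that power-sums one extra equal-level source per pass reproduces the numbers:

```python
import math

def accumulate_noise(floor_db, iterations):
    """Power-sum one extra equal-level, uncorrelated noise source per iteration.

    floor_db is the noise floor of a single pass (e.g. -90).
    Returns the combined level in dB after each iteration.
    """
    levels = []
    total_power = 10 ** (floor_db / 10)       # power of the first pass
    for _ in range(iterations):
        total_power += 10 ** (floor_db / 10)  # add one more equal source
        levels.append(10 * math.log10(total_power))
    return levels

levels = accumulate_noise(-90.0, 20)
print(round(levels[0], 3))   # -86.99  (first iteration: 3.01 dB rise)
print(round(levels[-1], 3))  # -76.778 (after 20 iterations: ca. 13 dB total)
```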

Title: AES 2009 Audio Myths Workshop
Post by: John Eppstein on 2010-03-26 10:59:55
Let's see - a few posts back you said the Delta card had noise 90dB down below 2K. So let's take 20 iterations of that.
90,87,84,81,78
75,72,69,66,63 (oops, worse than the Studer already)
60,57,54,51,48
45,42,39,36,33
30,27,24,21,18

That's right, after 20 iterations the noise floor of your Delta card is 18dB down below 2kHz. I'd say that's pretty significant, wouldn't you? In fact, I'd say that performance is pretty poor - far from "transparent".


Noise doesn't add like that.

If you want to learn how it really works, that's good. I'm sure someone can explain it (and the real-world test itself is pretty trivial, even if you don't want to work through the maths).

If you don't want to understand it properly - i.e. if you want to maintain your current level of ignorance and argue from that position - then you'll disqualify yourself from this discussion.



Good point, David. I thought that he had picked those numbers off of some plot someplace. It appears that John picked those numbers from of where the sun shines not. ;-)

The larger point is important. The SNR doesn't drop 3 dB every iteration, because after the first iteration the accumulated noise and the newly added noise are no longer equal. The 3 dB rule only works for equal-intensity noise sources.

For example, when the noise sources differ by 10 dB, the sum drops only a few tenths below the larger noise.

Here's the sequence from my noise sums spreadsheet, starting at 90 dB:

1 -86.990  (3.01 dB drop)
-85.229
-83.979
-83.010
-82.218
-81.549
-80.969
-80.458
-80.000
-79.586
-79.208
-78.861
-78.539
-78.239
-77.959
-77.696
-77.447
-77.212
-76.990
20 -76.778 dB  (ca. 13 dB drop total, but only a 0.212 dB drop for the final iteration.)


OK, my bad. It's 4AM here and I have insomnia... not thinking as clearly as I should be.

That Delta still has an awful lot of noise for an allegedly "perfect", "transparent", etc, device.

However, you guys seem to be missing at least one part of the equation - a good part of audibility has to do with the frequencies of the spurious products relative to the signal, and how they relate to it harmonically. The less harmonically related they are, the more audible they become. So the actual "noise floor" (including THD as part of S/N) can be a very misleading figure.
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-26 11:15:01
However, you guys seem to be missing at least one part of the equation - a good part of audibility has to do with the frequencies of the spurious products relative to the signal, and how they relate to it harmonically.


Now I didn't get a PhD flogging masking curves, but all of my readings of them say that harmonic relationships don't matter.

I think the reason for much of what we observe is that most musical sounds generate a lot of maskers, and they are often (but not always) harmonically related.

Therefore music generates a lot of maskers for distortion that is harmonically related, and not so much for distortion that is inharmonic.

And that would be one reason why I keep harping on IM - because IM generates a mix of distortion products that are both harmonic and inharmonic.

Quote
The less harmonically related they are, the more audible they become. So the actual "noise floor" (including THD as part of S/N) can be a very misleading figure.


Actually, we rarely depend on noise to mask harmonics. The most common masker of harmonics created by equipment faults is harmonics in the music. Remember that many instruments generate more harmonics than the fundamental!
Title: AES 2009 Audio Myths Workshop
Post by: 2Bdecided on 2010-03-26 11:42:32
Now I didn't get a PhD flogging masking curves, but all of my readings of them say that harmonic relationships don't matter.
Actually, if you get raw data on individuals for tone-on-tone masking, there's some really strange stuff there - dips and peaks. It's probably not "harmonic" in the context of this discussion. It's probably beat frequencies vs in-ear distortion products vs critical bandwidth.

Cheers,
David.
Title: AES 2009 Audio Myths Workshop
Post by: ExUser on 2010-03-26 13:00:27
Quote
Because high fidelity means accurate reproduction.
Soundblaster is therefore the "tiger woods" of the audio chain.
Sarcasm really has no place in this discussion. The argument doesn't centre around whether Sound Blaster is high-fidelity, it centres around whether Sound Blaster is high-fidelity enough. If we're looking for a good benchmark on what I consider high-fidelity, I would say something built around a Texas Instruments PCM1794A: http://focus.ti.com/docs/prod/folders/print/pcm1794a.html (http://focus.ti.com/docs/prod/folders/print/pcm1794a.html)

However, I am hardly a professional electrical engineer and there may be higher-fidelity solutions out there. This is just the best solution that I've been able to find so far in my never-ending quest for audio knowledge.

On a moderation note: If there are specific instances where you think posters are in breach of the terms of service, please use the Report button. It's not always easy for us to discern which violations you users have identified in the thread, and it helps us sort out public opinion. I am not particularly happy with all the ad hominem in this thread, but it is being employed by users that I tend to have a degree of respect for on an intellectual level, so my own approach is to wait for the flames to die down as they usually do.
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-26 13:34:48
Quote
Because high fidelity means accurate reproduction.
Soundblaster is therefore the "tiger woods" of the audio chain.
Sarcasm really has no place in this discussion. The argument doesn't centre around whether Sound Blaster is high-fidelity, it centres around whether Sound Blaster is high-fidelity enough. If we're looking for a good benchmark on what I consider high-fidelity, I would say something built around a Texas Instruments PCM1794A: http://focus.ti.com/docs/prod/folders/print/pcm1794a.html (http://focus.ti.com/docs/prod/folders/print/pcm1794a.html)


At this point it appears that the disagreement would be over word forms. The PCM1794 is probably the *highest* fidelity chip around or close to it. So you're not asking for high fidelity, you are asking for highest fidelity.

Based on its low-volume asking price of about $16, it would appear to be a likely candidate for a 2-channel computer audio interface in the $500 range.

I'm sure you know that as a device for listening to commercial recordings, it is a study in the futility of overkill.  It might have a logical place in some higher-end lab gear.  For general listening, everything after 100 dB is vanity. This one can be peaked up to 130-ish dB.

Also, the usual audiophool pubs will go on and on about how it has "chocolate-like" highs.

The nature of things is that in 3-5 years (if not today) the piece price will go down to something like $5 or less, and then it will show up in stuff in the under-$200 range.  A while after that, a commodity version of the same technology will be sold in large volumes for a buck or less.

Too bad you'd be forced to use this thing with grossly inferior audio gear such as your ears! ;-)


Title: AES 2009 Audio Myths Workshop
Post by: ExUser on 2010-03-26 15:21:05
I've split off some of the ad hominem back-and-forth into the Recycle Bin.

I'm not going to tolerate any of it in this thread from here on. Any post that even makes me think it could be taken personally will be binned with no consideration to its technical value.

C'mon guys, we can do better than this.
Title: AES 2009 Audio Myths Workshop
Post by: googlebot on 2010-03-26 15:53:06
(4) We are actually listening to music for enjoyment, not trying to collect pathological musical selections to make a point.

(5) People tend to operate equipment well below maximum power and SPL capabilties, and in rooms with some background noise and suboptimal acoustics.

Then, relaxed specs like 80 dB dynamic range and +/- 1 to 3 dB frequency response can be sonically transparent.


I agree with all points except those two. We live in times where >100 dB dynamic range comes free with entry-level mainboards, so it has become unnecessary to constrain a black-box specification to (what you call) "usual" use cases. The whole intention of the "black box proposal", as I understand it, is that a component could be called "transparent" with an "end of discussion" surety. Not limited to "usual" music, not limited to >30 dB listening environments. Transparent should mean transparent, whether you are a guy in your mansion's basement with a taste for academic, contemporary sound composition on $50,000 speakers or a regular guy enjoying country music in his living room. Allowing that may have been a problem in the past, because it would have meant that only (severely over-engineered) high-end gear would have passed the test, but I think the world has moved on since then. Calling anything in the 80 dB region transparent just opens the door for elitists to call it a poor man's spec, sufficient for mass-media consumption. But nowadays a poor man with good advice can buy gear that could be called transparent by much higher standards than that.
Title: AES 2009 Audio Myths Workshop
Post by: botface on 2010-03-26 17:20:42
(4) We are actually listening to music for enjoyment, not trying to collect pathological musical selections to make a point.

(5) People tend to operate equipment well below maximum power and SPL capabilties, and in rooms with some background noise and suboptimal acoustics.

Then, relaxed specs like 80 dB dynamic range and +/- 1 to 3 dB frequency response can be sonically transparent.


I agree with all points except those two. We live in times where >100 dB dynamic range comes free with entry-level mainboards, so it has become unnecessary to constrain a black-box specification to (what you call) "usual" use cases. The whole intention of the "black box proposal", as I understand it, is that a component could be called "transparent" with an "end of discussion" surety. Not limited to "usual" music, not limited to >30 dB listening environments. Transparent should mean transparent, whether you are a guy in your mansion's basement with a taste for academic, contemporary sound composition on $50,000 speakers or a regular guy enjoying country music in his living room. Allowing that may have been a problem in the past, because it would have meant that only (severely over-engineered) high-end gear would have passed the test, but I think the world has moved on since then. Calling anything in the 80 dB region transparent just opens the door for elitists to call it a poor man's spec, sufficient for mass-media consumption. But nowadays a poor man with good advice can buy gear that could be called transparent by much higher standards than that.

But psychoacoustics have to come into it. E.g., if distortion below a certain level is inaudible, a piece of gear with distortion below that threshold is transparent as far as that parameter goes. Another piece of gear with more zeros in front of it in the spec isn't any more transparent.
Title: AES 2009 Audio Myths Workshop
Post by: googlebot on 2010-03-26 17:50:20
But psychoacoustics have to come into it. E.g., if distortion below a certain level is inaudible, a piece of gear with distortion below that threshold is transparent as far as that parameter goes. Another piece of gear with more zeros in front of it in the spec isn't any more transparent.


I haven't said that. What's your point?

Going for only 80 dB is just so close to the absolute thresholds of human hearing that it wouldn't take much effort to produce a positive ABX result for any device incapable of delivering more than that. Since the call was for a black box to determine transparency, and not a question about what's sufficient for great music-listening pleasure, I just found 80 dB to be not enough. Especially when taking into consideration how much more even budget gear is able to deliver nowadays.
Title: AES 2009 Audio Myths Workshop
Post by: Ethan Winer on 2010-03-26 20:03:51
What do sine waves have to do with real world audio signals?

Everything. Unless of course you think Fourier was wrong. Hint: He wasn't.

--Ethan
Title: AES 2009 Audio Myths Workshop
Post by: Ethan Winer on 2010-03-26 20:13:46
Ethan, you need much longer analysis windows on your spectrum plots, especially those at low frequencies. Also, for higher frequencies you might use uniform frequency scales to make looking for harmonics easier.

I'm pretty much a beginner with FFT. Several people at Gearslutz told me that Sound Forge's FFT has fatal flaws, so I have since downloaded the latest Rightmark analyzer. I did indeed notice that the skirts are much steeper when viewing those same Wave files. I just wish the Rightmark analyzer would let me play with the parameters without having to close the Wave file and start all over again each time.

I'd like to learn more about FFTs. I'm not a math guy, so I imagine I'll never fully understand all the nuances. But I'd like to try anyway. Should I start a thread asking for advice here, or in the Scientific Discussion section? If you or others can direct me to a site that has a clear explanation using English descriptions more than math, that'd be most excellent. This page loses me on the very first sentence:

http://en.wikipedia.org/wiki/Fast_Fourier_transform (http://en.wikipedia.org/wiki/Fast_Fourier_transform)

--Ethan
Title: AES 2009 Audio Myths Workshop
Post by: pdq on 2010-03-26 20:50:52
Here's the sequence from my noise sums spreadsheet, starting at 90 dB:

1  -86.990  (3.01 dB drop)
2  -85.229
3  -83.979
    .
    .
    .
19 -76.990
20 -76.778 dB  (ca. 13.22 dB drop total, but only 0.212 dB for this iteration)

There's actually a much quicker way to calculate it. If you combine the same level of noise (uncorrelated of course) 21 times then the final noise level is square root of 21 greater. The square root of 21 is 4.58, which is 13.22 dB, so the final SNR is 90 - 13.22 = 76.78.
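pdq's shortcut can be checked in a few lines of Python (a small sketch; the -90 dB starting level and 21 sources are taken from the post above):

```python
import math

def combined_noise_db(per_source_db, n_sources):
    """Equal uncorrelated noise sources sum by power,
    so the combined level rises by 10*log10(n) dB."""
    return per_source_db + 10 * math.log10(n_sources)

# 21 equal, uncorrelated -90 dB sources. pdq's shortcut: the amplitude
# grows by sqrt(21), i.e. 20*log10(sqrt(21)) = 10*log10(21) = 13.22 dB.
total = combined_noise_db(-90.0, 21)
print(round(total, 2))  # -76.78
```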
Title: AES 2009 Audio Myths Workshop
Post by: ExUser on 2010-03-26 23:40:21
Should I start a thread asking for advice here, or in the Scientific Discussion section?
Scientific Discussion would be a great place for such a thread.
Title: AES 2009 Audio Myths Workshop
Post by: aclo on 2010-03-27 01:18:04
What do sine waves have to do with real world audio signals?

Everything. Unless of course you think Fourier was wrong. Hint: He wasn't.

--Ethan


No. Fourier said that any audio signal can be analyzed as a series of sine waves. But that's an analysis, a model. It isn't the thing itself.

Yes, you can reconstruct the original waves from the series of sine waves if you have enough of them - but then what you have isn't really the sine waves any more, it's a complex wave.

It's like if you have a wooden box. The wooden box can be used to hold something, say, cookies.

The wooden box can be analyzed as a group of boards. But the separate boards aren't the box, they're a bunch of boards and they won't hold your cookies.

The analysis is not the thing itself. The deconstruction is not the thing itself. The properties of the whole are different from/greater than the properties of the parts.

Fourier was not wrong. But a lot of people think he said something that was not exactly what he said.


I'm sorry, but that is a load of nonsense (I am being literal here, not attacking you). The Fourier transform of something (e.g. an audio signal) is exactly equivalent to the signal. And your analogy is completely irrelevant.

If you wish to argue that looking at the response of a system to a sine wave is not enough, there are plenty of other places to look. The most obvious is that if the processing done to the signal isn't linear, then the output of the system for a single sine-wave input "at each frequency" isn't enough to predict its behaviour for a superposition of many of these sines (i.e. for any signal).

edit: a very "audiophiley" argument as to why Fourier transforming stuff doesn't tell you the full story just occurred to me: yes, in theory, the signal and its transform are equivalent. but in practice, a) we use FFTs, which transform over a finite window, thus missing important information, and b) we use computers, with their "well-known" numerical inaccuracies. brilliant! (a load of rubbish actually, but that is the sort of thing that is hard to argue with online, because who's going to sit down and explain this stuff in detail without sounding like an arrogant jerk? so you'll win easily like this)
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-27 02:43:04
What do sine waves have to do with real world audio signals?

Everything. Unless of course you think Fourier was wrong. Hint: He wasn't.

--Ethan


No. Fourier said that any audio signal can be analyzed as a series of sine waves. But that's an analysis, a model. It isn't the thing itself.


What you don't seem to understand, John, is that the Fourier transform has a perfect inverse. The inverse transforms what you called an analysis back into the original wave.

The Fourier transform loses no information no matter which direction the data goes. What you call an analysis just transforms the signal into an alternative form of the very same data, 100.0000...% of it.

Quote
Yes, you can reconstruct the original waves from the series of sine waves if you have enough of them - but then what you have isn't really the sine waves any more, it's a complex wave.


Wrong again. The inverse transform reconstructs the identical original wave. 

This basic principle can be used to create digital equalizers. The input signal is transformed into a table of amplitudes and phase angles. You can adjust the amplitudes and phase angles as you wish. Adjusting an amplitude is like adjusting a slider on a virtual graphic equalizer, only the number of virtual sliders can be exceedingly large, so very fine adjustments are easy to make. The table of revised amplitudes and phase angles is then transformed back into the original signal with the desired changes. Phase can be adjusted in a similar manner. I have several of these tools, and I've been using them for years. They work very nicely, thank you! This is a common feature of the better DAW software.

If you perform forward and inverse transformations without any changes to the transformed data, the resulting wave looks and sounds exactly like the original.

Therefore the coefficients of the sine and cosine waves that the Fourier transform creates are exact and representative. Every real-world wave can be transformed into an exactly representative collection of sine and cosine waves.

Furthermore, the Fourier transform is not a unique kind of thing, and audio data can be transformed by other means into other alternative forms. And back.

Title: AES 2009 Audio Myths Workshop
Post by: botface on 2010-03-27 10:03:26
But psychoacoustics have to come into it. E.g., if distortion below a certain level is inaudible, a piece of gear with distortion below that threshold is transparent as far as that parameter goes. Another piece of gear with more zeros in front of it in the spec isn't any more transparent.


I haven't said that. What's your point?

Going for only 80 dB is so close to the absolute thresholds of human hearing that it wouldn't take much effort to produce a positive ABX result for a device incapable of delivering more than that. Since the call was for a black box to determine transparency, and not a question about what's sufficient for great music listening pleasure, I just found 80 dB not enough. Especially when taking into consideration how much more even budget gear is able to deliver nowadays.

Sorry if I misunderstood. I thought you were saying that we should simply go for state of the art as the baseline for defining acceptable performance.

Also, I interpreted your post as an attempt to get the discussion back onto something useful and was happy to respond, but since then there have been another couple of pages of unrelated discussion, so maybe this isn't the place to have one.
Title: AES 2009 Audio Myths Workshop
Post by: aclo on 2010-03-27 12:04:41
When you take a wooden box apart into a pile of boards it's a pile of boards. When you reassemble it into a box it's a box again. But a box is not a pile of boards and a pile of boards is not a box. The whole is greater than the sum of its parts.

You can take a Bach Fugue and analyze it into a progression of chords. You can analyze those chords into a bunch of notes. Yet the fugue is something more than the notes and the pile of individual notes is not music.

Yes, you can take things apart and put them back together. That proves - that you can take things apart and put them back together.


So hang on a second. Do you really believe that knowing the effect of a linear system on each single frequency mode is not enough to tell me how it will affect a sum of them? Or are you just making the point that when I am listening to music, the feelings (or whatever) it evokes are impossible to predict from looking at its spectrum? (which is an obvious statement, the relevance of which to studying the effect of a component on a signal I, however, have difficulty seeing).

OK, to clarify things: In "Fourier was not wrong. But a lot of people think he said something that was not exactly what he said.", could you please explain a) what a lot of people think he said, b) what he actually "said" on this matter?

Actually I am guessing you won't respond again (maybe I should get into more online arguments and insult people more to deserve a reply), but I am curious nonetheless.
Title: AES 2009 Audio Myths Workshop
Post by: googlebot on 2010-03-27 12:25:26
When you take a wooden box apart into a pile of boards it's a pile of boards. When you reassemble it into a box it's a box again. But a box is not a pile of boards and a pile of boards is not a box. The whole is greater than the sum of its parts.

You can take a Bach Fugue and analyze it into a progression of chords. You can analyze those chords into a bunch of notes. Yet the fugue is something more than the notes and the pile of individual notes is not music.

Yes, you can take things apart and put them back together. That proves - that you can take things apart and put them back together.


You are lacking some very basic understanding about the nature of observation. Fourier transformation is a fundamental concept of modern science. It scales to arbitrary precision, and you are limited only by the precision of your measurement gear. Fourier transformation is used to describe systems orders of magnitude smaller and light-years beyond any possible human threshold of perception.

The same applies to digital audio. Of course, band-limiting for the sake of digitization is a lossy process, but it can be scaled up to conserve arbitrary amounts of information. And in digital audio you just have a finite sequence of elements that are put back together to reconstruct a signal at playback time. Not a single individual has ever demonstrated, under controlled and repeatable conditions, that this sum of discrete elements (e.g. at 44.1 kHz) was insufficient to be indiscernible from the full signal passed over a straight wire. We have determined the resolution needed to discern a finite set of quantizations from the original, and that's what is used today. You can rant as much as you want with your limited understanding (or even if you understood it in full), but your only way to make a point would be to show that a given resolution (such as 16 bit at 44.1 kHz) is inadequate using real-world converters. A simple way to provide such a proof would be a double-blind test vs. a straight wire. Why don't you come back when you can provide that?
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-27 14:16:26
wrong again.  *sigh*.  A Fourier Transform in a bandwidth limited system is an approximation.


No, the Fourier Transform is mathematically exact. Furthermore, bandwidth limits are not part of the definition of the Fourier transform. The definition of the Fourier transform is an integral with infinite limits that also includes frequencies as close to zero as you can imagine.

There is a limitation on the Fourier transform that can be stated many ways. Here I will state it as being that the signal has to be continuous. That means that at any point in time the signal has only one value, but it does have that one value. IOW, at no time is the signal undefined, nor is there any point in time where the signal has, say, two values. In the real world those are pretty easy restrictions to honor.

The restrictions of the Fourier Transform do not preclude the consideration of any signal that has a finite (but very, very, very large or very, very small) bandwidth. The Fourier transform applies not only to audio but also to low-frequency signals like seismic events, and to very high-frequency signals such as TV, radar, light, X-rays, and even very high-energy particle beams. 


Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-27 16:08:04
Fourier transformation is a fundamental concept of modern science. It scales to arbitrary precision and you are limited only by the precision of your measurement gear. Fourier transformation is used to describe systems magnitudes smaller and lightyears below any possible human threshold of perception.


Perhaps we should help some people out by pointing out that there are two Fourier Transforms. This fact detracts *nothing* from the validity of anything that people from the "pro sine wave" viewpoint have been saying in this thread. But it might help some people understand how they came to have the misapprehensions that they have.

One Fourier Transform in common use is the mathematical Fourier Transform, which is, as its name suggests, purely mathematical. Being mathematical, it is truly exact. The application of it that we are talking about is the transformation of a time-varying signal into a collection of sine and cosine waves, or if you will, a collection of sine waves and phase angles. From the Fourier Transform we can unambiguously say that any real-world audio signal can be exactly equated to a collection of sine waves and phase angles, and vice versa.

The other Fourier Transform in common use is the Fast Fourier Transform (FFT), a computational method that can be applied to any digital signal. Just like its mathematical namesake, it can be used to convert a digitized signal into a set of sine/cosine pairs, or if you will, an exactly equivalent collection of sine waves and phase angles. Being numerical, it is not exact, but its precision can be made as good as you want by performing it with long data words. Its frequency resolution can be made as fine or coarse as you like by processing data in larger or smaller chunks. This is the FFT we commonly use for testing and measurement, as well as for spectral adjustments in DAW software.
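The frequency-resolution point can be made concrete: an FFT's bin spacing is simply the sample rate divided by the chunk length (a small sketch; the 44.1 kHz rate and the chunk sizes are arbitrary examples):

```python
fs = 44100  # sample rate (Hz), chosen only for illustration
resolutions = {n: fs / n for n in (1024, 4096, 65536)}
for n, res in resolutions.items():
    print(f"{n}-point FFT: one bin every {res:.2f} Hz")
# Bigger chunks give finer frequency resolution,
# at the cost of each chunk covering a longer stretch of time.
```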
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-27 17:24:46
they might have something to contribute in the area of understanding euphony.


This might be a slight misstatement - I think that what people are seeking is how to obtain euphony. Actual euphony is generally one of those things that you understand pretty quickly and intuitively when you hear it.

There *is* a teachable skill called listening for good sound quality. One widely-respected teacher of this skill can be found here: Learn how to listen for good sound quality (http://www.dlcdesignaudio.com/about-us.php) His students have probably designed more good-sounding audio systems than just about anybody else in the world.

Disclaimer: This person is both a close friend and a client.

When it comes to recordings, the recipe for good sound is pretty well known, and not much of a mystery or a secret. You simply record good music played by good musicians who are managed by good managers in a good venue with good equipment, and with reasonable smarts, education, and experience your recordings will sound good. Leave any of those out, and it will probably not be so good. The good education part is probably the one element of the bunch that can be optional. If you have to make a choice between OJT and formal education, take the OJT.
Title: AES 2009 Audio Myths Workshop
Post by: ExUser on 2010-03-27 17:41:25
I just went through and recycled another half-dozen posts or more. There are still some border-line posts here. We have a choice: this can be a thread full of meaningless emotional language or this can be a thread full of meaningful technical discussion. Mixing the two is counter-productive.
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-27 17:52:16
Here's the sequence from my noise sums spreadsheet, starting at 90 dB:

1  -86.990  (3.01 dB drop)
2  -85.229
3  -83.979
    .
    .
    .
19 -76.990
20 -76.778 dB  (ca. 13.22 dB drop total, but only 0.212 dB for this iteration)

There's actually a much quicker way to calculate it. If you combine the same level of noise (uncorrelated of course) 21 times then the final noise level is square root of 21 greater. The square root of 21 is 4.58, which is 13.22 dB, so the final SNR is 90 - 13.22 = 76.78.


Thanks.

Audio calculations are IME usually quite simple but, at least in my hands, frustratingly error-prone. I favor simple algorithms whose results I can look at while they operate over a goodly range of values.

This approach seems to work well in a context like this, where it is important to make sense on the intuitive level.
Title: AES 2009 Audio Myths Workshop
Post by: 2Bdecided on 2010-03-28 21:11:31
Fourier was of course correct  - anyone arguing against this marks themselves out as a fool.

But it would be equally foolish to claim that a system's response to one or two isolated sine waves tells us everything we need to know about that system because of Fourier.

The very things we're testing for (i.e. non-linear distortion) break the superposition principle - and IIRC Fourier kind of needs superposition to work!

I'm sure someone will come along and say that in the equipment they're talking about, non-linear distortion is so low that it doesn't matter. I'm sure you're right - but it's a circular argument - if you know it's that low, why are you doing the tests in the first place?

The conclusion of a circular argument may be correct, but it's not a very convincing way to get there.

Cheers,
David.
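David's point about nonlinearity breaking superposition can be demonstrated numerically (a toy sketch; the cubic coefficient and test-tone frequencies are arbitrary illustrative choices):

```python
import numpy as np

def system(x, k=0.01):
    """Toy system: linear pass-through plus a small cubic nonlinearity."""
    return x + k * x ** 3

t = np.linspace(0.0, 1.0, 1000, endpoint=False)
a = np.sin(2 * np.pi * 5 * t)   # two arbitrary test tones
b = np.sin(2 * np.pi * 7 * t)

combined = system(a + b)          # system driven by the sum
separate = system(a) + system(b)  # sum of the individual responses

# For a truly linear system these would be identical. The cubic term
# creates intermodulation products, so single-tone responses no longer
# predict the response to a superposition.
print(np.max(np.abs(combined - separate)))
```

The printed difference is nonzero precisely because of the nonlinear term; set `k=0` and it vanishes, which is superposition holding for the linear case.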
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-28 22:27:58
Fourier was of course correct  - anyone arguing against this marks themselves out as a fool.


Unfortunately, there is a lot of that going around here lately. L-(

Quote
But it would be equally foolish to claim that a system's response to one or two isolated sine waves tells us everything we need to know about that system because of Fourier.


That is, of course exactly right.

Quote
The very things we're testing for (i.e. non-linear distortion) break the superposition principle - and IIRC Fourier kind of needs superposition to work!


In the real world, every theory fails to work perfectly. Does this mean that we abandon each and every theory?

While *any* nonlinearity keeps superposition from working perfectly, it seems fair to ask how much nonlinearity it takes to keep superposition from being a model whose predictions agree with the real world well enough for the model to remain useful.

Quote
I'm sure someone will come along and say that in the equipment they're talking about, non-linear distortion is so low that it doesn't matter.


Been there, done that. But that needs to be based on actual knowledge of how linear the equipment actually is. That should be based on more than measurements at just one or two isolated frequencies. In general, we know that nonlinear distortion tends to be highest at the extremes of the audible range.

Quote
I'm sure you're right - but it's a circular argument - if you know it's that low, why are you doing the tests in the first place?


The *it* is nonlinear distortion, but nonlinear distortion is not what we are measuring at the moment. So there is no circularity.

In the real world, one makes a few nonlinear distortion measurements. For example, one at 20 Hz, one at 1 kHz, and one at 20 kHz. Or in modern times one runs an IM sweep or a complex multitone. If this is fairly good equipment, then we find that the nonlinear distortion is less than 0.02% at all 3 frequencies for any of the tests. It might be less than 0.0005%. 

What do we say about how superposition works in a system where all nonlinear distortion appears to be less than 0.02%, or even 0.0005%, at all frequencies from 20 Hz to 20 kHz?


Title: AES 2009 Audio Myths Workshop
Post by: 2Bdecided on 2010-03-29 10:03:56
What do we say about how superposition works in a system where all nonlinear distortion appears to be less than 0.02%, or even 0.0005%, at all frequencies from 20 Hz to 20 kHz?
Assuming no weird ultrasonic stuff, we say that's just fine.

You're inching towards defining these measurements and thresholds that define transparency - but if you're going to drop one in once every 16 pages, I'm not sure I've got the patience to hang around until the end!

Is anyone going to be brave and list them properly? For each one we need: stimulus, analysis, thresholds.

Cheers,
David.
Title: AES 2009 Audio Myths Workshop
Post by: John Eppstein on 2010-03-29 12:30:40
In the real world, one makes a few nonlinear distortion measurements. For example, one at 20 Hz, one at 1 kHz, and one at 20 kHz. Or in modern times one runs an IM sweep or a complex multitone. If this is fairly good equipment, then we find that the nonlinear distortion is less than 0.02% at all 3 frequencies for any of the tests. It might be less than 0.0005%. 

What do we say about how superposition works in a system where all nonlinear distortion appears to be less than 0.02%, or even 0.0005%, at all frequencies from 20 Hz to 20 kHz?

3 frequencies aren't enough for rigorous testing, although they're fine if all you're doing is writing sales literature. If you're really trying to find out what the gear is like, you should run tests at octave intervals or better.
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-29 12:30:48
What do we say about how superposition works in a system where all nonlinear distortion appears to be less than 0.02%, or even 0.0005%, at all frequencies from 20 Hz to 20 kHz?
Assuming no weird ultrasonic stuff, we say that's just fine.

You're inching towards defining these measurements and thresholds that define transparency - but if you're going to drop one in once every 16 pages, I'm not sure I've got the patience to hang around until the end!


I gave one of the best answers you'll get this week a few pages back: the RMAA test suite, with all noise and distortion >80 dB down and response within +/- <0.1 dB. It is probably still overkill in many cases, but it is really pretty good.

Here's your challenge: find some piece of regular audio gear, show that it passes the RMAA test suite, and show that it fails an ABX test. No tricks, no made-up pathological signals (use regular commercial recordings), no hypothetical situations, no broken equipment, no bad gain staging, and no equipment abuse. Something that would actually happen in a regular audio production or playback situation.
Title: AES 2009 Audio Myths Workshop
Post by: googlebot on 2010-03-29 13:43:42
and all noise and distortion >80 dB down,


Something that would actually happen in a regular audio production or playback situation.


A 12-track production of a slow, contemporary classical piece. Each microphone is connected via a cheap ADC with an uncorrelated noise floor of -81 dB. The resulting mix would have a -45 dB noise floor, clearly audible during each silent moment, and I would be ashamed to deliver something like that to a client.

That's, of course, only the case if you don't use the "passing of an RMAA test" as a submarine argument. If by "passing" you meant only excellent scores, for example, that would imply any noise being down much further than 80 dB.

That's why I have repeatedly underlined: don't go too cheap when defining transparency requirements for studio gear. It is not necessary if you want to make a point against fetish-level requirements. Nowadays you get complete converter cards with an SNR of 107 dB for $50 and less.
Title: AES 2009 Audio Myths Workshop
Post by: googlebot on 2010-03-29 14:52:12
Can't edit anymore, but I think it should be -48 dB resulting noise floor (with no difference in meaning).
Title: AES 2009 Audio Myths Workshop
Post by: Notat on 2010-03-29 14:54:54
Googlebot, you've made a math error. Every doubling of inputs increases the noise floor 3 dB. The noise floor in your scenario would be around -70 dB. Also realize that ambient noise in the room will probably be -60 dB and that's correlated noise (6 dB per doubling) giving you a net -40 dB acoustic noise floor.
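Notat's two corrections can be checked directly (a quick sketch; the -81 dB per-track floor and -60 dB ambient level are taken from the posts above):

```python
import math

n_tracks = 12
per_track_floor_db = -81.0  # uncorrelated converter noise per input

# Uncorrelated noise sums by power: the floor rises by 10*log10(n) dB,
# not by 3 dB per added input
uncorrelated = per_track_floor_db + 10 * math.log10(n_tracks)

# Correlated sources (e.g. the same room ambience in every mic) sum by
# amplitude: 20*log10(n) dB, i.e. 6 dB per doubling
correlated_ambient = -60.0 + 20 * math.log10(n_tracks)

print(round(uncorrelated, 1))        # about -70 dB, as Notat says
print(round(correlated_ambient, 1))  # about -38 dB, near the quoted -40 dB
```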
Title: AES 2009 Audio Myths Workshop
Post by: googlebot on 2010-03-29 14:59:17
I calculated the input count as ((((((((((1+2)+3)+4)+5)+6)+7)+8)+9)+10)+11)+12 and thought that would mean you add 3 dB eleven times. How would it be done right?

Edit: OK, got it myself. ((((((((((1+2)+3)+4)+5)+6)+7)+8)+9)+10)+11)+12 doesn't sum equal amounts of noise in all but the first step. If one cascades the summing it is even easier to see why it was flawed. So you are right.

Also realize that ambient noise in the room will probably be -60 dB and that's correlated noise (6 dB per doubling) giving you a net -40 dB acoustic noise floor.


That should only be true for concurrent recording, shouldn't it?
Title: AES 2009 Audio Myths Workshop
Post by: 2Bdecided on 2010-03-29 17:13:15
Here's your challenge: find some piece of regular audio gear, show that it passes the RMAA test suite, and show that it fails an ABX test. No tricks, no made-up pathological signals (use regular commercial recordings), no hypothetical situations, no broken equipment, no bad gain staging, and no equipment abuse. Something that would actually happen in a regular audio production or playback situation.
Surely RMAA would identify broken equipment anyway?

What about commercial recordings of pathological signals?

If not, why not?

It's a relevant point: HA exists because Dibrom was so disappointed with mp3's treatment of his favourite music - signals that many people wouldn't count as "music", and which codec developers had never tried.


Let's face it - we don't need RMAA following your post - with enough restrictions, and enough opportunities to cry "foul, that's not normal use", almost any piece of half-decent equipment can't be ABXed.


However, it's a fair start. But I still haven't found a list of the stimuli used in the RMAA test - am I going to have to run the thing, listen to the signals, and write them down?

Cheers,
David.
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-29 17:21:14
and all noise and distortion >80 dB down,


Something that would actually happen in a regular audio production or playback situation.


A 12-track production of a slow, contemporary classical piece. Each microphone is connected via a cheap ADC with an uncorrelated noise floor of -81 dB. The resulting mix would have a -45 dB noise floor, clearly audible during each silent moment, and I would be ashamed to deliver something like that to a client.


That's interesting, because if each input to a real-world 12-channel mixer being used to record a live performance had even 80 dB SNR, it would be an unusually wonderful day. 

The mixer electronics itself may have 90-100 dB SNR, but once you hook up the mics, their noise (usually equivalent to 10-20 dB SPL) and the room noise they pick up (usually 25-35 dB SPL) usually push the noise floor up quite a bit. 

I do 2-channel live recordings all the time with no mixer at all, and the recordings generally have about 70 dB SNR, +/- 5 dB. So do just about everybody else's. So that would be the SNR for each channel at the summing junction of a hypothetical mixer.

When I do multi-micing of live performances, each channel follows a similar pattern.

What the above analysis of yours did not consider is that each channel fader in the mixer would normally be set for some significant amount of attenuation, which attenuates both the signal and the noise. 

This has to be true because the clipping point for each input channel on a mixer is usually about the same as the clipping point for the entire mix. The only way you can sum 12 things and have the same maximum level as any one of them is to attenuate all of them.


Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-29 17:30:24
Here's your challenge: find some piece of regular audio gear, show that it passes the RMAA test suite, and show that it fails an ABX test. No tricks, no made-up pathological signals (use regular commercial recordings), no hypothetical situations, no broken equipment, no bad gain staging, and no equipment abuse. Something that would actually happen in a regular audio production or playback situation.
Surely RMAA would identify broken equipment anyway?

What about commercial recordings of pathological signals?

If not, why not?


Depends what you mean.

The original Telarc 1812 was an example in its day of a commercial recording of a pathological signal that was created by more-or-less natural means. I would consider it a reasonable recording to use in listening tests of equipment, and in fact I've done just that.

I think that the sort of pathological recordings that I've seen used to break MP3 coders are also legitimate for testing general kinds of audio gear.

Quote
It's a relevant point: HA exists because Dibrom was so disappointed with mp3's treatment of his favourite music - signals that many people wouldn't count as "music", and which codec developers had never tried.


I'm good with that sort of thing.


Quote
Let's face it - we don't need RMAA following your post - with enough restrictions, and enough opportunities to cry "foul, that's not normal use", almost any piece of half-decent equipment can't be ABXed.


I think you're taking what I said way outside of its intended meaning.

Quote
However, it's a fair start. But I still haven't found a list of the stimuli used in the RMAA test - am I going to have to run the thing, listen to the signals, and write them down?


That's pretty much what I've had to do, except the signals were too complex for me to identify by ear, so I used various standard audio analysis tools.

You could write the author - I believe he's a good guy.

I find it amusing that so many people complain so hard about doing things that I have done as a matter of course!

Title: AES 2009 Audio Myths Workshop
Post by: Woodinville on 2010-03-29 17:41:22
Noise growth.

Each DOUBLING of inputs added AT THE SAME LEVEL adds 3dB to the noise level, for decorrelated (i.e. thermal, or the like) noise.

2 inputs -3dB to SNR
4 inputs -6
8 inputs -9
16 inputs -12

This is true if both INPUTS and NOISE are decorrelated.

However, if we have the same signal in all the mikes, now the signals add in amplitude vs. noise in power.  Then, you get exactly the opposite growth, instead of -3dB you get +3dB, and so on.

This case is true, for example, in things like array mikes. It's not usually a factor in mixing applications.
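The two growth laws Woodinville describes can be tabulated (a short sketch; levels are in dB relative to a single input):

```python
import math

# Level growth relative to one input, for the two cases above
rows = []
for doublings in range(1, 5):
    n = 2 ** doublings
    noise_gain = 10 * math.log10(n)   # uncorrelated noise: powers add (+3 dB/doubling)
    signal_gain = 20 * math.log10(n)  # identical correlated signals: amplitudes add (+6 dB/doubling)
    rows.append((n, round(noise_gain, 1), round(signal_gain, 1)))
    print(f"{n:2d} inputs: noise +{noise_gain:.1f} dB, correlated signal +{signal_gain:.1f} dB")
```

In the array-mic case the signal grows 6 dB per doubling while the noise grows only 3 dB, which is why such arrays gain about 3 dB of SNR per doubling of mics.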
Title: AES 2009 Audio Myths Workshop
Post by: 2Bdecided on 2010-03-29 17:52:06
I find it amusing that so many people complain so hard about doing things that I have done as a matter of course!
I'm allowed to complain!

Anyway, this thread started (many pages ago  ) with a discussion about these characteristics that define an audio system. It moved onto measurements, but no one has a list.

Surely it knocks people's confidence in the whole thing when various people imply there are measurements which pretty much guarantee transparency, but in 16 pages no one can actually come up with a list?!

Cheers,
David.
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-29 17:56:08
I find it amusing that so many people complain so hard about doing things that I have done as a matter of course!
I'm allowed to complain!

Anyway, this thread started (many pages ago  ) with a discussion about these characteristics that define an audio system. It moved onto measurements, but no one has a list.

Surely it knocks people's confidence in the whole thing when various people imply there are measurements which pretty much guarantee transparency, but in 16 pages no one can actually come up with a list?!



You've had a list as long as you've had RMAA to run!

I also referenced you to an archived copy of my PCAVTech web site, which had that list in every test report I posted.
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-29 18:17:25
3 frequencies aren't enough for rigorous testing,


I agree that 3 frequencies aren't enough if you are starting with a blank sheet of paper.

Thing is, we've been testing audio gear pretty intensively for many decades - 5, more or less.  The sheet of paper isn't blank for many kinds of audio gear. It is very rare to find, say, a power amplifier whose frequency response can't be accurately characterized with just 3 measurements.  Ditto for nonlinear distortion. If you are a little paranoid - run a sweep or a multitone. No biggie!  At PCAVTech I ran a multitone, and RMAA runs a sweep.
Title: AES 2009 Audio Myths Workshop
Post by: Ethan Winer on 2010-03-29 18:43:58
Each DOUBLING of inputs added AT THE SAME LEVEL adds 3dB to the noise level

Yes, though few multi-track mixes have all tracks playing at full volume. Much more typical is to record each instrument at full volume for best s/n; then, when mixing, many of those tracks will be lowered. When I used to mix full time professionally in the 1970s and 80s, often the bass and kick drum were the loudest instruments! A little treble roll-off on playback would reduce the tape hiss nicely. Then many of the other instruments would be mixed in at a lower level, so it's not like 16 or 24 tracks are all adding noise at full scale.

--Ethan
Title: AES 2009 Audio Myths Workshop
Post by: Woodinville on 2010-03-29 23:23:23
Noise growth.

Each DOUBLING of inputs added AT THE SAME LEVEL adds 3dB to the noise level, for decorrelated (i.e. thermal, or the like) noise.

2 inputs -3dB to SNR
4 inputs -6
8 inputs -9
16 inputs -12

This is true if both INPUTS and NOISE are decorrelated.

Whoa there, that's wrong, chief.

If both input and noise are decorrelated, the overall SNR will not change, although the noise floor and signal floor will both rise.

My comment above is true if and only if you have one signal and 'n' channels of silence.
Quote
However, if we have the same signal in all the mikes, now the signals add in amplitude vs. noise in power.  Then, you get exactly the opposite growth, instead of -3dB you get +3dB, and so on.

This case is true, for example, in things like array mikes. It's not usually a factor in mixing applications.


My coffee must have sunk in before I wrote the second part. It's right.
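The corrected case - one identical signal in every mic, each mic contributing its own independent noise - can be sketched the same way (my own numpy illustration, assuming unit-variance noise per channel): the coherent signal grows 6 dB per doubling while the noise grows only 3 dB, for a net SNR gain of about 3 dB per doubling.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
signal = np.sin(2 * np.pi * 0.01 * np.arange(n))  # identical in every mic

def rms_db(x):
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

def snr_db(n_mics):
    # Coherent signal adds in amplitude; independent noise adds in power.
    sig_sum = n_mics * signal
    noise_sum = sum(rng.standard_normal(n) for _ in range(n_mics))
    return rms_db(sig_sum) - rms_db(noise_sum)

base = snr_db(1)
changes = [snr_db(m) - base for m in (2, 4, 8)]
for m, c in zip((2, 4, 8), changes):
    print(f"{m} mics: SNR change {c:+.1f} dB")   # ~ +3 dB per doubling
```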
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-30 00:56:37


In much the way that the argument "if God had wanted man to fly, he'd have given him wings"...

If God had got the additive quality of recorded noise right, we wouldn't have had to invent GATES.


It is VERY COMMON in multitrack mixing to employ muting and gating on source tracks, due to the objectionable buildup of noise.  (not just skroinks and breath noise, and chair squeaks, either...but noise) 


...remember, Ethan...compressors are in rampant use, and they raise the noise floor!
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-30 01:47:10
In much the way that the argument "if God had wanted man to fly, he'd have given him wings"...

If God had got the additive quality of recorded noise right, we wouldn't have had to invent GATES.


I don't think that anybody is seriously disputing that.

The question is: "Where does the noise come from?"

The answer for equipment of even modest quality is: "The room the recording is made in".

Quote
It is VERY COMMON in multitrack mixing to employ muting and gating on source tracks, due to the objectionable buildup of noise.  (not just skroinks and breath noise, and chair squeaks, either...but noise)


So Dwoz you're saying that in all of your years of studio experience, you have never noticed that the most common source of background noise in your recordings was the room?

<Mental image of Dwoz spending the big bucks for a Millenia Multimedia Mic Preamp and an Apogee Rosetta ADC to reduce background noise!>

Title: AES 2009 Audio Myths Workshop
Post by: Iain on 2010-03-30 03:00:44
The answer for equipment of even modest quality is: "The room the recording is made in".


I think it is important to talk about the frequency ranges when we talk about noise. Room noise, in my experience, is predominantly a mid-low frequency phenomenon, whereas electronic noise, tends to be 'white', and is most problematic at high frequencies. So a high level of room noise may not mask electronic hiss.
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-30 03:17:01
The answer for equipment of even modest quality is: "The room the recording is made in".


I think it is important to talk about the frequency ranges when we talk about noise. Room noise, in my experience, is predominantly a mid-low frequency phenomenon, whereas electronic noise, tends to be 'white', and is most problematic at high frequencies. So a high level of room noise may not mask electronic hiss.


Actually, the ear is most sensitive around 4 kHz, which is pretty close to midrange, and therefore neither a high nor a low frequency.  The spectral content of electronic noise is difficult to generalize about, because it has different shapes depending on the types of components and even specific examples.  In short, your generalities are highly hedged and not generally relevant to any particular situation. If HVAC noise due to turbulent air flow is audible, it often is actually pretty hissy.

If you are actually doing real world recording, about the only time you hear electronic noise from the mic preamp is when the mic is disconnected or, in the case of a condenser mic, the phantom power is turned off. Simply disconnecting a mic is not a reliable indicator of the mic preamp's noise because the input termination is a strong determining factor. Make the mic operational, and in almost every circumstance, "room tone" noise will dominate. The rest of the recording system will generally be free of audible noise unless you turn some gain control way up.

One of the mics I use quite frequently is the Rode NT4, which is alleged to be one of the quietest mics around. Since it is a condenser mic, its output is quite high, and as usual, room tone dominates its background noise in actual use.
Title: AES 2009 Audio Myths Workshop
Post by: Notat on 2010-03-30 03:30:15
Indeed, unless you are a perceptual coding expert, it is dangerous to assume that a louder sound will mask a quieter one. The original question here is whether performance of modern digital audio electronics can be a limiting factor in recording quality. Because in some recordings, room noise is considered to be part of the program material, and because analog electronics with >100 dB S/N exist, the answer to that question has to be "yes".
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-30 03:39:33
Indeed, unless you are a perceptual coding expert, it is dangerous to assume that a louder sound will mask a quieter one. The original question here is whether performance of modern digital audio electronics can be a limiting factor in recording quality. Because in some recordings, room noise is considered to be part of the program material,


Looks to me like a claim that in every case room noise is 90 or more dB down, and/or has a spectral balance that is violently different from that of electronics.  Just 'taint so!

Quote
and because analog electronics with >100 dB S/N exist, the answer to that question has to be "yes".


Hmm, so common 24 bit digital interfaces with => 110 dB SNR have been excluded from the discussion?

Seems strange, given that even an alleged bottom feeder like me has several of them.

I'm sticking to my story - even with legacy 16 bit converters, I'm getting very clean recordings of the room tone.

Feel free to post your own recordings of exceptions.
Title: AES 2009 Audio Myths Workshop
Post by: Iain on 2010-03-30 04:32:29
Because in some recordings, room noise is considered to be part of the program material, and because analog electronics with >100 dB S/N exist, the answer to that question has to be "yes".


That is an important point. Room 'noise' may be signal and therefore should not be factored into the debate. For example, if I am recording some office ambience for a TV show/movie, the room's 'noise' is the signal.

I'm not making the call that modern audio electronics are/are not good enough, just saying that you can't use room noise an easy out.
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-30 12:08:22
Because in some recordings, room noise is considered to be part of the program material, and because analog electronics with >100 dB S/N exist, the answer to that question has to be "yes".


That is an important point. Room 'noise' may be signal and therefore should not be factored into the debate. For example, if I am recording some office ambience for a TV show/movie, the room's 'noise' is the signal.

I'm not making the call that modern audio electronics are/are not good enough, just saying that you can't use room noise an easy out.


There is typically  25-30 dB between room noise and the electronics, and that is true even when you restrict yourself to just 16 bit electronics.

Room noise isn't an out, it is a brick wall that is right there in your face.

This isn't just theory. I got very interested in this last year and took a PC with a  ca. 110 dB SNR/DR 24 bit card on site a number of times. I had several opportunities to take data in very quiet rooms.
Title: AES 2009 Audio Myths Workshop
Post by: malice on 2010-03-30 12:33:07
Because in some recordings, room noise is considered to be part of the program material, and because analog electronics with >100 dB S/N exist, the answer to that question has to be "yes".


That is an important point. Room 'noise' may be signal and therefore should not be factored into the debate. For example, if I am recording some office ambience for a TV show/movie, the room's 'noise' is the signal.

I'm not making the call that modern audio electronics are/are not good enough, just saying that you can't use room noise an easy out.


There is typically  25-30 dB between room noise and the electronics, and that is true even when you restrict yourself to just 16 bit electronics.

Room noise isn't an out, it is a brick wall that is right there in your face.

This isn't just theory. I got very interested in this last year and took a PC with a  ca. 110 dB SNR/DR 24 bit card on site a number of times. I had several opportunities to take data in very quiet rooms.



A noise level of 20 dB(A) is not uncommon in recording rooms. With the 24-bit format, it's more about safety and headroom: you can peak at -10 to stay on the safe side, and that is the main reason this word length makes sense for recording music.

Now on the delivery side, the most dynamic orchestra spans what ... around 60 dB, so yes, 16 bits makes sense.

Few professionals are arguing that 16 bits is not adequate as a delivery format. So far, the 24-bit/96 kHz attempts went down the toilet.

malice
Title: AES 2009 Audio Myths Workshop
Post by: ExUser on 2010-03-30 12:39:24
Few professionals are arguing that 16 bits is not adequate as a delivery format. So far, the 24-bit/96 kHz attempts went down the toilet.
There have been successful ABXes of mixed/mastered 16-bit vs. 24-bit on these very forums on moderately dynamic orchestral content. While I'd tend to agree with you that 16-bit audio is adequate for delivery, it is not sonically transparent for all listeners in some cases.
Title: AES 2009 Audio Myths Workshop
Post by: malice on 2010-03-30 12:40:16
And by the way, the noise floor in analog is just not the same as in digital.

The tape hiss is far more acceptable to our ears than quantification noise.

That also explains the need to stay on the safe side. 16 bits is the minimum workable format in digital. Analog doesn't need Dolby SR to be workable.

malice
Title: AES 2009 Audio Myths Workshop
Post by: malice on 2010-03-30 12:46:42
Few professionals are arguing that 16 bits is not adequate as a delivery format. So far, the 24-bit/96 kHz attempts went down the toilet.
There have been successful ABXes of mixed/mastered 16-bit vs. 24-bit on these very forums on moderately dynamic orchestral content. While I'd tend to agree with you that 16-bit audio is adequate for delivery, it is not sonically transparent for all listeners in some cases.


I agree with this but:

1) I wouldn't replace my classical CD collection for that reason alone, so the reason it was not a success was commercial: not good enough to justify the expense

2) Pop/rock/dance/rap etc.: 16 bits might sometimes already be too much. (big tongue-in-cheek comment here )


malice
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-30 12:59:48
And by the way, the noise floor in analog is just not the same as in digital.

The tape hiss is far more acceptable to our ears than quantification noise.


Wrong again!

The fact of the matter is that tape hiss is defined by classic and well-understood physical processes. Besides being about 40 dB worse than the best of modern digital (!!!!), it also is simply what it is, and there is very little left to be done about it. We can't massage tape hiss so that its spectral contents fall where the ear is least sensitive. Everybody with a brain gave up developing analog tape technology about 30 years ago because it was a technological black hole, just like vinyl.

In contrast, so-called noise shaping of quantization (note the use of the proper word) errors can be pretty much whatever we want it to be.

First off, digital quantization noise can be as small as we want it to be. The current limits to the noise floor of ADCs are set in the analog domain. The last time there was a significant improvement in ADC performance, a whole new generation of op amps had to be developed so that the new ADCs could even be measured.

Secondly, spectral shaping of quantization noise is itself a very mature technology that just works. While some may dispute claims that spectral shaping of the quantization noise of a 16 bit system can be made the perceptual equivalent of 120 dB SNR, tremendous improvements are possible. In fact there isn't a lot of worry about spectral shaping of the quantization noise of the best ADC chips, because their noise is so ridiculously low compared to the environment that they work in.

When you're comparing tape hiss to the noise floors of the best digital systems, it is a comparison between the tape hiss you hear, and effectively silence. Let's say we set the listening level to 100 dB; the digital noise is then 20 dB below the threshold of hearing in the quietest room on earth. There isn't even a comparison!
Title: AES 2009 Audio Myths Workshop
Post by: ExUser on 2010-03-30 13:04:14
And by the way, the noise floor in analog is just not the same as in digital.

The tape hiss is far more acceptable to our ears than quantification noise.
Wrong again!
Not just wrong, in violation of Term of Service 8. You're making (disputed) claims about audio quality without any kind of scientific evidence.
Title: AES 2009 Audio Myths Workshop
Post by: googlebot on 2010-03-30 13:10:59
There have been successful ABXes of mixed/mastered 16-bit vs. 24-bit on these very forums on moderately dynamic orchestral content. While I'd tend to agree with you that 16-bit audio is adequate for delivery, it is not sonically transparent for all listeners in some cases.


Do you have a link? I find this somewhat questionable, except when you turn up the volume so high that it would damage your ears during full-scale passages.
Title: AES 2009 Audio Myths Workshop
Post by: ExUser on 2010-03-30 13:20:02
Do you have a link? I find this somewhat questionable, except when you turn up the volume so high that it would damage your ears during full-scale passages.
http://www.hydrogenaudio.org/forums/index....st&p=610558 (http://www.hydrogenaudio.org/forums/index.php?s=&showtopic=49843&view=findpost&p=610558)

I was quite skeptical at first as well. My friend (and fellow member) Case succeeded as well, and I trust him, so I accept their claims. The thread is long, but we get it sorted out in the end.
Title: AES 2009 Audio Myths Workshop
Post by: malice on 2010-03-30 13:36:56
And by the way, the noise floor in analog is just not the same as in digital.

The tape hiss is far more acceptable to our ears than quantification noise.
Wrong again!
Not just wrong, in violation of Term of Service 8. You're making (disputed) claims about audio quality without any kind of scientific evidence.


Ok, fair enough.

May I rephrase this :

Would any of you make a record with 10bits converters ?

We can make a test if you want: let's reduce the word length of 16-bit tracks to 10 bits and see if it's workable?

malice
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-30 13:55:38
Do you have a link? I find this somewhat questionable, except when you turn up the volume so high that it would damage your ears during full-scale passages.
http://www.hydrogenaudio.org/forums/index....st&p=610558 (http://www.hydrogenaudio.org/forums/index.php?s=&showtopic=49843&view=findpost&p=610558)

I was quite skeptical at first as well. My friend (and fellow member) Case succeeded as well, and I trust him, so I accept their claims. The thread is long, but we get it sorted out in the end.


The 24/96 sample (96k24b.wav) has a maximum peak amplitude that is about 20 dB (more than 3 bits) below FS.  This shifts the comparison from a purported comparison of 16 bits versus 24 bits to an actual comparison of a little less than 13 bits to almost 21 bits.  This example totally fails the criterion of demonstrating good gain staging.

The sustained low level passage in 24.wav peaks at around 40 dB below FS...

Where things get strange is when the auditions are not done using equipment that is really clean below the 16 bit level.  I've always done my work with equipment that had SNR and DR of > 100 dB right up to my ears. Right now my daily driver for routine tests has almost 110 dB DR and SNR.
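Arny's "effective bits" arithmetic follows from the ~6.02 dB-per-bit rule (20·log10 2). A tiny sketch of my own, just to make the numbers concrete:

```python
import math

def effective_bits(nominal_bits, peak_dbfs):
    # Every 20*log10(2) ~= 6.02 dB of unused headroom costs one bit.
    return nominal_bits - (-peak_dbfs) / (20 * math.log10(2))

# Peaks ~20 dB below full scale, as reported for the 24/96 sample:
print(round(effective_bits(16, -20), 1))   # 12.7 -> "a little less than 13 bits"
print(round(effective_bits(24, -20), 1))   # 20.7 -> "almost 21 bits"
```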
Title: AES 2009 Audio Myths Workshop
Post by: googlebot on 2010-03-30 13:57:21
I was quite skeptical at first as well. My friend (and fellow member) Case succeeded as well, and I trust him, so I accept their claims. The thread is long, but we get it sorted out in the end.


While I believe that your trust is justified, I wouldn't exactly call a 5.9% score quotable. For such a hotly debated issue, with all the inflowing "high rez" bullshit that Hydrogenaudio has to fight off each year, I think it is not too much to ask to provide a ~0% result. Especially if we have the situation of a respected member being sure that he can hear a difference. As another member put it: Why not "go the extra mile"?
Title: AES 2009 Audio Myths Workshop
Post by: 2Bdecided on 2010-03-30 13:58:54
Do you have a link? I find this somewhat questionable, except when you turn up the volume so high that it would damage your ears during full-scale passages.
http://www.hydrogenaudio.org/forums/index....st&p=610558 (http://www.hydrogenaudio.org/forums/index.php?s=&showtopic=49843&view=findpost&p=610558)

I was quite skeptical at first as well. My friend (and fellow member) Case succeeded as well, and I trust him, so I accept their claims. The thread is long, but we get it sorted out in the end.
After hanging in there for a long time (it's a very interesting thread!), I missed the end. It seems Case ABXed the file provided by KikeG - that wasn't standard noise shaped dither. He was the only person to report any result (positive or negative) from that file. It's probably still valid, but it's not quite the confirmation people were looking for.

We got very close in that thread - but unless someone else can reproduce the results, or Martin can reproduce them on something other than his laptop's built-in sound card, then this isn't much of a result.

Files are here:
http://www.hydrogenaudio.org/forums/index....st&p=626692 (http://www.hydrogenaudio.org/forums/index.php?s=&showtopic=49843&view=findpost&p=626692)

Please post any results in that thread, not here.

Cheers,
David.
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-30 14:41:04
May I rephrase this :

Would any of you make a record with 10bits converters ?


Irrelevant question. Completely OT.
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-03-30 14:50:28
Files are here:
http://www.hydrogenaudio.org/forums/index....st&p=626692 (http://www.hydrogenaudio.org/forums/index.php?s=&showtopic=49843&view=findpost&p=626692)

Please post any results in that thread, not here.


The 24/96 file has such low maximum peak levels that it nets out to be a comparison of less than 13 bits to more than 20 bits.  It's a little bit more relevant than the recent proposal of 10 bits versus 16, but not much.

The 24/48 file has almost 20 seconds of music where the average RMS level is more than 35 dB down.  Given that we have ABX tools that allow the listener to select subsets of test files, this file is too easy to mistakenly slip into a comparison of less than 10 bits to more than 17.  So now we do have a comparison of 10 bits versus 16 (or more)!

Until adequate safeguards are in place in tests involving the second file, we have no relevant test possible with these files on the grounds of a real and present danger of improper gain staging. The first file cannot be helped by controlling listening test procedures. It is effectively moot. The second file could be used with appropriate safeguards that would control which subsets of it were used.

Title: AES 2009 Audio Myths Workshop
Post by: malice on 2010-03-30 14:51:02
May I rephrase this :

Would any of you make a record with 10bits converters ?


Irrelevant question. Completely OT.


ok (http://www.hydrogenaudio.org/forums/index.php?showtopic=79865)

malice
Title: AES 2009 Audio Myths Workshop
Post by: krabapple on 2010-03-30 21:57:12
Do you have a link? I find this somewhat questionable, except when you turn up the volume so high that it would damage your ears during full-scale passages.
http://www.hydrogenaudio.org/forums/index....st&p=610558 (http://www.hydrogenaudio.org/forums/index.php?s=&showtopic=49843&view=findpost&p=610558)

I was quite skeptical at first as well. My friend (and fellow member) Case succeeded as well, and I trust him, so I accept their claims. The thread is long, but we get it sorted out in the end.



Looks like some questions still remain at the end there as to proper dithering.

And I forget, were Case's results obtained exclusively with headphones?
Title: AES 2009 Audio Myths Workshop
Post by: 2Bdecided on 2010-03-31 14:52:14
Files are here:
http://www.hydrogenaudio.org/forums/index....st&p=626692 (http://www.hydrogenaudio.org/forums/index.php?s=&showtopic=49843&view=findpost&p=626692)

Please post any results in that thread, not here.


The 24/96 file has such low maximum peak levels that it nets out to be a comparison of less than 13 bits to more than 20 bits.  It's a little bit more relevant than the recent proposal of 10 bits versus 16, but not much.

The 24/48 file has almost 20 seconds of music where the average RMS level is more than 35 dB down.  Given that we have ABX tools that allow the listener to select subsets of test files, this file is too easy to mistakenly slip into a comparison of less than 10 bits to more than 17.  So now we do have a comparison of 10 bits versus 16 (or more)!

I agree with all you say here, Arny. Meyer and Moran have already proved (if any proof were needed) that 16-bits is insufficient if you let people do silly things with the gain that mean you're boosting the noise by 10-20dB above "deafening" level for full scale, and hence effectively testing 14, 13, or even 12-bit, rather than 16.

Three thoughts though:
1. It shows 16-bit is just enough for final delivery - arguably any less would be insufficient in a tiny number of cases (though with optimum noise shaping I'm not convinced there's any case where this is true)
2. 16-bits may be insufficient if you are going to pass the signal through a subsequent stage of dynamic range compression
3. Most people arguing for greater than 16-bits claim it benefits all material, not just that which "almost" needs those extra bits - so any source material (which peaks above -1dBFS) is a fair (or at least, arguably potentially useful) test.

Cheers,
David.
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-03-31 17:06:21


There seems to be a lot of confusion about the difference between bit depth and intensity. 

Then again, previous posts have SWORN that multiplication operations within digital systems are linear.  Are you suggesting that they are mistaken?

If linear, then a series of up/down intensity changes (gain changes) within the digital realm, will null.  If not linear, then it will not null.

Title: AES 2009 Audio Myths Workshop
Post by: pdq on 2010-03-31 17:45:10
There seems to be a lot of confusion about the difference between bit depth and intensity. 

Then again, previous posts have SWORN that multiplication operations within digital systems are linear.  Are you suggesting that they are mistaken?

If linear, then a series of up/down intensity changes (gain changes) within the digital realm, will null.  If not linear, then it will not null.

As long as no rounding/truncation takes place then yes, complementary changes in gain will null.

If you round or truncate then you have introduced an inaccuracy, whether or not the operation was linear.
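pdq's point can be shown directly in floating point (a minimal numpy sketch of my own): a power-of-two gain change only shifts the float exponent, so up/down nulls exactly, while an arbitrary factor forces rounding on each multiply and leaves a tiny residual.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(10_000)

# Power-of-two gain: only the float exponent changes, no rounding occurs,
# so boosting back nulls perfectly against the original.
exact = (x * 0.5) * 2.0
print(np.array_equal(exact, x))            # True

# Arbitrary gain: each multiply rounds to the nearest representable
# float64, so the round trip leaves a residual - tiny, but not a null.
approx = (x * 0.775) / 0.775
residual = np.max(np.abs(approx - x))
print(residual)                             # around 1e-16 to 1e-15
```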
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-04-01 02:41:50
There seems to be a lot of confusion about the difference between bit depth and intensity. 

Then again, previous posts have SWORN that multiplication operations within digital systems are linear.  Are you suggesting that they are mistaken?

If linear, then a series of up/down intensity changes (gain changes) within the digital realm, will null.  If not linear, then it will not null.

As long as no rounding/truncation takes place then yes, complementary changes in gain will null.

If you round or truncate then you have introduced an inaccuracy, whether or not the operation was linear.


If you introduce an inaccuracy, then the operation is not linear.

Can you, say, attenuate the level by 0.775 percent, or any other arbitrary percent, and NOT round or truncate?
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-04-01 11:06:59
There seems to be a lot of confusion about the difference between bit depth and intensity. 

Then again, previous posts have SWORN that multiplication operations within digital systems are linear.  Are you suggesting that they are mistaken?


I don't know that any such posts actually exist. Why don't you quote one?

Of course, multiplication is  not a linear operation. High school algebra, no?

I suspect that multiplication was not what was being talked about.
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-04-01 11:15:31
If you introduce an inaccuracy, then the operation is not linear.


If you wish to unrealistically demand absolute perfection, then everything in the real world is crap.

Analog, being generally more imperfect and inherently less perfectible than digital, is thus according to you even worse crap.

Quote
Can you, say, attenuate the level by .775 percent, or any other arbitrary percent, and NOT round or truncate?


In the digital domain we can attenuate in the real world with whatever level of precision we choose to pay for. For example, there are such things as arbitrary-precision numerical processors where you can do digital arithmetic with a thousand or perhaps even a million decimal places.  But according to the standards you demand, it isn't good enough because it is fatally flawed by being nonlinear. There is a little rounding in the millionth decimal place!

In the analog domain there are hard limits rooted in the laws of physics. In the digital domain I can easily attenuate a signal by 0.00001 dB. Show me how to do that in the analog domain.
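For what it's worth, the digital version of that attenuation is one line (my own sketch): convert the dB figure to a linear gain and multiply.

```python
import math

def db_to_gain(db):
    # Amplitude gain for a level change given in dB.
    return 10 ** (db / 20)

g = db_to_gain(-0.00001)     # attenuate by 0.00001 dB
print(g)                      # ~0.9999988, applied as sample * g
```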
Title: AES 2009 Audio Myths Workshop
Post by: 2Bdecided on 2010-04-01 13:30:23
In the digital domain I can easily attenuate a signal by 0.00001 dB. Show me how to do that in the analog domain.
Never mind that - show me how I can be sure that I don't do that by accident in the analogue domain!

Cheers,
David.

Title: AES 2009 Audio Myths Workshop
Post by: Notat on 2010-04-01 14:15:29
2. 16-bits may be insufficient if you are going to pass the signal through a subsequent stage of dynamic range compression

How I wish we'd get to a place where this was necessary for commercial music. I'll point out that it is something that is commonly done by the AVR when people watch movies at home.
Title: AES 2009 Audio Myths Workshop
Post by: aclo on 2010-04-02 02:19:03
You have such a nice tone, Arny.  Smooth.  Makes your partner feel warm and loved.  I bet you have to turn women down on a regular basis.

Multiplication is PERFECTLY linear, except for that pesky thing, the irrational numbers...which happen to be a bit of a mutt in the purebred show, when you're talking about discrete intervals.


Yes, multiplication is a linear operation. And personal attacks have no place in a technical and/or civilized discussion, no matter who starts them. They just dilute the information content of the forum.
Title: AES 2009 Audio Myths Workshop
Post by: dwoz on 2010-04-02 03:11:07
In the digital domain I can easily attenuate a signal by 0.00001 dB. Show me how to do that in the analog domain.


Simple.  Rewind. Open the refrigerator. hit play.
Title: AES 2009 Audio Myths Workshop
Post by: Arnold B. Krueger on 2010-04-02 12:04:58
Multiplication is PERFECTLY linear, except for that pesky thing, the irrational numbers...which happen to be a bit of a mutt in the purebred show, when you're talking about discrete intervals.

Yes, multiplication is a linear operation.


Multiplication is either linear or nonlinear in terms of its effects on audio.

If you multiply the signal by an independent number, such as setting gain, then the effect of multiplication is linear.

If you multiply the signal by the signal itself, such as what happens when there is amplitude modulation distortion, then the effect of multiplication is nonlinear.

People have been talking about the signal modulating itself here lately, and this is of course an example of nonlinear distortion.

Title: AES 2009 Audio Myths Workshop
Post by: 2Bdecided on 2010-04-02 13:35:50
linear + a bit of noise

(and in audio, it's useful to think that way - rather than to say any system which adds noise is non-linear - even though mathematically this is the case)

Cheers,
David.
Title: AES 2009 Audio Myths Workshop
Post by: aclo on 2010-04-02 18:01:40
Multiplication is PERFECTLY linear, except for that pesky thing, the irrational numbers...which happen to be a bit of a mutt in the purebred show, when you're talking about discrete intervals.

Yes, multiplication is a linear operation.

Multiplication is either linear or nonlinear in terms of its effects on audio.
If you multiply the signal by an independent number, such as setting gain, then the effect of multiplication is linear.
If you multiply the signal by the signal itself, such as what happens when there is amplitude modulation distortion, then the effect of multiplication is nonlinear.

Well I won't enter into a contest of word redefinition, but let it be known that the first quoted line is most definitely not mine! Only the second is...
Title: AES 2009 Audio Myths Workshop
Post by: ExUser on 2010-04-02 18:10:21
Well I won't enter into a contest of word redefinition, but let it be known that the first quoted line is most definitely not mine! Only the second is...
Indeed. Please don't be so disingenuous, Arnold.