
AES 2009 Audio Myths Workshop

Reply #25
While I'm here, I may as well gore one of your sacred cows, and get myself on the "enemies" list.

Too bad that discussion of DBT has always been allowed here. You're not a vigilante just yet.

Quote
DBT.  What a wonderfully misunderstood thing she is!  Double-blind tests are a great, wonderful technique.  But employ them within a poorly designed test, and you will have bad data with high confidence.  Simply having a double-blind test does not mean your results will mean ANYTHING.  This is another myth that needs debunking!

Eh? Who is actually arguing here the benefit of poorly designed tests? Or are you trying to set up a strawman?



I prefer to call it a hypothetical that is commonly embodied in the real world.  You may call it a straw man, although a straw man argument usually has a nonsensical premise to it.  A straw man is a reductio ad absurdum, whereas my hypothetical has many real examples in the wild.

Anyway, it is quite common to find folks who have conducted DBT, where the basic premise of their experiment is wrong.  The DBT provides good, high confidence data, but if the assumptions around that data are bad, then the test result will be flawed.

It is very common to see inexperienced people trot out DBTs as if they irrevocably prove something in a definitive way.  Ethan would be one of these people.  Just the simple fact that the DBT tool was involved becomes a kind of self-proving feature of the test, when that is NEVER an assumption you can make.

I'll give an example.  In Ethan Winer's video, he "debunks" the notion that phase matters in the reproduction of audio.  He plays a little clip for us, which is one of those late-60s demo arrangements that were used to introduce "stereo" to the market.  The program material has bits and stuff flying all over the stereo field, with dramatic panning.  Ethan uses this "test" to demonstrate that we can't hear when phase anomalies are introduced.  If I were to redo his example as a proper test, using his program material in a proper DBT, I would indeed display a lack of ability to discern the phase anomalies reliably.  In his test, he would (he does) declare therefore that phase just isn't a problem.  But that's an example of a poor test that has high confidence.  Why?  Because phase defects in that context manifest themselves primarily as spatial differences.  His test design, using a proper DBT, has resulted in the conclusion that phase doesn't matter.  But the REAL result of his test design is that the DBT definitively proves not that phase doesn't matter, but that program material with active panning will MASK any perceivable differences due to phase error!

So what he's REALLY proven is that phase doesn't matter if I have a lot of active panning going on.  He has improperly generalized his result.

This is an example of a DBT used to pump the bona fides of a flawed test and flawed result.

dwoz


Reply #26
I prefer to call it a hypothetical that is commonly embodied in the real world.  You may call it a straw man, although a straw man argument usually has a nonsensical premise to it.  A straw man is a reductio ad absurdum, whereas my hypothetical has many real examples in the wild.

I think you need to look up the definition of a strawman.

Quote
Anyway, it is quite common to find folks who have conducted DBT, where the basic premise of their experiment is wrong.  The DBT provides good, high confidence data, but if the assumptions around that data are bad, then the test result will be flawed.

This is not a problem of DBT alone; it is a problem of science in general. It is also not an objection to the DBT protocol as such.

HA does not support the notion that, just because something is done by a DBT test, the data is irrefutable.
"We cannot win against obsession. They care, we don't. They win."


Reply #27
I prefer to call it a hypothetical that is commonly embodied in the real world.  You may call it a straw man, although a straw man argument usually has a nonsensical premise to it.  A straw man is a reductio ad absurdum, whereas my hypothetical has many real examples in the wild.

I think you need to look up the definition of a strawman.

Quote
Anyway, it is quite common to find folks who have conducted DBT, where the basic premise of their experiment is wrong.  The DBT provides good, high confidence data, but if the assumptions around that data are bad, then the test result will be flawed.

This is not a problem of DBT alone; it is a problem of science in general. It is also not an objection to the DBT protocol as such.

HA does not support the notion that, just because something is done by a DBT test, the data is irrefutable.



Just to be safe, I went to the Well of Knowledge, the Wikipedia, and re-read what they have for "straw man".  It seems that I'm guilty of loose paraphrasing.  Both my statements about straw men are accurate for what they are; they just don't represent a particularly expansive definition.  In fact, the wiki definition seems to have been written in honor of Ethan, with him as the exemplar of the phenomenon.

If anything, I have tried very hard to be MORE rigorous in exactly converging on Ethan's specific statements, and have assiduously avoided creating straw men in my debate with him.

It seems we are then in agreement about DBTs....I certainly don't have any objections to them or their validity, and am not implying that there are any.  My objection is to this deification of them that I see everywhere.

dwoz


Reply #28
My objection is to this deification of them that I see everywhere.
It's not that - it's that almost any testing without a DBT is pointless. It's a basic, fundamental prerequisite. It doesn't guarantee that the test will be any good - but its absence usually guarantees the test is worthless.

Cheers,
David.


Reply #29
My objection is to this deification of them that I see everywhere.
It's not that - it's that almost any testing without a DBT is pointless. It's a basic, fundamental prerequisite. It doesn't guarantee that the test will be any good - but its absence usually guarantees the test is worthless.

Cheers,
David.



Not sure I quite agree.

In many cases, absolutely, yes.

The fact that expectation bias CAN be a factor in no way demands that it is.  Let me rephrase: "it doesn't guarantee that the test will be any good, but its absence usually guarantees that the peer review is scathing."


Reply #30
My objection is to this deification of them that I see everywhere.
It's not that - it's that almost any testing without a DBT is pointless. It's a basic, fundamental prerequisite. It doesn't guarantee that the test will be any good - but its absence usually guarantees the test is worthless.
Not sure I quite agree.

In many cases, absolutely, yes.

The fact that expectation bias CAN be a factor in no way demands that it is.
Well, there's one way to make sure, isn't there?

If that's not worth doing, then the test obviously isn't worth doing.

More common is that people claim that "the difference is so obvious that there's no need to ABX" - sometimes true - but it also means that a 16-trial ABX test would be trivial to complete in less than a minute, so in this case the answer would be "stop wasting your time and ours and just get on with it!"
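The statistics behind that 16-trial figure are easy to check: under the null hypothesis every ABX trial is a fair coin flip, so the probability of a given score arising from guesswork alone is a one-sided binomial tail. A minimal sketch in plain Python (the 12- and 13-correct scores are just illustrative):

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided binomial tail: the probability of scoring at least
    `correct` out of `trials` by guessing alone (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# For a 16-trial test, 12 correct is already hard to explain by luck:
print(round(abx_p_value(12, 16), 4))  # 0.0384
print(round(abx_p_value(13, 16), 4))  # 0.0106
```

At 12 of 16 the guessing explanation is already below the conventional 5% level, which is why a quick run carries real weight.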

Quote
Let me rephrase: "it doesn't guarantee that the test will be any good, but its absence usually guarantees that the peer review is scathing."
Depends on where you publish the test. Without a DBT, it wouldn't get published in a pharmaceutical journal, whereas the proudly proclaimed absence of an ABX test would probably cause it to be fawned over in Stereophile, for example.

Keep doing it here and you'll just get banned.

I suppose we'll grant an exception for tests where the listener claims there's no audible difference - not much point ABXing that - though it might help the listener to focus, and sometimes helps the listener to make "lucky" guesses that turn out to be based on something so subtle they thought they were imagining it. This is the exception, rather than the rule, but it happens.

Cheers,
David.


Reply #31
I suppose we'll grant an exception for tests where the listener claims there's no audible difference - not much point ABXing that -
Cheers,
David.


oh, boy...you just stepped on a nail.

ABX removes expectation bias, correct?  Well, what makes you think that "the listener claims there's no audible difference" isn't ITSELF an expectation bias?  In fact, very commonly, that IS Ethan Winer's expectation bias.  It would be important to use ABX where the respondent knew that there would be control samples, to eliminate negative expectation bias.

again, we wrap around to test design.


Reply #32
ABX removes expectation bias, correct?


Incorrect.  An ABX test removes the ability of expectation bias to generate a false positive result.


Reply #33
oh, boy...you just stepped on a nail.

ABX removes expectation bias, correct?  Well, what makes you think that "the listener claims there's no audible difference" isn't ITSELF an expectation bias?  In fact, very commonly, that IS Ethan Winer's expectation bias.  It would be important to use ABX where the respondent knew that there would be control samples, to eliminate negative expectation bias.

An ABX test removes one form of expectation bias from influencing the results, whereas a sighted test, for example, removes none. If you have a better proposal, please continue.

Quote from: mixerman
I realize the answer is somewhat subjective, but I promise you there's an objective purpose to asking the question.

I'll answer 'no'. Let's get to the purpose.
"We cannot win against obsession. They care, we don't. They win."


Reply #34
I suppose we'll grant an exception for tests where the listener claims there's no audible difference - not much point ABXing that -

oh, boy...you just stepped on a nail.

Ouch! Seems to be a valid point. You can "fool" an ABX test by not listening and choosing answers at random. Maybe we need an ABCX test where C is something that is known to be marginally non-transparent to probe whether listeners are paying attention.
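A rough sketch of that ABCX idea: hide a few positive-control trials (comparisons against the known, marginally audible change C) inside an ordinary ABX session, and screen out sessions where the listener misses the controls. The session sizes and the 80% threshold below are illustrative assumptions, not an established protocol:

```python
import random

def make_session(n_real, n_control, seed=None):
    """Randomly interleave real ABX trials with hidden control trials
    that compare against a known, marginally audible difference."""
    rng = random.Random(seed)
    trials = ["real"] * n_real + ["control"] * n_control
    rng.shuffle(trials)
    return trials

def passes_attention_check(control_correct, n_control, threshold=0.8):
    """Keep the session only if most control trials were answered
    correctly; a listener answering at random will usually fail this."""
    return control_correct >= threshold * n_control

session = make_session(16, 4, seed=1)
print(session.count("real"), session.count("control"))  # 16 4
```

A failed check flags an inattentive (or insensitive) session, so a string of wrong answers can be distinguished from a genuine true negative.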



Reply #36
Ouch! Seems to be a valid point. You can "fool" an ABX test by not listening and choosing answers at random. Maybe we need an ABCX test where C is something that is known to be marginally non-transparent to probe whether listeners are paying attention.


This has the potential to generate a false negative result - not a problem.  See this old thread which I posted in for a discussion: http://www.hydrogenaudio.org/forums/index....st&p=401215


Reply #37
ABX removes expectation bias, correct?


Incorrect.  An ABX test removes the ability of expectation bias to generate a false positive result.


Well now there's a distinction without a difference! My, doesn't that nail smart.

Sure. Have it your way. If Ethan is convinced the differences in converter fidelity are "subtle" based on measurements, then why doesn't expectation bias apply to him where listening confirmation is concerned? He never did an ABX to back up his claims (and he's made so many I don't know where to begin), and he actually claimed HE was the scientific control in his own listening test.

Here is the question I asked Ethan On Gearslutz: "When you did that experiment with the Soundblaster card, and you came to your conclusions (the new conclusion or the old one, doesn't matter), what exactly was your control?"

Here is Ethan's response: "My control was simply my own assessment that the recorded playback sounded the same as the source."

You guys find that scientific? You guys think it's acceptable that Ethan is his own control in a test that isn't blind?

Personally, I have no problem with evaluating gear in that manner for personal use, but if one is going to at all times hold others up to an ABX standard, then one should hold themselves up to that very same standard. Oh, and their peers as well.

Quote from: mixerman
I realize the answer is somewhat subjective, but I promise you there's an objective purpose to asking the question.

I'll answer 'no'. Let's get to the purpose.


Let's not speak for everyone, now! I'll give it a day and see if anyone is willing to attempt to define a great mix.

Enjoy,

Mixerman


Reply #38
I'm sorry...I fail to see where the specificity in your distinction matters.  Aren't we ALWAYS talking about results?


If we're talking about only the results (and not what's going through the head of the person performing the test) then let's talk about the 4 potential results of ABX testing.  Please assume an audio comparison for the purposes of discussion.

1. True negative - An audible difference does not exist, and none was identified by the listener.
2. True positive - An audible difference exists, and was identified by the listener.
3. False negative - An audible difference exists, but the listener failed to identify it.
4. False positive - An audible difference does not exist, but the listener appears to have identified a difference.

Do you have any quibbles so far?
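Of those four cases, the ABX pass criterion exists to bound case 4. A small Monte Carlo sketch, using an assumed (but conventional) 12-of-16 pass rule:

```python
import random

rng = random.Random(42)

def guessing_score(trials):
    """Correct answers from a listener who hears nothing and guesses."""
    return sum(rng.random() < 0.5 for _ in range(trials))

sessions = 100_000
false_positives = sum(guessing_score(16) >= 12 for _ in range(sessions))
print(false_positives / sessions)  # hovers near the exact binomial tail, ~0.038
```

Case 3, the false negative, is the one the criterion does not bound, which is why negative results need more careful wording.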


Reply #39
ABX removes expectation bias, correct?


Incorrect.  An ABX test removes the ability of expectation bias to generate a false positive result.



I'm sorry...I fail to see where the specificity in your distinction matters.  Aren't we ALWAYS talking about results?




"False positive result" = measured a difference that isn't there. For example due to placebo.

"False negative result" = failed to measure a difference that is there.


Since the interesting concept of "difference" is "audible difference", we are not interested in the cases where there are differences which no-one can hear. A "false negative" analogy of placebo could be that the listeners know what two setups they are considering, and fail to notice differences because they believe that there are none.

Say, assume the listeners do not believe in cable differences, and you tell the listeners that the speaker cables are the only thing different; then they might very well fail to spot real "audible differences" -- i.e. differences they might otherwise have heard and identified correctly.
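That scenario can be put in rough numbers. Suppose the difference is real enough that an attentive listener scores 80% per trial, while a listener convinced there is nothing to hear only attends on half the trials and guesses on the rest (per-trial accuracy 0.5 * 0.8 + 0.5 * 0.5 = 0.65). Both figures, and the 12-of-16 pass rule, are illustrative assumptions:

```python
from math import comb

def pass_probability(p_correct, trials=16, needed=12):
    """Chance of reaching the pass criterion given per-trial accuracy."""
    return sum(
        comb(trials, k) * p_correct ** k * (1 - p_correct) ** (trials - k)
        for k in range(needed, trials + 1)
    )

attentive = pass_probability(0.8)      # genuinely listening: ~0.80
half_hearted = pass_probability(0.65)  # believes there's nothing: ~0.29
print(round(attentive, 2), round(half_hearted, 2))
```

The same physical difference drops from a likely pass to a likely fail purely through the listener's prior, which is the false-negative analogue of placebo described above.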



And before hitting reply, read the spoiler:
Did I imply that cables would make audible differences? No, but I do imply that different setups might sound different even if the tester fools the listeners by a damn lie.




Edit:
I need to type quicker than this, I guess ...


Reply #40
Ouch! Seems to be a valid point. You can "fool" an ABX test by not listening and choosing answers at random. Maybe we need an ABCX test where C is something that is known to be marginally non-transparent to probe whether listeners are paying attention.


This has the potential to generate a false negative result - not a problem.  See this old thread which I posted in for a discussion: http://www.hydrogenaudio.org/forums/index....st&p=401215



Lacking controls for cohort competence would seem to me to be grounds to invalidate a negative assumption from the test, wouldn't it?  Particularly when the result is being expressed POSITIVELY, i.e. instead of "no respondents could positively identify...." the result is expressed as "differences are not audible"?


Reply #41
Ethan ... makes a conjecture about the required measurements to fully and completely describe the fidelity of audio.  According to him, there's four.
Can we split this thread, and have a discussion about that in a separate thread, here on HA, if anyone else is interested?
I guess no one was interested - which is a shame, because this current thread is becoming more pointless by the minute.

I would second your point except that I have not yet really taken the time to analyze what Ethan has said on the matter in detail. The "conjecture" thing is not new - I noticed it in an article he wrote (skeptic.com?).


Reply #42
ABX removes expectation bias, correct?  Well, what makes you think that "the listener claims there's no audible difference" isn't ITSELF an expectation bias?  In fact, very commonly, that IS Ethan Winer's expectation bias.  It would be important to use ABX where the respondent knew that there would be control samples, to eliminate negative expectation bias.

again, we wrap around to test design.
It would really almost be silly for "difference believers" like (presumably) yourself to refuse the chance to volunteer as Ethan's test subjects at this point, wouldn't it?
Yet, I somehow don't think he'd have volunteers queuing up in his front yard, even if he posted fliers.

Instead, we're much more likely to find difference-believers huddled up in foreign corners of online activity singing battle-hymns against any form of blind or double-blind testing. That's more self-fulfilling prophecy in action than anything, is it not?
elevatorladylevitateme


Reply #43
ABX removes expectation bias, correct?  Well, what makes you think that "the listener claims there's no audible difference" isn't ITSELF an expectation bias?  In fact, very commonly, that IS Ethan Winer's expectation bias.  It would be important to use ABX where the respondent knew that there would be control samples, to eliminate negative expectation bias.

again, we wrap around to test design.
It would really almost be silly for "difference believers" like (presumably) yourself to refuse the chance to volunteer as Ethan's test subjects at this point, wouldn't it?
Yet, I somehow don't think he'd have volunteers queuing up in his front yard, even if he posted fliers.

Instead, we're much more likely to find difference-believers huddled up in foreign corners of online activity singing battle-hymns against any form of blind or double-blind testing. That's more self-fulfilling prophecy in action than anything, is it not?


I wonder who you might be talking about?  I presume you're talking about me, but that doesn't sound like me. 

Being a test subject isn't silly.  Walking into a trap is.  At this point, Ethan is not interested in discovering if his conjecture is true or not, he's interested in avoiding proof that it isn't.  Doing a test designed by him is the same as Barack Obama walking into a GOP-sponsored roundtable on "how much does the Democrat President suck, and could he possibly suck more?"



Reply #44
Lacking controls for cohort competence would seem to me to be grounds to invalidate a negative assumption from the test, wouldn't it?  Particularly when the result is being expressed POSITIVELY, i.e. instead of "no respondents could positively identify...." the result is expressed as "differences are not audible"?

Sure - from a statistically valid viewpoint. I'm not sure this is a majority viewpoint on HA, but I think that all personal interpretation of statistically controlled listening tests requires some degree of stepping "out of the box" - outside of statistical science. All properly conducted listening tests have meaning - it's just that some meanings are easier to apply to one's situation than others. Negative results are harder to interpret, but to ignore them entirely is just nuts.

If I perform an ABX test myself, for my own benefit, using the best listening environment available to me, I would be extremely hard-pressed to reasonably dismiss the results of my tests as being invalid without invoking all kinds of ad-hoc claims about how blind testing screws things up. The success or failure of such a test can be almost directly interpreted in terms of my own listening abilities. That same ABX test means less to anybody else, and perhaps far less to some people.

The results involving a group of listeners are obviously open to additional discussion. Nobody's going to seriously claim, for instance, that the blind tests conducted during the development of DAB, resulting in claims of audible transparency in the 128-192k range (right?), are anything except laughable. And even if somebody comes up with a positive ABX result, there may be good reasons to dismiss the result and not worry about the effect under test, if one asserts that one's hearing is simply not good enough to matter. (In my case, I've ABX'd absolute polarity in the past, but ignored it after that, because of the insanely low level of effect and interference from transducers.)

When you have a large group of people tested to yield a negative result in what is essentially a well-run test - and this is the case with Meyer/Moran - you obviously cannot use the statistics to conclude that the effect is inaudible, but you also can't use the statistics to justify that anybody ought to care about it, either. But that's more or less the conclusion a lot of people (falsely) get from such results: that just because a hypothesis is statistically unproven (negative results after numerous controlled tests), anybody who believes in the hypothesis, using those results as justification, is nuts. The madness of such a conclusion is a little easier to spot if, instead of high res, we were conducting an ABX test of two identical glasses of water that had previously been mixed. Nobody can logically conclude from the negative results of such a test that the water in both glasses was identical, and yet one's logic is not generally called into question for such a belief - the negative test result is a significant piece of evidence in one's justification of why the glasses are identical.

On a more technical level, I believe that the proportion of discriminators in most controlled audio tests is usually vastly underestimated, and in fact, in many situations, is pretty close to 1.
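For what it's worth, the usual back-of-envelope model behind "proportion of discriminators" is a two-group mixture: a fraction q of listeners reliably hear the difference and the rest guess at 50%, giving pooled per-trial accuracy a = q + (1 - q)/2 and hence q = 2a - 1. A sketch under that (admittedly crude) assumption:

```python
def discriminator_fraction(pooled_accuracy):
    """Chance-corrected estimate of the fraction of true discriminators,
    assuming everyone else answers at exactly 50%."""
    return max(0.0, 2 * pooled_accuracy - 1)

print(round(discriminator_fraction(0.55), 2))  # 55% pooled accuracy -> 0.1
print(round(discriminator_fraction(0.95), 2))  # 95% pooled accuracy -> 0.9
```

Real panels are messier (discriminators are not perfect, guessers are not exactly at chance), so this is a crude estimate rather than a measurement.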


Reply #45
Lacking controls for cohort competence would seem to me to be grounds to invalidate a negative assumption from the test, wouldn't it?  Particularly when the result is being expressed POSITIVELY, i.e. instead of "no respondents could positively identify...." the result is expressed as "differences are not audible"?


I recommend reading on the null hypothesis concept.  If I get your meaning (and I'm not sure that I do) you appear to have this backwards.  I certainly did - looking at my previous post from the thread I linked I see that I described the distinction between hypothesis and result incorrectly.  Fortunately enough for me, I did include a link to a null hypothesis wiki article at the bottom (so I didn't look like a TOTAL nabob).

The null hypothesis of an ABX test assumes that no difference will be detected.  It is the successful result - a detected difference - that carries the positive wording.

Null hypothesis - "There exists no audible difference between the items being measured".  If a difference is identified, the null hypothesis is falsified (rendered false) and the opposite statement can be made.
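In ABX terms, "falsifying the null" happens at the smallest score whose guessing probability drops to the chosen significance level. A sketch (the 5% level is an assumed convention):

```python
from math import comb

def critical_score(trials, alpha=0.05):
    """Smallest number of correct answers at which the one-sided binomial
    tail under guessing (p = 0.5) is <= alpha, i.e. the score that
    rejects the 'no audible difference' null hypothesis."""
    for k in range(trials + 1):
        tail = sum(comb(trials, j) for j in range(k, trials + 1)) / 2 ** trials
        if tail <= alpha:
            return k
    raise ValueError("alpha is smaller than the all-correct tail probability")

print(critical_score(16))  # 12: scoring 12/16 or better falsifies the null
```

Below that score the null simply survives; it is not thereby confirmed, which is the asymmetry under discussion here.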

Please help me out with your post above, because I don't follow you for these things:
1. What to you mean by "invalidate a negative assumption"?  Outside of the null hypothesis, there shouldn't be assumptions.
2. "Result is being expressed POSITIVELY" doesn't mesh well in my head with "difference are not audible" - the latter is a negative statement.
3. What precisely are you worried about with respect to "cohort competence"?  (since this is part of my 1st question, it may not be necessary to help me here - once I've seen that part of the discussion)


Reply #46
you'll see time and again claims that "Ethan was proven wrong" even though nobody actually proved anything of the sort.

Just since I wrote that yesterday, I see many instances of the same accusations without evidence, and of course plenty of completely wrong facts and disingenuous claims:

Ethan started to trash our place, not the opposite, and I can tell you he doesn't say everything about the reasons he was banned.

Apparently neither do you. Please explain why I was banned from the Womb. Links to relevant posts are always welcome.

the thread over at the Womb, where Ethan Winer's now-famous conjecture about the measurability and audibility of audio and the transparency of audio equipment [was debated], is titled "Pathetic" ... That thread title was created and CHOSEN BY ETHAN HIMSELF.

More lies. Yes, I used that name for the second thread title, after you "pathetically" locked the first thread once it was clear you could not defend your claims.

Quote
BAD FORM, Ethan, for being intellectually and technically sloppy in your work.  What could be worse, than a mythbuster that simply substitutes his own myth for the one he's debunking? Ethan makes a classic and easy mistake in his assumptions [blah blah blah] he has been shown to be factually wrong on every single claim he's made.

More claims that Ethan is wrong with not one shred of proof.

In Ethan Winer's video, he "debunks" the notion that phase matters in the reproduction of audio.  He plays a little clip for us, which is one of those late-60s demo arrangements that were used to introduce "stereo" to the market.  The program material has bits and stuff flying all over the stereo field, with dramatic panning.

You are ignoring on purpose the solo cello example which played first, and ignoring the fact that both of those demos were to show that phase shift is audible while it's changing and when the shift is different left and right! The demo that shows phase shift is not audible starts at 49:33 into the video, and uses the percussion breakdown from my Tele-Vision video as a source.

Quote
So what he's REALLY proven is that phase doesn't matter if I have a lot of active panning going on.  He has improperly generalized his result.

What you have proven yet again is you are a master of straw man arguing because the examples you cite are not what you say they are.

I have tried very hard to be MORE rigorous in exactly converging on Ethan's specific statements, and have assiduously avoided creating straw men in my debate with him ... the wiki definition [of straw man] seems to have been written in honor of Ethan, with him as the exemplar of the phenomenon.

How can you claim to be "MORE rigorous" when you got it totally backward? My example that you said shows phase shift as inaudible was in fact the demo that shows that it is audible. And you claim I'm the model of straw man arguing.

You've allowed Ethan to claim we moderated him at the Womb because we somehow can't debate him, and here you are moderating me.

The defining difference is that my posts at the Womb address technical issues, or matters of logic, while your posts here seem to be mindless whining about personalities and authority about who knows how to mix an album. Can you really not see the difference?

Ethan is not interested in discovering if his conjecture is true or not, he's interested in avoiding proof that it isn't.

Translated: "Ethan is wrong but I'll be damned if I can show even one example."

Thanks dwoz and MM for making my points for me.

--Ethan
I believe in Truth, Justice, and the Scientific Method


Reply #47

No, Ethan.    YOU selected that thread title.  I'll go pull server logs and give you the PROOF.

No, Ethan.  You use that section of the video to show that phase shift is ONLY AUDIBLE WHEN IT'S CHANGING, and specifically say it ISN'T when it stops changing.  YOU are taking that out of context, not me.  Oh, and you're wrong as well.  It is QUITE audible in the cello.

RE: Proof.  Always, you say "you've not supplied a shred of proof!"  Well... not in THIS THREAD.  I've written upwards of 50 pages of direct proof about your claims.

Oh, and THIS POST OF YOURS is a straw man, by its very definition.  A classic example.  I've given plenty of examples.  I hardly think the readership here is interested in it, in this thread.  It exists, it passes peer review.

It's really too bad that you botched the job of debunking the audiophiles, because it's an important task.  Now all those idiots get to point out YOU as an example of why all the ABX'ers are missing the point (which they're not).


Reply #48
I've written upwards of 50 pages of direct proof about your claims.

Excellent, please pare it down to one or two sentences per "Ethan error," and post here for all to comment on.

As for phase shift, you need to review that section of my video. You clearly misunderstood what's being demo'd.

--Ethan
I believe in Truth, Justice, and the Scientific Method

 


Reply #49
I've written upwards of 50 pages of direct proof about your claims.

Excellent, please pare it down to one or two sentences per "Ethan error," and post here for all to comment on.

As for phase shift, you need to review that section of my video. You clearly misunderstood what's being demo'd.

--Ethan



Perhaps I can invite you to do so?  I believe I did do a digest post or two... maybe I can dig them out of the lists.


The part that concerns me is that one doesn't even have to go to tests to disprove your conjectures.  They fall on general theory.  The other thing that concerns me is that the debunking only required someone with my level of expertise, which is not terribly high.  I.e., if EVEN I can debunk you, then how will you fare against a REAL opponent?  You'll be cut to shreds and fed to the dogs (or audiophiles, those with a taste for gristle).

What I would actually like to see is for you to take my points and offer reasoned, topical rebuttals that show exactly where my position is incorrect.  Not simply "you're wrong, show proof"...  How does Albert Einstein rebut high school senior Albert Feldstein, who thinks he's disproved general relativity?  By deconstructing the challenge in a step-by-step rebuttal.  You have NEVER done that, not once.  How do you support your position, besides saying "I'm just correct, trust me"?

And let's please not let the debate turn into a discussion about the debate?  That's also a place that I have NO INTEREST in going any longer.

dwoz