
Topic: Overcoming the Perception Problem

Overcoming the Perception Problem

Reply #75
Could it be shown already that results from double-blind listening tests correlate with the results of sighted tests over a large enough pool of listeners and different setups?


I think we've already seen some (probably unintended) very large-scale sighted-test collections to judge by.

The most recent one was the introduction of DVD-A and SACD.

Understanding of the relevant perceptual mechanisms and DBTs were accurate predictors of their success in the mainstream marketplace: They failed.

There was also an apparently unintended sub-experiment, in which about half the media brought to market in those so-called hi-rez formats turned out to be at, or merely approximating, the technical performance of the medium they purported to upgrade: CD audio. Nobody got the joke from pure listening evaluations until people started blowing the whistle based on technical testing.

I take that as a reiteration of the original DBTs that basically said no audible differences due to the technically enhanced medium.

I conclude that if you want to make money by bringing some purported technical improvement in sound quality to market, run ABX tests first; if they strongly tend toward null results, save your time, money, and career, and leave your proposed enhancement in the lab.

Overcoming the Perception Problem

Reply #76
Self-reporting about one's mental state surely carries some issues.

1) Is it conceivable that "hirez audio"/"snake-oil cables"/... is somehow registered by low-level audio perception, then passed on to some subconscious part of our brain, but never to the conscious part? I guess it is within "conceivable". What would this mean in practice? It would imply that every audiophile claim about "A sounding better than B" was delusional, as their conscious brain would never have access to this information. It might mean that those who happen to listen via carbon-nano-kevlar cables are somehow "happier" than those of us who purchase simple stuff. Or it might mean that those individuals are more inclined to want a Coke after listening than the rest of us. One might devise tests that partially probe this, but I think the search space and the probability of interesting results make it a bad career move.

2) Is it conceivable that some individuals react to the testing environment by decreasing their sensitivity to phenomena that they can otherwise easily distinguish? Perhaps. But involuntary (unwitting) participation in experiments does not seem to support it. Furthermore, if your capabilities are shaken by sitting alone in front of the ABX plugin of foobar, how are you ever able to listen critically?

I don't get your point about not understanding the bias removal. When I spend hard-earned money on wine or loudspeakers or whatever, I want to know what I am getting. I want to know if it tastes different to me, or if it is perhaps a case of "the emperor's new clothes". I know that I am prone to such biases, and I want to test the product using perception both with and without them.

-k

Overcoming the Perception Problem

Reply #77
Please enlighten me. I am not a scientist nor have I had any training or study in this area. What positive and negative controls would one use to avoid this particular problem of bias I just mentioned?  Please be specific, thanks.


Start with two signals that are by any reasonable measure vastly different.  Then slightly less different.  Then slightly less different again.  Repeat in increments until the listener starts 'guessing' or their 'bias toward no difference' kicks in.
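The descending-difference procedure just described can be sketched in code. This is only a hedged illustration, not anyone's published protocol: the function names (`binom_p_one_sided`, `smallest_detectable_level`) and the trial counts are invented for the example.

```python
import math

def binom_p_one_sided(hits, trials, p=0.5):
    """P(X >= hits) for X ~ Binomial(trials, p): the chance a pure
    guesser does at least this well."""
    return sum(math.comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(hits, trials + 1))

def smallest_detectable_level(results, alpha=0.05):
    """results: list of (difference_level, hits, trials), ordered from
    the largest difference down.  Returns the smallest level at which
    the listener still beats chance at significance alpha, or None if
    they never do."""
    detectable = None
    for level, hits, trials in results:
        if binom_p_one_sided(hits, trials) < alpha:
            detectable = level
        else:
            break  # performance has fallen to guessing; stop descending
    return detectable

# Hypothetical session: differences shrink until scores hit chance.
session = [(6.0, 16, 16), (3.0, 14, 16), (1.5, 12, 16), (0.75, 9, 16)]
threshold = smallest_detectable_level(session)  # 1.5 in this made-up data
```

A listener who tracks the difference down to small levels and then falls to chance at some point has supplied both a positive control (they can hear the easy cases) and a negative control (they guess on the impossible ones).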

Seriously, this is a non-issue for most listeners.  In most cases I've read about where the DBT result was 'no support for the null H', the listener *believes* they hear a difference both before and *during the test* as well.  In other cases they complain that the difference they thought they heard 'sighted' suddenly seems harder to hear when they're listening blind.  In neither case is 'bias towards not hearing' a credible factor.

Btw, have you read Pio's HA sticky thread about blind listening tests?

http://www.hydrogenaudio.org/forums/index....showtopic=16295

 

Overcoming the Perception Problem

Reply #78
Please enlighten me. I am not a scientist nor have I had any training or study in this area. What positive and negative controls would one use to avoid this particular problem of bias I just mentioned?  Please be specific, thanks.


Start with two signals that are by any reasonable measure vastly different.  Then slightly less different.  Then slightly less different again.  Repeat in increments until the listener starts 'guessing' or their 'bias toward no difference' kicks in.

Good example. Thanks. You then cherry pick the test subjects to get rid of the bad apples, I guess.

Can't say I recall ever reading of a DBT in the audio press that did the pre-screening you've just described, but I hope it, or some similar procedure to preclude this particular bias, is standard procedure in the academic world. [I can't edit my original post at this late date; however, I didn't stress enough that the "mischievous" behavior of the listener may (possibly) be at a subconscious level. He/she would pass a lie-detector test that they were "doing their best"; that is, they aren't "frauds".]

Quote
Seriously, this is a non-issue for most listeners.
So rather than applying the time-consuming control you just described, we could simply ask potential participants if they might be biased on a conscious or subconscious level, instead.  Ha-ha!

Quote
Btw, have you read Pio's HA sticky thread about blind listening tests?


Yes. Take Rule #1 for example:
Quote
Rule 1 : It is impossible to prove that something doesn't exists. The burden of the proof is on the side of the one pretending that a difference can be heard.
If you believe that a codec changes the sound, it is up to you to prove it, passing the test. Someone pretending that a codec is transparent can't prove anything.


So even though random results don't prove anything, which I agree is correct, you seem to think that statistical analysis may be applied to them?! Huh? That's what I don't get. If no firm conclusion can be drawn one way or the other, how on earth can one describe the probability/certainty of "no evidence/proof found, at least in this instance"? NOTHING was established and nothing was proven, so how can you describe the certainty of this "nothing", this randomness, as a percentage?! For all we know, the randomness is caused by problems with the test design, such as A, B, and/or C, all things which can be immediately ruled out if the results go the other way (where the listener successfully hears a difference most or all of the time), and we have no way of knowing the "percentage of likelihood" of problems such as A, B, or C. [But I do like that control you suggested for B. Bravo!]

Overcoming the Perception Problem

Reply #79
So even though random results don't prove anything, which I agree is correct, you seem to think that statistical analysis may be applied to them?! Huh? That's what I don't get. If no firm conclusion can be drawn, one way or the other, how on earth can one describe the probability/certainty of "no evidence/proof found, at least in this instance".


It's probably helpful if you explain how you got from "proving a negative" to "knowledge does not exist".  Some of those steps may be questionable.

Leaving aside whatever broader metaphysical point you were grasping at, I think the meaning of Pio's quote is that no one can prove that you can't hear something, so it's up to you to show that you can.

Overcoming the Perception Problem

Reply #80
Please enlighten me. I am not a scientist nor have I had any training or study in this area. What positive and negative controls would one use to avoid this particular problem of bias I just mentioned?  Please be specific, thanks.


Start with two signals that are by any reasonable measure vastly different.  Then slightly less different.  Then slightly less different again.  Repeat in increments until the listener starts 'guessing' or their 'bias toward no difference' kicks in.


Good example. Thanks. You then cherry pick the test subjects to get rid of the bad apples, I guess.


um....no. Unless by 'bad apples' you mean people with significant hearing loss, which this control will indeed identify. Look, you've already admitted you have no knowledge of how the science is done.  I was merely telling you what Woodinville meant by 'positive control', and he's right: a rigorous experiment typically employs a positive as well as a negative control.


Quote
Can't say I recall ever reading of any DBT in the audio press which ever did this pre-screening you've just described, but I hope it or some similar procedure to preclude this particular bias is standard procedure in the academic world. [I can't edit my original post at this late date, however I didn't stress enough that the "mischievous" behavior of the listener may be (possibly) at a subconscious level. He/she would pass a lie detector test that they were "Doing their best", that is; they aren't "frauds".]


Do you recall the audio press DBT subjects ever NOT claiming they heard a difference, sighted?  I don't.

So, against the body of perceptual psychology data, tallying the ways humans are alert for 'difference' whether it exists or not, you posit a population who consciously assert things sound different, yet unconsciously think they sound the *same*.



Overcoming the Perception Problem

Reply #81
Let me get this straight, the subconscious mind is overruling decisions by the conscious mind that is presumably able to detect a difference?  What is the basis for such a counter-intuitive notion?

Overcoming the Perception Problem

Reply #82
So even though random results don't prove anything, which I agree is correct, you seem to think that statistical analysis may be applied to them?! Huh? That's what I don't get. If no firm conclusion can be drawn, one way or the other, how on earth can one describe the probability/certainty of "no evidence/proof found, at least in this instance".


It's probably helpful if you explain how you got from "proving a negative" to "knowledge does not exist".  Some of those steps may be questionable.

Sorry, I can't help you there, since I don't know what it is that I wrote that you equate with "knowledge does not exist."

Quote
Leaving aside whatever broader metaphysical point you were grasping at, I think the meaning of Pio's quote is that no one can prove that you can't hear something, so its up to you to show that you can.


I'm pretty sure Pio meant exactly what James Randi means when he says "You can't prove a negative." Randi "You can't prove a negative"

You can test 1000 subjects, listeners (or reindeer), and all it shows is that none of them, on that day under those test conditions, could hear a difference with any statistical significance beyond random guessing. If, however, they show an ability to hear a difference with strong statistical significance, then the test does prove (or at least presents evidence for) the conclusion that on that day, with that music, etc., some people can indeed hear a difference, and we are 95% confident this wasn't just dumb luck.
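The "95% confident it wasn't dumb luck" arithmetic is just the one-sided binomial tail: how often a pure guesser would score at least as well. A minimal sketch, with a made-up function name and invented trial counts for illustration:

```python
import math

def p_at_least(hits, trials):
    """Probability that a coin-flipping guesser gets `hits` or more
    of `trials` two-alternative forced-choice trials right."""
    return sum(math.comb(trials, k) for k in range(hits, trials + 1)) / 2 ** trials

# 15 of 20 correct: a guesser does this well only ~2% of the time,
# so at the conventional 5% level the result is called significant.
significant = p_at_least(15, 20)   # ~0.021

# 12 of 20 correct: a guesser does this well ~25% of the time,
# so the test is inconclusive -- which is not the same as "proved inaudible".
inconclusive = p_at_least(12, 20)  # ~0.252
```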


Overcoming the Perception Problem

Reply #83
Sorry, I can't help you there, since I don't know what it is that I wrote that you equate as "knowledge does not exist."


Easy enough then.

I'm pretty sure Pio meant exactly what James Randi means when he says "You can't prove a negative." Randi "You can't prove a negative"


Most people probably aren't going to watch a 10-minute video just to figure out what you're trying to say, so if it's important, you might want to explain yourself.

You can test 1000 subjects, listeners (or reindeer), and all it shows is that none of them, on that day under those test conditions, could hear a difference with any statistical significance beyond random guessing. If, however, they show an ability to hear a difference with strong statistical significance, then the test does prove (or at least presents evidence for) the conclusion that on that day, with that music, etc., some people can indeed hear a difference, and we are 95% confident this wasn't just dumb luck.


That's correct.  What was it you didn't understand?

Overcoming the Perception Problem

Reply #84
Please enlighten me. I am not a scientist nor have I had any training or study in this area. What positive and negative controls would one use to avoid this particular problem of bias I just mentioned?  Please be specific, thanks.


Start with two signals that are by any reasonable measure vastly different.  Then slightly less different.  Then slightly less different again.  Repeat in increments until the listener starts 'guessing' or their 'bias toward no difference' kicks in.


Good example. Thanks. You then cherry pick the test subjects to get rid of the bad apples, I guess.


um....no. Unless by 'bad apples' you mean people with significant hearing loss, which this control will indeed identify.

Yes, that's essentially what I meant. The bad apples you weed out in pre-screening are the ones who don't show an ability to differentiate between the two sources with a pre-established small difference that should be audible to most.

You don't know if that is due to their poor hearing, malicious intent to skew the test, a lack of understanding of how to vote or what they are supposed to do, or a preconceived notion (a bias) that it is ridiculous to think power cords on CD players make an audible difference, so that they lackadaisically select A vs. B without really giving it their all [even though, unbeknownst to them, they haven't even started on the real pairings yet, since they are still in a pre-screening stage designed to weed the biased people out!].

Quote
Look, you've already admitted you have no knowledge of how the science is done.

No, I said I wasn't a scientist. I do, however, understand that a good scientific experiment should have ways to prevent all forms of bias, even if that bias is at a subconscious level and/or one thinks "that sort of bias isn't likely to affect the results". There may be unforeseen, un-thought-through reasons why it does have an impact. Test subjects not giving their full, 100%-focused attention, or rushing to answer compared to the rest, was just one example.

The whole reason we do double-blind rather than single-blind testing is this exact reason. There's no reason to think that a competent test administrator handing out the test forms and pencils would act or speak in a way that influences the subjects or gives away the identity of A or B; however, to be absolutely sure there's nothing we may have overlooked, we blind them too!

We don't just get rid of the forms of bias we suspect might have an impact; we get rid of ALL biases as best we can.

Overcoming the Perception Problem

Reply #85
Do you recall the audio press DBT subjects ever NOT claiming they heard a difference, sighted?  I don't.

I do; that is, I recall individual test subjects in a DBT who thought the likelihood of an audible difference was rather slim, pre-test. But if you want the exact name of the magazine, article, date, page, etc., I don't have that stored in my memory. I suspect it is one of these, however.

Tests conducted on BAS and SMWTMS members would be first on my list to check, but I don't have the time to devote to such a task. IIRC, individual results were broken down into "Believers" and "non-believers", i.e. people who very well may be biased.
----

[Saratoga]
Quote
What was it you didn't understand ?
Answer: How a confidence level expressed as a percentage can be applied to the results of a test where the random outcome basically means "Oh well, the results are inconclusive; nothing was proven by this test."

The video is 7m 50s, not ten minutes (and the first 30 seconds can be safely skipped). It best explains what is meant by the expression "you can't prove a negative", since it is explained by the man who actually coined it. He explains it much better than I can, using a humorous example, but I'd rather not paraphrase the master. He does explain why nothing is proven by such a test.

Overcoming the Perception Problem

Reply #86
Quote
Rule 1 : It is impossible to prove that something doesn't exists. The burden of the proof is on the side of the one pretending that a difference can be heard.
If you believe that a codec changes the sound, it is up to you to prove it, passing the test. Someone pretending that a codec is transparent can't prove anything.


So even though random results don't prove anything, which I agree is correct, you seem to think that statistical analysis may be applied to them?! Huh? That's what I don't get. If no firm conclusion can be drawn one way or the other, how on earth can one describe the probability/certainty of "no evidence/proof found, at least in this instance"? NOTHING was established and nothing was proven, so how can you describe the certainty of this "nothing", this randomness, as a percentage?! For all we know, the randomness is caused by problems with the test design, such as A, B, and/or C, all things which can be immediately ruled out if the results go the other way (where the listener successfully hears a difference most or all of the time), and we have no way of knowing the "percentage of likelihood" of problems such as A, B, or C. [But I do like that control you suggested for B. Bravo!]



There are actually a lot of issues here.

You might want to have a look at http://en.wikipedia.org/wiki/P-value#Misunderstandings .

Then there is considerable disagreement on what a 'probability' really is. Frequentists refuse to attach probabilities to hypotheses (they are either plainly true or plainly false), while from a Bayesian point of view such probabilities are acceptable quantifications of how well informed we are. The different interpretations of probability can amount to a genuine disagreement over concepts, but even where they don't, they can confuse the terminology.

Then there is 'random' vs 'uniform'. Whether you are guessing yes/no at 50/50, or have a hit rate of 95 percent, there is still some randomness left. Statistical analysis can be applied to check whether the claim 'this is not uniform guessing' is at all trustworthy. BTW, 'randomness' (for a suitable interpretation of the word) caused by test design could be both 'noise' and 'bias'.
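To make the frequentist/Bayesian distinction concrete, here is a small sketch. The uniform prior and the function names are my assumptions, purely for illustration: the frequentist tail probability says how often a guesser would score this well, while the Bayesian posterior attaches a probability directly to the hypothesis "this listener's true hit rate exceeds 50%".

```python
import math

def guesser_tail(hits, trials):
    """Frequentist: P(score >= hits) for a pure 50/50 guesser."""
    return sum(math.comb(trials, k) for k in range(hits, trials + 1)) / 2 ** trials

def posterior_above_chance(hits, misses, grid=200000):
    """Bayesian: P(true hit rate p > 0.5 | data), assuming a uniform
    Beta(1,1) prior, so the posterior is Beta(hits+1, misses+1).
    Integrated numerically with the trapezoid rule on (0.5, 1)."""
    a, b = hits + 1, misses + 1
    log_beta = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

    def density(p):
        p = min(max(p, 1e-12), 1.0 - 1e-12)  # avoid log(0) at the endpoints
        return math.exp((a - 1) * math.log(p) + (b - 1) * math.log(1 - p) - log_beta)

    h = 0.5 / grid
    ys = [density(0.5 + i * h) for i in range(grid + 1)]
    return h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

# 15 hits, 5 misses: the two numbers answer different questions,
# even though both point the same way here.
tail = guesser_tail(15, 20)                # ~0.021
posterior = posterior_above_chance(15, 5)  # ~0.987
```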

Overcoming the Perception Problem

Reply #87
Thanks, Porcus. I'll check it out.

Overcoming the Perception Problem

Reply #88
Do you recall the audio press DBT subjects ever NOT claiming they heard a difference, sighted?  I don't.


Right. A potentially disturbing fraction of all DBT articles published in the mainstream and *underground* audio press were done within 50 miles of my house by people I know, including me.

We all heard differences in sighted evaluations, and often did sighted evaluations before we did the DBTs. 

Sighted evaluations are a good cheerleading technique to get people engaged in the DBTs.

Quote
So, against the body of perceptual psychology data, tallying the ways humans are alert for 'difference' whether it exists or not, you posit a population who consciously assert things sound different, yet unconsciously think they sound the *same*.


I guess the question is, "Knowing what you knew about the tests, did you kinda like know in your heart that no difference would be heard?"  For me the answer would probably be yes, especially after however many years of testing. As for the actual test subjects, they were all over the map: some believing that a difference would be heard, some skeptical, and some believing for sure that none would be heard.

If we fast-forward to the tests that were on my now-departed www.pcabx.com web site, those tests included training files, so that listeners were led in logical steps from posamatively hearing relevant differences, based on files that were augmented so the differences would clearly be heard, to raw files that encapsulated the audible difference in technically correct ways.

A key point is that if you do the training files right, the listener doesn't know when the audible difference will become inaudible. In the beginning he can hear it "clear as a bell" and someplace along the way, in a set of files that he knows not which, his ability to hear the difference just vanishes. I would say that this is the best guarantee that his unconscious state has been made as irrelevant as possible, and quite irrelevant at that.

All this listener training didn't really change the outcomes. So much of what the high end has staked their credibility on is so far away from the now well-known thresholds of hearing that making logical improvements in the listening test process, even fairly heroic efforts like the extensive listener training described above, just can't help their cause.

Overcoming the Perception Problem

Reply #89
Do you recall the audio press DBT subjects ever NOT claiming they heard a difference, sighted?  I don't.

I do, that is, individual test subjects in a DBT who thought the likelihood that there would be an audible difference was rather slim, pre-test, but if you want the exact name of the magzine article, date, page etc I don't have that stored in my memory. I suspect it is one of these, however.

OK, update. My memory was correct. It was one of those, specifically:

Masters, I. G. and Clark, D. L., "Do All Amplifiers Sound the Same?", Stereo Review, pp. 78-84 (January 1987) [I read it at the time it was published, BTW.]

From it:

"The kind of listeners was important as well, and so the sample was made up both of people who professed to be able to hear differences between amplifiers, the 'Believers,' and of those who doubted their existence, the Skeptics'. "

"NOTES...

2. Believers believe that amplifiers sound significantly different, Skeptics are skeptical of that claim."

Interestingly, some of the Skeptics in the pre-test, open (sighted) warm-up sessions (in a room, I presume, filled with Believers insistent that they heard differences) apparently "jumped ship" and thought they too could hear differences (hmm, "peer-pressure placebo effect", anyone?). Not all of them, however. So we have good documentation of at least one study where no controls were used to preclude any bias, conscious or unconscious, from test subjects thinking they shouldn't expect to hear a difference and who therefore might not have tried as hard (just one possible way such a bias might influence a test; there could well be others), and where we know that 10 of the 25 test subjects stated up front that they were "Skeptics", predisposed to thinking (I call that "biased") that amplifiers don't sound significantly different.

It is no longer on the web; however, there was a snapshot stored on the web archive "Wayback Machine" site I found, and here's a link to it:
http://web.archive.org/web/20060323085504/...o/Amp_Sound.pdf

----

The author agrees with Randi, "you can't prove a negative":

"After completion of the blind (cable-swap) or double-blind (ABX) testing, the listeners were given their scores...High scores can prove differences were audible, but random scores can never prove that all amplifiers sound alike." [emphasis mine]

The author assigned a "Probability Results Due to Chance [and not audible differences]" score to every instance where the percentage of correct answers was greater than 50%; however, none is given when it is 50% or lower. This makes perfect sense to me from my way of thinking. You can't assign a certainty of "95%", or whatever, to a conclusion of "I didn't conclude anything"; but my perception is that I am in the minority with that view here in this thread, and as I said, I don't have the time to devote to this, especially since it seems to be a battle I'd have to fight alone.
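For concreteness, the asymmetry the article reflects can be checked directly: the "probability due to chance" is the one-sided binomial tail, and for any score at or below 50% that tail is at least roughly one half, so there is no informative percentage to print. (The function name and trial counts below are mine, sketched for illustration; I don't have the article's actual table.)

```python
import math

def prob_due_to_chance(hits, trials):
    """One-sided tail: how often a pure guesser scores `hits` or better."""
    return sum(math.comb(trials, k) for k in range(hits, trials + 1)) / 2 ** trials

# 16 of 20 correct: a guesser does this well under 1% of the time,
# so a small "probability due to chance" can meaningfully be printed.
strong = prob_due_to_chance(16, 20)        # ~0.006

# 10 of 20 (exactly 50%): a guesser does at least this well ~59% of
# the time -- no figure worth printing, which matches the article
# listing values only for scores above 50%.
chance_level = prob_due_to_chance(10, 20)  # ~0.588
```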

Bye all!

P.S.

[Arny]
Quote
In the beginning he can hear it "clear as a bell" and someplace along the way, in a set of files that he knows not which, his ability to hear the difference just vanishes. I would say that this is the best guarantee that his unconscious state has been made as irrelevant as possible, and quite irrelevant at that
[emphasis mine]

YES! Finally. That's the way to do it, IMHO.

Quote
So much of what the high end has staked their credibility on is so far away from the now well-known thresholds of hearing that making logical improvements in the listening test process, even fairly heroic efforts like the extensive listener training described above, just can't help their cause.

Agreed. And just to set the record straight, I'm not attempting to help anyone's "cause" except science's. [And thanks for inventing ABX, by the way. That was a huge contribution, unlike this relatively trivial matter!]





Overcoming the Perception Problem

Reply #90
Sorry - been away, but lots of noise (most of it, fascinatingly, irrelevant) has been generated in the interim. To respond to a few points . . .

Are you claiming people are more prone to hearing things that aren't there than seeing things that aren't there - in respect of the brain "filling in gaps" to create what turns out to be an incorrect model of the real world? I'm not sure that's true.


Not what I said, or McGurk shows. Please re-read.

...but they'd already hacked cats' heads about to find out what signals went in and out of the auditory nerve long before it was possible to do this in a humane way. It's relevant because they have cochleae similar to ours, and it was found that, like ours, the losses (what you can't hear) derive from the air-to-neural transduction process. So we've got cut-up cats, predictions from physiology, and blind tests on humans all delivering the same "what difference is just audible" data - but you don't trust the blind tests? Yet you'll trust the brain response. That's strange, since no one is doubting that placebo is a real brain response - it's just not a response to what you hear!

And if we measure some brain response when people are not aware of what they're listening to (i.e. some response to A that is absent for B), either this will be associated with a conscious audible difference, or not. If not, who cares? If so, then having reported hearing it without knowing what they were listening to, they've just passed a blind test! What does the brain scan add? Think this through. Draw a flowchart of the possibilities if it helps.
Cheers,
David.

We don't report a fraction of our brain response. Humans are not cats. And placebo is not relevant to either.

Overcoming the Perception Problem

Reply #91
It's best to be careful drawing conclusions from measured 'brain response' (neural imaging). One of the 2012 Ig Nobel prize winners illustrates the point:

Quote
NEUROSCIENCE PRIZE: Craig Bennett, Abigail Baird, Michael Miller, and George Wolford [USA], for demonstrating that brain researchers, by using complicated instruments and simple statistics, can see meaningful brain activity anywhere — even in a dead salmon.

REFERENCE: "Neural Correlates of Interspecies Perspective Taking in the Post-Mortem Atlantic Salmon: An Argument For Multiple Comparisons Correction," Craig M. Bennett, Abigail A. Baird, Michael B. Miller, and George L. Wolford, Journal of Serendipitous and Unexpected Results, vol. 1, no. 1, 2010, pp. 1-5.


http://www.improbable.com/ig/winners/


And why, pray tell, would you claim 'placebo is irrelevant' to brain response? The placebo effect (and expectation bias) ARE brain responses. The point is that 'brain responses' are not perfect correlates of objective reality. Just because the brain registers a 'difference' doesn't mean one exists in fact.

Overcoming the Perception Problem

Reply #92
@item:
Perhaps you could share with us a little about who you are so that we can put your point of view into proper perspective.

What do you mean by 'proper perspective'? Am I looking at an ad hominem warm-up or a chat-up line?

Let's just focus about the first part of my question and not worry about the second part. Please share with us a little about who you are and who you might represent.


OK: I'll stop worrying about being chatted up, and concentrate on worrying about the incoming ad hominem. Although, to keep my options open - in case you're just cheekily playing hard to get - I'm a 43-year old Gemini vegetarian: mainly single, with a good sense of humour and a two-bedroom flat in a nice part of South London. I'm not representing anyone: I like music, I think how we listen is interesting. I'm not presenting myself as an authority. I think it's important not to ignore facts, and to be impartial when reasoning from inferences.

Disorientated and deprived of cues?  You appear not to realize that double-blind testing can (and often does!) provide for the listener to audition the subjects/samples that are known as they are known.

I think that I don't understand this is to do with the way you've said what you're saying, but if it's because it's cleverer than it can be understood by me to understand, then I'm sorry.

To elaborate on the part that you quoted, during testing I pay closer attention to details explicitly to listen for differences.  If I am listening casually, I do just that, which is to say relax and enjoy.  I have no doubt that my casual listening is done so with less acuity.

Brraap! Entirely different mode of listening - which is part of the point I'm making. Inconveniently, perception in general happens largely subconsciously - consciously directed modes are incredibly slow and weak by comparison. Co-ordination, peripheral vision, muscle memory: all harness the fastest, most primal parts of the brain. So, no: like sex, relaxed is better. That was also a conclusion of the Beau Lotto experiment quoted earlier, among, of course, many others.

To put it another way, when was the last time someone accused you of paying close attention because you forgot something you were told and were expected to remember?

I can't remember.
But what a beautiful diversionary analogy!

If part of my enjoyment has to do with knowing that my equipment or sample is XYZ then that is my business.  If I think I am actually perceiving something differently as a result I respect the forum and refrain from discussing it unless I can comply with its rules.

That's good to hear.

Perhaps, perhaps not; I really don't care either way. . . . .  MUSHRA is well accepted as a double blind test that provides for subjective grading. I don't think anyone with enough understanding is denying its power to foster truly tangible differences in the mind.  We simply aren't interested in reading about them here.  There are plenty of other places where you can indulge yourself.

It's evident that your interest and expertise lie in statistical analysis of experimental data. That's very exciting and important and everything, but we're discussing the circumstances under which that data is generated. Given that these experiments fundamentally address the nature of perception - i.e. they are psychological in nature - it's curious to care so little about getting the experiment right. If the experiment is wrong, so is the data. Such loving analysis would then be so much turd-polishing.

Overcoming the Perception Problem

Reply #93
The sole, specific point I'm making is that DBT is rarely used in perception testing for obvious reasons outlined above, and attempting to smear its credibility from the physiological domain is intellectually dishonest.


And as I told you before, double blind protocols are common in pain perception studies, where self-reports of perception are the outputs. You going to address all that, or punt again?


And you posit an equation between pain and hearing?!

Quote
And that the abundance of negative results indicates coarse granularity in the test method as much as it supports any particular paradigm.


Quote
It 'indicates' that to you because you believe it should, not because it's necessarily true. And as I said before, this amounts to nothing more than an argument from incredulity, if not ignorance.


Certainly not: the point explicitly made is that the results are open to two interpretations. You are insisting on one interpretation so dogmatically that the other, equally valid one, is being dismissed. Consideration of alternatives is what I like to call argument by method.

And now, Mr Joined-in-August,  I trust you won't mind if I sit back and await your predictable departure (back?) to more woozy audio forums, where you'll claim you were driven out of HA for 'unorthodoxy' or 'thinking outside the box'.

Let's not snipe!

Overcoming the Perception Problem

Reply #94
The sole, specific point I'm making is that DBT is rarely used in perception testing for obvious reasons outlined above, and attempting to smear its credibility from the physiological domain is intellectually dishonest.


And as I told you before, double blind protocols are common in pain perception studies, where self-reports of perception are the outputs. You going to address all that, or punt again?


And you posit an equation between pain and hearing?!


I guess krabapple will claim an inclusion between 'pain perception studies' and 'perception testing'. Guess which way.

Overcoming the Perception Problem

Reply #95
I could be missing the point, or dozens of them, but it seems to me the hypothesis has been presented that the conditions necessary for proper DBT themselves alter perception in such a way as to strongly bias against what people are physically capable of sensing about the external world (i.e. that part not inside their head). The evidence for that hypothesis is that people regularly report perceptions that they are unable to repeat under DBT conditions.

We do know, because it has been demonstrated many times, that perception of signals which can be successfully and consistently identified by test subjects can be strongly overridden by an expectation introduced into the trials. Now subjects often report signals as being what they expect to hear, rather than what the signals really are, and even report the expected signal when they are led to believe they will receive it but have been given nothing.

These particular findings seem to present a case for being skeptical of claims made from sighted tests. The proposition here is that the expectations introduced in the tests are equivalent to the expectations introduced by really knowing which signal is being received. This proposition is at least somewhat supported by the fact that the expectation can be introduced by letting the subjects see what they believe are the sources of the signals (e.g. the cables, the amplifiers, the wine bottles) when the signals are actually from something else.

As far as I can see, the hypothesis that perception is really so much better and more pure outside of these restrictive test conditions is useless to science unless, and until, someone can think up a (repeatable) means to positively test it. Maybe the gods open deeper levels of perception to those filled with wine, love, and sympathy, and stop up the ears of those playing with that nasty science idea, but unless the gods decide to openly reveal themselves, we are unlikely to ever know. We can posit possibilities until the sun burns out but will never get any closer to knowing.
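On the "repeatable means to positively test it" point: the standard way to score a forced-choice listening trial like ABX is a plain binomial test against chance. A minimal sketch (the trial counts below are illustrative, not from any test in this thread):

```python
from math import comb

def abx_p_value(correct, trials):
    """One-sided p-value: the chance of getting at least `correct`
    answers right out of `trials` if the listener is purely guessing
    (probability 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 12 of 16 correct happens by pure luck less than 4% of the time.
print(round(abx_p_value(12, 16), 3))  # → 0.038
```

Any difference a sighted test claims to reveal should, on this view, survive a score like that; a run hovering near 50% correct is what "null result" means upthread.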

Overcoming the Perception Problem

Reply #96
I could be missing the point, or dozens of them, but it seems to me the hypothesis has been presented that the conditions necessary for proper DBT themselves alter perception in such a way as to strongly bias against what people are physically capable of sensing about the external world (i.e. that part not inside their head). The evidence for that hypothesis is that people regularly report perceptions that they are unable to repeat under DBT conditions.

We do know, because it has been demonstrated many times, that perception of signals which can be successfully and consistently identified by test subjects can be strongly overridden by an expectation introduced into the trials. Now subjects often report signals as being what they expect to hear, rather than what the signals really are, and even report the expected signal when they are led to believe they will receive it but have been given nothing.

These particular findings seem to present a case for being skeptical of claims made from sighted tests.


You trying for a Master's degree in understatement of what should be obvious to anybody with real-world experience? ;-)

People who ignore expectation bias, along with the other systematic biases that afflict most amateur listening tests, are just showing what little they know about the real world.

The three biggies are matching levels, listening to exactly the same musical selections, and managing expectation bias. Most audiophile listening evaluations ignore all three.
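Of the three, level matching is the most mechanical to get right. A rough sketch of the arithmetic (pure Python, RMS-based; real comparisons would match with a voltmeter or meter to within a fraction of a dB):

```python
import math

def rms(samples):
    """Root-mean-square level of a sample sequence."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def gain_db_to_match(reference, other):
    """Gain in dB to apply to `other` so its RMS level matches `reference`."""
    return 20 * math.log10(rms(reference) / rms(other))

# A copy at half the amplitude needs about +6 dB of gain to match.
tone = [math.sin(2 * math.pi * 440 * n / 48000) for n in range(48000)]
quieter = [0.5 * s for s in tone]
print(round(gain_db_to_match(tone, quieter), 2))  # → 6.02
```

Level mismatches well under 1 dB can read as a difference in "quality" rather than loudness, which is exactly the confound that uncontrolled comparisons leave in.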

Given the endemic nature of this sort of ignorant and sometimes willfully irrational behavior, most of these discussions about the alleged failings of well-controlled subjective testing can be dismissed out of hand.

IME trying to teach audiophiles how to do reasonable subjective tests is like trying to teach a pig to fly: the usual result is that, at minimum, you upset the emotional state of the pig. ;-)

Overcoming the Perception Problem

Reply #97
My point was not that many reported tests involve confounding variables, but that a hypothesis is useful only to speculative philosophers and metaphysicians unless there is some way to definitively differentiate its results from other possibilities. In physics that may be a matter of making small, cumulative steps that refine results. Maybe the theory will eventually be broken at the sixteenth decimal place, and something totally different revealed, but exact testing is needed to get to that point. In the matter under consideration, I don't recall any proposed means of removing belief and expectation bias from sighted tests, no matter how well all other variables are controlled.


Overcoming the Perception Problem

Reply #98
It's best to be careful drawing conclusions from measured 'brain response' (neural imaging). One of the 2012 Ig Nobel prize winners illustrates the point:

Quote
NEUROSCIENCE PRIZE: Craig Bennett, Abigail Baird, Michael Miller, and George Wolford [USA], for demonstrating that brain researchers, by using complicated instruments and simple statistics, can see meaningful brain activity anywhere — even in a dead salmon.

REFERENCE: "Neural Correlates of Interspecies Perspective Taking in the Post-Mortem Atlantic Salmon: An Argument For Multiple Comparisons Correction," Craig M. Bennett, Abigail A. Baird, Michael B. Miller, and George L. Wolford, Journal of Serendipitous and Unexpected Results, vol. 1, no. 1, 2010, pp. 1-5.


http://www.improbable.com/ig/winners/
Thanks - this is great stuff. They clearly had fun writing the "method" part of the write-up...
http://www.jsur.org/ar/jsur_ben102010.pdf

Cheers,
David.
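The salmon paper's point is just uncorrected multiple comparisons: test enough voxels and pure noise will "light up" somewhere. A toy simulation of a single uncorrected test (assumed parameters: 16 Gaussian noise samples per test, two-sided t critical value 2.131 for 15 degrees of freedom at alpha = 0.05):

```python
import random

random.seed(1)

def noise_t_stat(n=16):
    """One-sample t statistic (against a zero mean) for pure Gaussian noise."""
    xs = [random.gauss(0, 1) for _ in range(n)]
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / (n - 1)
    return mean / (var / n) ** 0.5

tests = 10000
false_hits = sum(abs(noise_t_stat()) > 2.131 for _ in range(tests))
print(false_hits / tests)  # roughly 0.05: noise alone is 'significant' 5% of the time
```

Run tens of thousands of voxels that way, as an fMRI scan does, and even a dead fish will show "activity" unless you correct for the number of comparisons, which is the paper's entire argument.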

Overcoming the Perception Problem

Reply #99
Certainly not: the point explicitly made is that the results are open to two interpretations. You are insisting on one interpretation so dogmatically that the other, equally valid one, is being dismissed.
I'm not sure they can be equally valid. One has been proven to be true sometimes (people swearing they hear a difference when nothing has been changed), the other cannot be tested.

Does an untestable hypothesis even have a place in science?

Cheers,
David.