To rephrase then: what, in any trial, is blind testing designed to filter out?
What doesn't need attacking, because it's self-evident, is that psych evaluations often depend crucially on the subject not being aware of the purpose of the test. Why is that?
I think you may have misunderstood the Lotto Labs experiments I referred to: maybe check them out. They aren't about manipulating the tester's state: they explore how (surprisingly) easily perceptual states are changed by environmental conditions: changing the subject's mind changes the subject's mind . . . .
The degree to which perception can be differentiated from subconscious neural activity is a whole different (and tangential) question.
Quote from: item on 08 October, 2012, 02:52:29 PM
To rephrase then: what, in any trial, is blind testing designed to filter out?

Bias.
Quote from: item on 08 October, 2012, 02:52:29 PM
I think you may have misunderstood the Lotto Labs experiments I referred to: maybe check them out. They aren't about manipulating the tester's state: they explore how (surprisingly) easily perceptual states are changed by environmental conditions: changing the subject's mind changes the subject's mind . . . .

Not in the slightest. They aren't about manipulating the tester's state, but they are dependent upon manipulation of the tester's state. Regardless, you dodged the question. Call it "tester's state", call it "environmental conditions", call it what you will. Where is your argument, much less your evidence, that the manner of ABX testing practiced creates a systematic bias in "environmental conditions"? The Lotto experiments were dependent on such a systematic influence. If you can't demonstrate one, They Are Not Relevant.
Quote from: item on 08 October, 2012, 02:52:29 PM
The degree to which perception can be differentiated from subconscious neural activity is a whole different (and tangential) question.

Agreed, it is totally off topic and undefended. But it is one you brought up.
In psychology it's axiomatic that for many experiments the subject must be unaware of the nature of the test (see Milgram).
Partly, yes: more specifically, in the clinical domain 'blindness' separates the psychological from the physiological. From the perspective of a drugs trial, psychological factors are generally extraneous and need to be excised from the process. From the perspective of an auditory trial, 'psychological factors' are the subject of the test.
Although it's tempting to reach for the conclusion that almost everything is identical, there is an equally valid interpretation of results generated by DBT tests which invariably demonstrate a diminution of differences (oranges become more like lemons, Stradivari become more like toys, speakers become more like speakers) that seem apparent when sighted: namely that the method itself results in a diminution of differences. The more 'blunt tool' results emerge from blind perception tests, the less credible they look.
a failed DBT is not used as universal proof that two things must sound the same
Part of Beau Lotto's 'Public Perception' project was documented by the BBC Horizon programme in 2011:
http://www.bbc.co.uk/programmes/b013c8tb
http://www.lottolab.org/programmes-article...nperception.asp
He expressed surprise that such a subtle a priori manipulation of the subject's self-esteem (of all things!) would depress visual acuity to the extent it did. [...]
Although it's tempting to reach for the conclusion that almost everything is identical, there is an equally valid interpretation of results generated by DBT tests which invariably demonstrate a diminution of differences (oranges become more like lemons, Stradivari become more like toys, speakers become more like speakers) that seem apparent when sighted: namely that the method itself results in a diminution of differences.
This is an argument from incredulity. You just can't believe that your/our sighted perceptions could be so *wrong*.
Explain, then, how people DO get it so wrong, in the case where two bottles of the same wine 'taste' vastly different, depending on how they are labelled. Or in the case where the listener 'hears' a vast difference between unit A and unit B, when in fact unit B has never even been put into the circuit.
@item: Perhaps you could share with us a little about who you are so that we can put your point of view into proper perspective.
This site has TOS #8 in place to keep the signal-to-noise ratio high. As has been aptly pointed out, sighted tests provide absolutely no guarantee of reliability, whereas positive double-blind tests do. The only concern on the table that could possibly have any validity (I am being generous) is that double-blind testing might reduce the sensitivity of the person taking the test. This is perfectly fine, since a failed DBT is not used as universal proof that two things must sound the same, which is generally where those arguing on behalf of placebophiles get tripped up.
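The point that a failed DBT is weak evidence of sameness can be made concrete with a statistical-power sketch. The numbers below are illustrative assumptions, not from any post in this thread: a common 16-trial ABX protocol with a 12-correct passing threshold, and a hypothetical listener who genuinely hears a subtle difference 60% of the time.

```python
from math import comb

def pass_probability(p_correct: float, trials: int = 16, threshold: int = 12) -> float:
    """Probability of scoring at least `threshold` correct out of `trials`
    when each trial is answered correctly with probability p_correct."""
    return sum(
        comb(trials, k) * p_correct**k * (1 - p_correct) ** (trials - k)
        for k in range(threshold, trials + 1)
    )

# A listener who truly hears a subtle difference 60% of the time still
# fails this 16-trial test about 83% of the time:
print(round(pass_probability(0.6), 3))  # 0.167
```

So a single negative result is entirely compatible with a real (but small) audible difference; this is exactly why the forum treats failed ABX results as "you didn't demonstrate it here", not as universal proof of identity.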
FWIW, as a professional tester I can tell you that I actually pay closer attention to detail when I am consciously involved in a test, despite DBT skeptics and snake oil salesmen telling me that I can't or don't.
You can of course link to reliable scientific meta-studies documenting that 'invariably', and that the differences that disappear are for real? Not that the effect is proven every now and then in certain setups, but that it 'invariably' is so?

And still this is no excuse for uncontrolled tests. Under the (hardly controversial) hypothesis that test setups can manipulate the test subjects in a manner crucially affecting the results, one takes measures to test ceteris paribus; that is, at least until one can establish that a certain manipulation introduced to the test setup is more likely to get you true answers (not only cocksure answers) and can be administered reliably. Good if we can find one (and in certain setups one can isolate the effect of this and correct for it, which is unlikely to happen if a marketing guy is about to convince you).
Look, the scientific conventions are themselves biased towards the null hypothesis: one has accepted standards which by themselves fail to accept lots of true answers, because one does not want to accept false answers. That means that a lot of facts will have the status of unconfirmed hypotheses for a long time. If the only test result that could possibly indicate a difference is prone to indicate a false difference stemming from mumbo-jumbo marketing, then one should not accept it as evidence. Too bad if the difference is for real, but that's the cost of not being gullible. (Too bad if the millions I was offered from Nigeria yesterday were for real too, but heck ...)
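The conservatism described above falls out of the arithmetic of a standard one-sided binomial test. A minimal sketch (assuming the common 16-trial ABX format; the 5% significance level is the conventional choice, not anything specific to this thread):

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided p-value: probability of scoring at least `correct`
    out of `trials` by pure guessing (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2**trials

# 12/16 correct is just significant at the 5% level...
print(round(abx_p_value(12, 16), 4))  # 0.0384
# ...while 11/16 is not, even though the listener beat chance:
print(round(abx_p_value(11, 16), 4))  # 0.1051
```

An 11/16 score is rejected not because it is meaningless but because the convention deliberately prefers missing a true difference to accepting a false one, which is the asymmetry the post describes.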
The only people I know of to shun the DBT method of testing audio equipment are those that live off them in some way (editors/writers of hi-fi magazines, hi-fi salesmen) and people who themselves believe in their superiority over the common plebs in terms of hearing. The first kind knows that utilizing DBT in their reviews would result in a sales plummet, and the loss of income through paid reviews and advertising. The second are often technologically disabled, and are more prone to explain things to themselves (and, more dangerously, to others) through pure magic and rituals than to actually learn what is going on, because the bubble they live in would burst.

Now, you say DBT testing would somehow influence the listener and he wouldn't hear the difference because of reasons. Bear in mind that these people often claim a "sky-earth" difference between two DACs, for example, so I hardly believe that testing that difference over the course of time (a month, a year) would involve any stress, and that they wouldn't hear even the tiniest difference, if it really exists.

That argument is so invalid - if you are so easily affected by switching buttons from A to B, X to Y, then I am sure that every listening to the same song is a new experience, and it sounds different altogether. That difference either is there or is not; it does not disappear only when we are casually listening to music. Humans can't telepathically affect the bitstream in DACs or optical cables yet. It doesn't care what you are feeling; it just streams and decodes, over and over again, every time you play the song.

I really don't care how ABX testing in medical research works - I'm not into medicine at all, and for Hydrogenaudio's sake, it shouldn't matter. The only thing that matters is the audio ABX test, which lets individuals see if they really can hear a difference between two codecs, or two DACs, if they have the equipment to set this up.
Individuals set up the testing environment as they prefer (I like drinking cocoa, for example), and the test is straightforward in its results - either you can hear the difference, or you can't. If you can't, that doesn't mean there is none; it just means that you can't hear it. Someone else might.

So, why do you try so hard to convince us that ABX isn't a valid method?
The sole, specific point I'm making is that DBT is rarely used in perception testing for obvious reasons outlined above, and attempting to smear its credibility from the physiological domain is intellectually dishonest. And that the abundance of negative results indicates coarse granularity in the test method as much as it supports any particular paradigm.
What I can perhaps maybe possibly gather from your posts is that blind perception experiments are crude. What I don't gather is how the negative results have any impact on the positive results. Ideally you would stop rambling and be more concise, but at the very least, you should explain explicitly what it is that is faulty. You begin by questioning "the credibility of ABX from the physical domain to perception testing." If it is this broad challenge, then consider the fact that Signal Detection Theory and discrimination experiments, ABX being one of them, have widespread use in the speech perception literature. Are you suggesting that subjects performing well in spite of the lack of cues leads to flawed conclusions?

But the issue closer at hand seems to be the much narrower one, of the retaliatory use of negative ABX results as evidence that things sound the same. And yet you keep responding to comments about the scientific (in)validity of this with comments such as "The problem is that negative results equally indict the efficacy of the method" without explaining how this is so.
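For readers unfamiliar with the Signal Detection Theory mentioned above: its standard move is to separate a subject's genuine sensitivity (d′) from their response bias, rather than taking raw "I hear a difference" reports at face value. A minimal sketch with made-up illustrative rates (not data from any study cited in this thread):

```python
from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Sensitivity index: d' = z(hit rate) - z(false-alarm rate),
    where z is the inverse of the standard normal CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# A listener who reports "different" 69% of the time when a difference
# is present, but also 31% of the time when it is not, has d' ~ 0.99:
print(round(d_prime(0.69, 0.31), 2))  # 0.99

# A listener who says "different" equally often in both cases has no
# sensitivity at all, however confident they sound:
print(d_prime(0.5, 0.5))  # 0.0
```

This is precisely why sighted impressions are uninformative: without trials where no difference is present, the false-alarm rate is unknown and d′ cannot be estimated.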
The only reason there are so many "failed" ABX tests is simple - people tend to believe in many magickal beings living in their amps and speakers and wires and headphones. But when put under the magnifying glass, those little creatures tend to disappear. It's the human fault of believing in ghosts rather than the fault of the testing method. And that's it.
Positive DBT is inherently cast-iron. The problem is that negative results equally indict the efficacy of the method, and that DBT perception tests are anathema: they generate results with poor resolution: they conform suspiciously well to the 'bad test' model: ie, they generate positives for gross phenomena but fail to recognise fine-grained distinctions. Wrong sieve size is a plausible diagnosis. Given that the test is misappropriated from a different domain and therefore - by definition - crudely tampers with its objective, this isn't surprising.
Quote from: item on 09 October, 2012, 06:42:37 AM
Positive DBT is inherently cast-iron. The problem is that negative results equally indict the efficacy of the method, and that DBT perception tests are anathema: they generate results with poor resolution: they conform suspiciously well to the 'bad test' model: ie, they generate positives for gross phenomena but fail to recognise fine-grained distinctions. Wrong sieve size is a plausible diagnosis. Given that the test is misappropriated from a different domain and therefore - by definition - crudely tampers with its objective, this isn't surprising.

With respect, this is demonstrably wrong. The core tests that probe the very limits of human hearing use blind testing, and deliver results that match predictions from the known physiology of the ear. To get these results takes careful training - people need to learn what to listen for before they can hear as well as the physiology would predict.

You've written many words, but like most blind-testing bashing, it comes down to this: "when people are under test, they listen differently so we can't know what they really hear. If they don't know what they are listening to, they are even more stressed." What if they do know what they are listening to, like in most hi-fi magazine reviews? The people are still "under test", yet seem to hear just fine? Given that knowing what you are listening to is both the only differentiating variable, and a known feature that will give completely unreliable results, you are either wrong (that's my guess), or are correct and have just kicked audio into a "never possible to know" philosophical world.

Cheers,
David.
DBT is designed to separate subjective and objective responses and provide hard-to-falsify positive outcomes.
Its suitability for physiological testing is not transferable to psychological testing, yet negative outcomes are aggressively touted as meaningful.
This forum is a good repository of DBT acuity depression 'invariability' - or, by another interpretation of the same results - The Truth.