The belief that the sound was coming from an impressively crafted sound system was able to significantly alter the subject's perception.
Is that not pretty much a summary of the placebo effect?
Not necessarily. Suppose I want to establish a “sufficiently good” (for whatever purpose) end-user format. Then I am not satisfied with your score on your music, unless I am only targeting you as a customer. Even if your music does not have artifacts nasty enough for you to detect (or find annoying), it might be different with other ears and other signals. (Of course, you then need to use the appropriate method (test / design of experiment) to check whether the accuracy is better than random, but that is a practical obstacle.)

If 5 percent of listeners hear differences on 10 percent of their music collection, is the format then “transparent”? I think not. It may be good enough for the purpose, by all means, but that does not mean there are no audible differences.
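The "better than random" check mentioned above can be made concrete with an exact binomial test. A minimal sketch, assuming a 16-trial ABX session (the trial counts here are invented for illustration, not taken from the thread):

```python
# Exact one-sided binomial test: how likely is a score of at least
# `correct` out of `trials` if the listener were purely guessing (p = 0.5)?
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """P(at least `correct` hits in `trials` coin-flip trials)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# e.g. 12 correct out of 16 trials
p = abx_p_value(12, 16)
print(round(p, 4))  # 0.0384 -> below the usual 0.05 threshold
```

A result like this is what lets one say the accuracy is better than random at the chosen significance level.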
Yes. Does this change anything? Placebos can be shown to have a significant causal effect.
Buyer's remorse is starting to hit hard. Ahh, relief! The sighted test says he made the right choices after all.
I am conducting the ABX test for MYSELF.
Quote from: krabapple on 12 October, 2012, 01:59:16 AM
Except, DBT doesn't do that.
Why do sighted tests regularly lead to different results, then? Just calling it "bias that should be eliminated" doesn't change the fact.
Except, DBT doesn't do that.
Imagine the following test setup: a test subject is presented music supposedly sourced from either a Sansa Clip or his favorite Burmester rack. You present an expensive-looking switch to him that is basically a dummy: it only inserts a small pause and connects to the Clip at all times. Now imagine you get a statistically significant result that the subject rates the sound quality consistently higher when he believes it to be coming from his Burmester rack / not coming from the Sansa Clip.

Now do a second test, this time double blind with both sources actually connected. Imagine the subject now fails to identify a difference.

What can we draw from this, especially when the subject was an honest type, sincerely motivated to rate the quality exactly as he perceived it in the first setup, without trying to prove or defy anything?
First, per HA habit, the subject should stop claiming that his Burmester setup sounds better than a Sansa Clip, as proven by the DBT. HA usually stops here.

But maybe one shouldn't. The belief that the sound was coming from an impressively crafted sound system was able to significantly alter the subject's perception. In addition, the subject's usual mode of listening is reflected much better in the first setup than in the second (the DBT).
Quote from: hlloyge on 12 October, 2012, 05:14:41 PM
I am conducting the ABX test for MYSELF.
Even so, a failed test only fails to demonstrate that an individual can distinguish a difference during that instance. Training and/or rest, for example, may affect the outcome of a future test.
Yes, but I am conducting the test at that one point in time.
I think Googlebot is making a valid philosophical point. If the shiny thing sounds best to you (because of placebo), and you want the thing that sounds best to you, you can be very grateful to the shiny thing and placebo for delivering it.

Though you'd better not probe too deeply: while the only guaranteed way to completely remove placebo is to take away the knowledge of what you're listening to, you can certainly reduce placebo (or change the direction in which it operates) by introducing doubt as to whether something really does sound better. This latter effect is probably the cause of the audiophile's never-ending upgrade path.

The downsides to this are many, e.g.:
1) Your entire investment can be rendered worthless to you by anything that causes placebo to break down; that's a pretty risky investment.
2) If you had blind tested before purchase, you would probably have chosen the cheapest well-made thing that sounded as good as everything else, saving you money and giving you your own unshakable placebo effect in enjoying that equipment. You see, ABX lovers can enjoy their own placebo experience after having chosen the equipment: they've proven scientifically that it's as good as it needs to be, and then placebo can add to the subjective perception that it's as good as they could possibly perceive it to be.
3) Some nice-looking equipment sounds objectively awful and doesn't work very well. While you might be able to convince yourself that it sounds wonderful, you'll still have the pain of unreliability, quirky/difficult operation, and the anxiety of damaging or wearing your music collection away every time you play an LP (if sighted testing led you to choose vinyl over CD).

However, sighted listening equipment purchasing is great for the economy (you just keep spending money), and it avoids time-consuming things (e.g. proper listening tests) and difficult questions such as "how well can you hear anyway?"

Cheers,
David.
Quote from: skamp on 11 October, 2012, 09:28:53 AM
If ABXing negatively alters one's ability to hear differences, it's only a problem if you're using negative results to prove that there is no difference, which is a fallacy in any case: while a positive ABX result shows with a high degree of probability that there IS an audible difference, a negative result never proves anything.
Basically, what a 'negative' ABX result means is that the hypothesis 'there is an audible difference' was not supported by the data. It does not show that no difference exists; the test simply failed to reach the chosen significance threshold (typically p = 0.05, i.e. a 1 in 20 chance of a guessing listener passing by luck).
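Statistical power illustrates why a negative result proves so little. A hedged sketch, assuming a 16-trial session and a listener with a 70% true hit rate (both numbers are assumptions for illustration, not from the thread):

```python
# Even a listener who genuinely hears the difference 70% of the time
# fails a 16-trial ABX test more often than not.
from math import comb

def p_at_least(correct: int, trials: int, p: float) -> float:
    """Probability of at least `correct` hits out of `trials`,
    if each trial is answered correctly with probability p."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(correct, trials + 1))

# Smallest score that beats chance at the 0.05 level in 16 trials:
threshold = next(c for c in range(17) if p_at_least(c, 16, 0.5) <= 0.05)
print(threshold)  # 12

# Chance that the 70%-accurate listener actually reaches that score:
print(round(p_at_least(threshold, 16, 0.7), 2))  # 0.45
```

So this hypothetical listener "fails" the test about 55% of the time, which is exactly why a negative result cannot be read as proof of inaudibility.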
If ABXing negatively alters one's ability to hear differences, it's only a problem if you're using negative results to prove that there is no difference, which is a fallacy in any case: while a positive ABX result shows with a high degree of probability that there IS an audible difference, a negative result never proves anything.
B. The listener was mischievous and *intentionally* gave random results.
I don't see how you can say that statistics only work for 'positive' ABX results. Or maybe I'm just not understanding what you are getting at. I didn't disagree with what skamp wrote...at least, not intentionally!
What good are statistics if the listener acted in bad faith? If he decided to answer randomly, the result is meaningless. Whereas he could hardly act in bad faith in the other direction.
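That asymmetry can be quantified. A minimal sketch, assuming a 16-trial session with a 12-correct pass mark (both numbers are illustrative assumptions): a bad-faith listener answering at random forces a "negative" result almost every time, but can fake a "positive" only by luck.

```python
# Chance a purely random responder produces each outcome in a
# 16-trial ABX session with a 12-correct pass mark (both assumed).
from math import comb

trials, pass_mark = 16, 12
p_fake_positive = sum(comb(trials, k)
                      for k in range(pass_mark, trials + 1)) / 2 ** trials
print(round(p_fake_positive, 3))      # 0.038 -> hard to fake a positive
print(round(1 - p_fake_positive, 3))  # 0.962 -> trivial to force a negative
```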
[Trying to bring this back on topic]

There is not a big distinction between consciously selecting random results (acting unethically / in bad faith) and simply not trying very hard because one thinks, perhaps at least subconsciously, that A and B *should* sound alike, and so doesn't bring one's "A game" and simply "phones it in". That's another form of expectation bias, and we don't have a good way to preclude it. This is why applying statistical analysis to such results seems unsettling to me. You never know for sure why the results are random.
Here's an example, for all: if asked to participate in a DBT of "the bass response of aftermarket power cords", all of adequate gauge thickness to conduct the current required by the CD player, how many of you would bow out on the grounds that you wouldn't be a good test subject because you find the premise laughable and would therefore be biased?
Quote from: hlloyge on 14 October, 2012, 08:35:29 AM
Yes, but I am conducting the test at that one point in time.
Strange to read your initial postings in this thread now, after you have tried to downplay the applicability of the test to a one-time personal experience with clear-cut, full sensitivity/specificity.