HydrogenAudio

Hydrogenaudio Forum => Listening Tests => Topic started by: Yahzi on 2013-02-19 22:30:11

Title: Controlled testing
Post by: Yahzi on 2013-02-19 22:30:11
I know bias-controlled testing is brought up when discussing amps, CDPs and cables, but the testing methodology itself is almost never explained in detail to the layman, i.e. a step-by-step tutorial. I think it would be great if people could do this kind of testing at home for themselves. My question is, is it possible for anyone to do or does it require extensive know-how to get right?

Let's assume you wanted to test two amplifiers or two DACs - to the amateur, what is he in for? How complicated is the process? This thread is really just to outline what is required so that others can do the testing at home.
Title: Controlled testing
Post by: Yahzi on 2013-02-20 11:03:50
I see this forum does not get much activity. Perhaps the general section would have resulted in quicker replies.
Title: Controlled testing
Post by: Arnold B. Krueger on 2013-02-20 13:17:07
I know bias-controlled testing is brought up when discussing amps, CDPs and cables, but the testing methodology itself is almost never explained in detail to the layman, i.e. a step-by-step tutorial. I think it would be great if people could do this kind of testing at home for themselves. My question is, is it possible for anyone to do or does it require extensive know-how to get right?

Let's assume you wanted to test two amplifiers or two DACs - to the amateur, what is he in for? How complicated is the process? This thread is really just to outline what is required so that others can do the testing at home.



I'll tell you how this was done in the past.

The following pages detail a hardware ABX comparator that was produced for a number of years several decades ago:

http://home.provide.net/~djcarlst/abx_hdwr.htm

Approximately 50 systems were sold.

A photograph of one of these systems can be found in the fairly recent Meyer and Moran JAES paper about high-resolution audio.


Another, more highly integrated ABX Comparator was produced in the 1990s by the well-known professional audio power amplifier company QSC.

http://home.provide.net/~djcarlst/abx_qsc.htm

Some accounts suggest that from 100 to 200 systems were distributed, some to QSC amplifier dealers.

In the absence of further questions, the means for using the ABX Comparator system I mentioned first seem to be self-evident.
Title: Controlled testing
Post by: mzil on 2013-02-20 16:07:01
An ABX comparator switch box is definitely the best way to go; however, buying one today would, I suspect, be quite difficult. A quick search on eBay for a used one, which I just did, also found nothing.

As a layman I conducted a test of my golden-eared audiophile friend, some years back, to see if he could distinguish between a Mark Levinson power amp and a Yamaha integrated amp (with less power and selling for one seventh the price of the Mark Levinson), kept below clipping level of course. It was to settle a small bet. To up the ante I allowed him to use differing speaker wire on the two amps [his premium variety of choice vs. hardware store grade zip-cord (16 or 18 AWG, if I recall correctly)]. Having two variables, the wire and the amp, makes this a less than ideal scientific test, but like I said, this was really just to settle a bet.

The switching methodology was by hand [cable swapping], which is much slower of course, but this was to his liking as it eliminated any question in his mind as to whether a switch box was introducing audible degradation to the sound [his fear, not mine], which arguably might mask any subtle differences. Considering the countless reviews he had read in magazines that often compare products not even listened to on the same day, it is not surprising he was quite accepting of the delay at each switch. In truth, acoustical memory is fleeting, and to hear subtle differences reliably, rapid-fire switching is called for, but lucky for me he believed otherwise. [Each switch took about 15 seconds to accomplish, I'd say.] Alternatively, a purely passive speaker switch box, used in reverse, could have made much faster switches, but he didn't want that.

Also lucky for me was the fact that the Yamaha amp had a purely analog volume knob. [Rare these days, at least in typical AV receivers which use rather coarse .5 dB steps at best. You really need more like .2 dB or even .1 dB accuracy.] This was how I was able to level match it, to a small fraction of a dB, to the Mark Levinson without needing to introduce an outboard device. I assumed the frequency response of both was flat and I used a 1 kHz test tone on a CD as my signal generator. Even though I have no training in electronics I was able to figure out the basics of how a $20 Radio Shack AC voltmeter worked and I used its nifty dB scale to match the levels by tapping the signal at the speaker wires. [The speakers themselves were the test load so I had to endure hearing the tone as I matched the gain level of the Yamaha to the fixed gain of the Mark Levinson.] A quick check of the L vs R balance found there was no need to touch the balance knob; it was close enough.
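For anyone who wants to reproduce that arithmetic, here is a minimal sketch in Python (the voltage readings are made up; yours will depend on the amps, the tone level and the speakers) of how two voltmeter readings at the speaker terminals translate into a level difference in dB:

    import math

    v_fixed = 2.83       # hypothetical RMS volts at the fixed-gain amp's speaker terminals
    v_adjustable = 3.10  # hypothetical reading at the amp whose volume knob is being trimmed

    # Positive means the adjustable amp is louder; nudge its knob and re-read the
    # meter until this difference is within about 0.1 dB of zero.
    diff_db = 20 * math.log10(v_adjustable / v_fixed)
    print("Level difference: %+.2f dB" % diff_db)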

Once calibrated, he was free to adjust the master volume of the high-end preamp [also "Mark Levinson" (Proceed brand, to be exact)] at will, which fed the two amps simultaneously, and to zip freely between tracks on the music CDs he brought, via the handheld remote, from his seated position. He recorded his test answers on a clipboard he kept by his side. The strictly stereo-only system we were using (no surround sound or video) cost about $13,000 USD at the time.

I also had to take a crash course in statistical analysis, and by mutual agreement it was decided that he had to correctly identify the amp in 13 or more of the 16 trials to win the bet, on which I had even given him 2-to-1 odds in his favor. [Even 12 correct would probably have been good enough, actually, but I had given him favorable odds so I wanted to be sure.] I literally used a coin toss to determine how I would set the identity of A vs B for each trial, since I acted as the test conductor.
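Purely as an illustration of the statistics behind that criterion (this is just the binomial distribution, nothing specific to audio), a few lines of Python give the chance of scoring that well by guessing alone:

    from math import comb

    def p_at_least(n_correct, n_trials, p=0.5):
        """Probability of getting at least n_correct right by pure guessing."""
        return sum(comb(n_trials, k) * p**k * (1 - p)**(n_trials - k)
                   for k in range(n_correct, n_trials + 1))

    print(p_at_least(13, 16))  # ~0.011, about a 1% chance of guessing that well
    print(p_at_least(12, 16))  # ~0.038, still under the usual 5% significance threshold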

Because I was in the same room and had to announce after each switch, obscured from his view, "Ready when you are," it was not truly a double-blind test, just single-blind, but I think I did a fairly good job of saying these words without any inflection, or facial expression/body movement, to give anything away. I stood behind him, off to the side, during the testing.

I won.

Doing this as one person would have been impossible, but as you can see, with an assistant it can be pulled off in a single-blind manner. Long wires leading to another room could have made it truly double-blind. [Making the test conductor, who is aware of the true amp identities, 100% isolated from the test subject. The "ready to begin" instruction could be a light instead of a verbal command.]

If you have any questions, Yahzi, let me know.
Title: Controlled testing
Post by: Yahzi on 2013-02-20 17:03:12
It seems daunting to me...
Title: Controlled testing
Post by: mzil on 2013-02-20 17:20:20
If you have the means to level match (a $20 voltmeter and a test CD, or you could probably download the tone from the internet) and a precise volume knob on one of the devices to match to the other one, the test preparation takes just a few minutes to accomplish. The switching has to be done by a third party or your test won't be blind, and therefore bias is introduced.

For rapid fire switching you can use a passive speaker selector box  wired in reverse for speaker level switching, for example, or different line level inputs of a preamp/receiver to conduct testing of line level devices.

I wouldn't call it "daunting" but it does take some time, yes.

P.S. I had read up on how others had done it first. That helped. Stereo Review, now called "Sound and Vision" magazine, had some articles on it that I read.

Don't know if this helps but here's some more reading I just googled up:
http://bostonaudiosociety.org/bas_speaker/abx_testing.htm
Title: Controlled testing
Post by: Yahzi on 2013-02-20 17:23:49
Thanks for the in-depth reply, mzil. What do you mean by precise volume knob? What if the volume knob is imprecise?
Title: Controlled testing
Post by: mzil on 2013-02-20 17:43:46
To conduct a fair test, the two devices must be played at the exact same volume, right? So before starting the test they have to be level (volume) matched. The stereo Yamaha integrated amp I used, I think it was called AX500 (?), had a nice big volume knob which I could rotate ever so slightly to change its level until it was the same as the Levinson's fixed output level, as I peered down at the Radio Shack "VU level meter" while playing the test tone. [You are setting them by meter, not by ear.] The problem is most current preamps and receivers, at least the AV variety I am more familiar with, usually have digital displays showing only .5 dB increment steps. That's too coarse and not precise enough. You need .2 dB at the very least, or ideally .1 dB steps. This would be a rare example where people who say "analog is better" are actually right.

If the volume knob is too coarse you won't be able to level match the two devices to each other precisely enough to be certain that the difference being heard isn't actually just a small level difference. It is VERY common for small level differences, of say a half dB or so, to be misheard by the human hearing mechanism as "quality" differences rather than quantity differences.
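To put rough numbers on "too coarse" (just a back-of-the-envelope sketch, not a measurement of any particular unit): with steps of a given size, the best you can do is land within half a step of the target level, and that residual mismatch corresponds to a real voltage difference at the speaker terminals:

    # Worst-case residual mismatch for a given volume step size, and the
    # equivalent voltage (amplitude) difference it represents.
    for step_db in (0.5, 0.2, 0.1):
        residual_db = step_db / 2
        voltage_pct = (10 ** (residual_db / 20) - 1) * 100
        print("%.1f dB steps -> up to %.2f dB off (~%.1f%% in voltage)"
              % (step_db, residual_db, voltage_pct))

So 0.5 dB steps can leave you up to about 0.25 dB (roughly 3% in voltage) away from a true match, while 0.1 dB steps keep you within about 0.05 dB.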
Title: Controlled testing
Post by: Yahzi on 2013-02-20 17:57:12
That makes complete sense! But if you are comparing two amps with different volume knobs and you can't adjust the volume in 0.2 dB or 0.1 dB increments then all bets are off?
Title: Controlled testing
Post by: dhromed on 2013-02-20 18:16:31
My Denon receiver has an analogue knob, but its performance is ridiculously inaccurate and "smoothed staircase"-like, so I personally wouldn't use it for any such tests, unless I could modify the hardware to circumvent the knob, or use software to control the volume somehow.
Title: Controlled testing
Post by: Porcus on 2013-02-20 18:19:53
That makes complete sense! But if you are comparing two amps with different volume knobs and you can't adjust the volume in 0.2 dB or 0.1 dB increments then all bets are off?


You can of course feed one of them a 0.2 dB louder signal, but you can also bet your backside that someone will complain that the reason they don't hear any difference between their outrageously expensive component and an off-the-shelf product is that you have allegedly run the precious signal through a meat grinder which renders everything equal.
Title: Controlled testing
Post by: mzil on 2013-02-20 18:22:49
If you connect the test meter, play the test tone and discover the two units play at a different level which you have no means to correct for, then yes. All bets are off.

You can also level match two devices, say if neither has any level control at all, by introducing an attenuating volume control (or device) in the signal path of at least the louder one [or both]. Good ones aren't cheap, however.

edit to add: And as Porcus just mentioned, some might argue that any in-line device, regardless of price, will "distort" the integrity of the signal.  That was the beauty of the test I conducted, the audiophile snob couldn't make any such claims since my volume matcher knob was already integrated into the "inferior" product, yet he was unable to hear a difference with any statistical significance.
Title: Controlled testing
Post by: Yahzi on 2013-02-20 18:29:18
So then comparing different equipment isn't so straightforward. Certain conditions need to be met first.
Title: Controlled testing
Post by: mzil on 2013-02-20 18:33:21
If you don't care about validity, then no conditions need to be met. Play them sighted, at different levels, with different rooms, gear, on different days, different music, whatever you want. That's how most audio magazines seem to do it!

Published scholarly papers in the Journal of the AES, however, do it more like I do.
http://www.aes.org/e-lib/browse.cfm?elib=5549
Title: Controlled testing
Post by: Yahzi on 2013-02-20 18:47:57
Mzil, I wanted to send you a PM as I have a question unrelated to this thread. Problem is I'm not able to because you disabled your PM.
Title: Controlled testing
Post by: mzil on 2013-02-20 19:03:41
I have temporarily engaged PMs. Try it now.
Title: Controlled testing
Post by: phofman on 2013-02-20 19:18:30
Blind tests for different equipment are a bit more complicated, but blind tests for different software setups are trivial. Yet I have never seen any ABX reports from those claiming loudly that minimum latency / RAM timing / keeping everything in CPU cache / playback processes having max realtime priority / etc. matters.
Title: Controlled testing
Post by: Arnold B. Krueger on 2013-02-21 14:43:14
Blind tests for different equipment are a bit more complicated, but blind tests for different software setups are trivial. Yet I have never seen any ABX reports from those claiming loudly that minimum latency / RAM timing / keeping everything in CPU cache / playback processes having max realtime priority / etc. matters.


Don't underestimate the pervasiveness of desiring to be affirmed! ;-)

If I'm going to do all that work, at least I want to be told I'm right! ;-)
Title: Controlled testing
Post by: Arnold B. Krueger on 2013-02-22 17:06:23
That makes complete sense! But if you are comparing two amps with different volume knobs and you can't adjust the volume in 0.2 dB or 0.1 dB increments then all bets are off?


You can of course feed one of them a 0.2 dB louder signal, but you can also bet your backside that someone will complain that the reason they don't hear any difference between their outrageously expensive component and an off-the-shelf product is that you have allegedly run the precious signal through a meat grinder which renders everything equal.


That is a quick summary of every complaint about the results or procedures of audio DBTs that I've ever heard! ;-)

As they say, denial ain't just a river in Egypt.
Title: Controlled testing
Post by: Yahzi on 2013-02-23 08:14:43
If one does not have an ABX comparator then the test results won't be properly controlled but ... semi-controlled? You can level match, but if you can't switch quickly... or quick enough then I assume the results, while not completely useless, are not reliable to a significant enough degree. Have I got that right?
Title: Controlled testing
Post by: Arnold B. Krueger on 2013-02-23 19:18:35
If one does not have an ABX comparator then the test results won't be properly controlled but ... semi-controlled? You can level match, but if you can't switch quickly... or quick enough then I assume the results, while not completely useless, are not reliable to a significant enough degree. Have I got that right?



If you want the most sensitive results, you have to have some kind of fast switching available. Some authorities also want a transient-free switch which involves modulating the signal. Putting the actual switching under the control of the listener also maximizes sensitivity.
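To illustrate what a transient-free switch can amount to in practice (this is my own rough sketch in Python/numpy, not how any particular comparator does it): instead of a hard cut, the two time-aligned, level-matched streams are cross-faded over a few milliseconds, so no click marks the switch point:

    import numpy as np

    def switch_at(a, b, switch_sample, fade_samples=480):
        """Cross-fade from mono stream a to mono stream b starting at switch_sample.
        Assumes a and b are equal-length, time-aligned, level-matched arrays;
        480 samples is roughly 10 ms at 48 kHz."""
        out = a.astype(float)
        ramp = np.linspace(0.0, 1.0, fade_samples)          # linear fade; equal-power is another option
        seg = slice(switch_sample, switch_sample + fade_samples)
        out[seg] = a[seg] * (1.0 - ramp) + b[seg] * ramp    # blend across the fade window
        out[switch_sample + fade_samples:] = b[switch_sample + fade_samples:]
        return out

A hard cut at an arbitrary sample can produce an audible click even when the two sources are identical; a short fade hides the seam without masking real differences.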

There are a goodly number of different ways to run a double blind test, of which ABX as we do it in audio, is just one. So ABX is not the one right way. There are many options that are valid and many of them give comparable results.
Title: Controlled testing
Post by: Yahzi on 2013-02-24 07:54:31
I'm always trying to be introspective about what we hear and why we hear it, but looking at the other position - of audible differences - it doesn't always look like their case is incredible. Some of the DBTs on the site showed positive results, some of the negative results were then debunked.

Some of the links are broken. Why aren't there enough convincing DBTs on these things? Like CDPs ... and DACs... and amplifiers? Give us real ammo to work with. I assume this site doesn't contain all DBTs published online, surely? I'm skeptical of any position sans supporting evidence and although I'm steered into thinking that controlled tests should result in a null result, given perceptual research and the testing methodologies involved, the actual DBT research doesn't seem very comprehensive ... online. I mean, if the bulk of it is offline, you know what the usual counter-argument will be - why should they believe it.

Think about it from their vantage point. Are these tests compiled somewhere ... in a deep vault?  I guess what I'm trying to say is that *I* want proper ammo to use when it is necessary. No, they most likely won't undergo testing themselves, so there must be published evidence to a degree that cannot be overlooked or denied, but I don't see that overwhelming ammo - all I see are some null .. some positive, some null ... some faulty tests etc.
Title: Controlled testing
Post by: Arnold B. Krueger on 2013-02-24 13:23:17
I'm always trying to be introspective about what we hear and why we hear it, but looking at the other position - of audible differences - it doesn't always look like their case is incredible. Some of the DBTs on the site showed positive results, some of the negative results were then debunked.


That's how Science works - nothing about it is totally consistent. ;-)

Quote
Some of the links are broken. Why aren't there enough convincing DBTs on these things? Like CDPs ... and DACs... and amplifiers?


Some of the best info is copyrighted and protected from free distribution.  That's the nature of life - really good articles are managed so as to produce revenue for those who control them.

Quote
Give us real ammo to work with.


The operative word appears to be give, which ordinarily implies altruism on the part of the provider and an exploitative situation with the person making the demands.

Quote
I assume this site doesn't contain all DBTs published online, surely?


Of course not.


Quote
I'm skeptical of any position sans supporting evidence


And the opposite view has exactly what?

Quote
I'm steered into thinking that controlled tests should result in a null result, given perceptual research and the testing methodologies involved, the actual DBT research doesn't seem very comprehensive ... online.


I've been down this road for about 40 years. "The test results aren't convincing to me because... Not the latest hobby-horse equipment, people, testing environment, program material, done by my personal hero golden-eared reviewer, nicely formatted and for free."

I know of no comparable context where the job of researcher is any easier than it is for audio.


Title: Controlled testing
Post by: Yahzi on 2013-02-24 15:01:16
Quote
That's how Science works - nothing about it is totally consistent. ;-)


So you accept that sighted listening (excluding speakers) may or may not result in a null difference under controlled testing? Or is your view that it's an objective truth and one could apply that claim globally?

Quote
The operative word appears to be give, which ordinarily implies altruism on the part of the provider and an exploitative situation with the person making the demands.


Perhaps you did not take what I said in the spirit that I intended. What I meant was, it would be great to have a compiled list of CDP .. or amp or cable tests to serve as ammunition in these arguments - to have convincing evidence. I certainly would love to have it because the other position could just point fingers and demand credible evidence which, in all fairness, is not an unreasonable position to hold.

Quote
And the opposite view has exactly what?


Well if you look at this objectively the opposing view has at least a few positive DBT results under its belt. So even *if* you find more negative than positive, it's not a closed case. Is it? I mean, is there statistical evidence that this is the case?

I've heard on a number of forums where proponents of DBT will say something to the effect of "well, show me a single positive DBT of a CDP" or something like that. Or show me a positive result of speaker cable ..or amplifiers ... etc. If at least a few positive tests exist (excluding speakers here) for everything else then what does that mean?
Title: Controlled testing
Post by: Yahzi on 2013-02-24 16:31:30
Correct me if I'm wrong but I read an article by Sean Olive where (if I'm understanding what he says) he says that before 1994 there were no published scientific studies supporting DBT.
Title: Controlled testing
Post by: Arnold B. Krueger on 2013-02-24 17:36:45
Correct me if I'm wrong but I read an article by Sean Olive where (if I'm understanding what he says) he says that before 1994 there were no published scientific studies supporting DBT.


http://www.aes.org/e-lib/browse.cfm?elib=3839

High-Resolution Subjective Testing Using a Double-Blind Comparator

(JAES paper - peer reviewed, etc.)

"A system for the practical implementation of double-blind audibility tests is described. The controller is a self-contained unit, designed to provide setup and operational convenience while giving the user maximum sensitivity to detect differences. Standards for response matching and other controls are suggested as well as statistical methods of evaluating data. Test results to date are summarized."

The nearly identical AES conference paper was given the previous year in 1981.

I am informed that if one follows the Journal of the Acoustical Society of America, one would find papers describing earlier controlled testing relating to hearing. BTW the hearing folks use something they call an ABX test which is a little different from the one described above.
Title: Controlled testing
Post by: Arnold B. Krueger on 2013-02-24 17:49:13
Quote
That's how Science works - nothing about it is totally consistent. ;-)


So you accept that sighted listening (excluding speakers) may or may not result in a null difference under controlled testing?


Of course. Sighted listening can have random results due to one or more of the many uncontrolled influences. Doesn't happen very often because the strongest influences usually produce consistent results.

Quote
Quote
The operative word appears to be give, which ordinarily implies altruism on the part of the provider and an exploitative situation with the person making the demands.


Perhaps you did not take what I said in the spirit that I intended. What I meant was, it would be great to have a compiled list of CDP .. or amp or cable tests to serve as ammunition in these arguments - to have convincing evidence.


Such lists exist. A little use of google turns them up.

Quote
I've heard on a number of forums where proponents of DBT will say something to the effect of "well, show me a single positive DBT of a CDP" or something like that. Or show me a positive result of speaker cable ..or amplifiers ... etc. If at least a few positive tests exist (excluding speakers here) for everything else then what does that mean?


I know how to do a positive ABX test involving a CD player. In fact one is described in this article: Masters, Ian G., "Do All CD Players Sound the Same?", Stereo Review, Jan 1986, pp. 50-57. I was part of the listening panel and I reviewed the test setup and approved of it. The player had measurable faults that were obvious enough to make an audible difference with very critical program material.
Title: Controlled testing
Post by: DonP on 2013-02-24 18:12:49
If you want the most sensitive results, you have to have some kind of fast switching available. Some authorities also want a transient-free switch which involves modulating the signal. Putting the actual switching under the control of the listener also maximizes sensitivity.


If what you are testing is someone's claim that he can tell a difference in the sound of 2 components auditioned on different days in different buildings, then you don't need fast switching.



Title: Controlled testing
Post by: mzil on 2013-02-24 18:35:08
What I meant was, it would be great to have a compiled list of CDP .. or amp or cable tests to serve as ammunition in these arguments - to have convincing evidence.


Have you already seen this (http://home.provide.net/~djcarlst/abx_peri.htm) and this (http://www.hydrogenaudio.org/forums/index.php?showtopic=82777)?
Title: Controlled testing
Post by: Arnold B. Krueger on 2013-02-24 19:07:07
If you want the most sensitive results, you have to have some kind of fast switching available. Some authorities also want a transient-free switch which involves modulating the signal. Putting the actual switching under the control of the listener also maximizes sensitivity.


If what you are testing is someone's claim that he can tell a difference in the sound of 2 components auditioned on different days in different buildings, then you don't need fast switching.


The above ignores a clearly stated condition: "If you want the most sensitive results".  If you audition on different days in different buildings then you don't want the most sensitive results.
Title: Controlled testing
Post by: greynol on 2013-02-24 19:22:02
Agreed, although that doesn't help to address the placebophile argument that X is less fatiguing than Y or is more beneficial to the mental, psychological and/or physical health of the listener (yes, still confined to the realm of listening to audio).

I guess it doesn't matter since this generally boils down to metaphysical beliefs, even if the individual espousing them insists otherwise.
Title: Controlled testing
Post by: Yahzi on 2013-02-24 19:40:29
Quote
Have you already seen this and this?


I saw that on this site many of the DBTs are in French .. some links don't work and all the tests from the AES or Stereo Review can't be accessed. I see one CDP DBT with a failed link ... and the rest I see are in French. One failed; one was even a success ...
Title: Controlled testing
Post by: Yahzi on 2013-02-24 20:34:27
If the DBT results were published at the AES does that make them peer-reviewed?
Title: Controlled testing
Post by: Arnold B. Krueger on 2013-02-25 13:41:00
If the DBT results were published at the AES does that make them peer-reviewed?


Everything published in the JAES as a technical paper is AFAIK still heartily peer-reviewed.

Almost anything can be said in a conference paper and they usually get published separately from the Journal.

There are a few JAES articles that some sophisticated and influential members would like to peer review err, retroactively.  Many of them relate to exceptional claims of the audibility of Transient Intermodulation or Slew Induced distortion.

Doing your own DBTs isn't exceptionally difficult nor does it take a lot of rare resources if you are willing to be pragmatic.
Title: Controlled testing
Post by: mzil on 2013-02-25 19:34:26
^To add to that I'd like to say that having done it myself (a layman without any schooling in these disciplines), after I made the wager with my golden-eared audiophile friend, I felt vindicated in my beliefs and now when asked, "But have you ever conducted such carefully controlled tests yourself?" I can proudly respond in the affirmative.

It is quite empowering Yahzi. Give it a try!

Title: Controlled testing
Post by: Yahzi on 2013-02-25 20:18:19
Quote
It is quite empowering Yahzi. Give it a try!


I want to! But first I need to carefully study the necessary steps involved.    Without an ABX comparator I would be at a disadvantage .. I would imagine. Although I meant what I said, that I would love to meet some of you guys in the near future and perhaps learn more about the testing in person and even be tested myself.
Title: Controlled testing
Post by: Arnold B. Krueger on 2013-02-26 01:01:27
Quote
It is quite empowering Yahzi. Give it a try!


I want to! But first I need to carefully study the necessary steps involved.    Without an ABX comparator I would be at a disadvantage .. I would imagine. Although I meant what I said, that I would love to meet some of you guys in the near future and perhaps learn more about the testing in person and even be tested myself.


Just about everybody can have an ABX Comparator on their computer - they exist as free downloads for just about every kind of computer that runs Java.

Many things of interest including just about everything related to lossy compression can be tested this way.

While it takes a step of faith in the quality of the converters that can be attached to computers, most issues that relate to hardware can also be recast as files for a software ABX  Comparator.

Here is an archive of files for use with a software ABX Comparator:

http://www.ethanwiner.com/aes/
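For anyone curious what such a program actually does, the core loop is tiny. The sketch below is purely illustrative (play and ask are placeholder hooks I've made up, standing in for whatever plays the audio and collects the listener's answer), not the code of any real comparator:

    import random

    def run_abx(play, ask, n_trials=16):
        """play(label) plays clip 'A' or 'B'; ask() returns the listener's guess for X."""
        score = 0
        for _ in range(n_trials):
            x = random.choice(['A', 'B'])   # hidden, per-trial identity of X
            play('A'); play('B'); play(x)   # a real comparator lets the listener replay at will
            score += ask() == x
        return score                        # judge against a binomial criterion, e.g. 12+ of 16

Everything else in a real comparator is user interface, level-matched playback, and time alignment of the files.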
Title: Controlled testing
Post by: Yahzi on 2013-02-28 11:52:05
Arny, I noticed on occasion you bring up time synchronisation. Can you please explain what that entails and the importance of doing it? How does one time synch and if you don't do this, would it invalidate the blind or double blind test results?
Title: Controlled testing
Post by: pdq on 2013-02-28 14:29:06
It's not that complicated. When you switch rapidly between two audio streams, if they are not exactly synchronized then you will hear a little "hiccup" from either a few samples being repeated or a few being skipped, and probably also a slight click as you switch. This tells you whether X is matched to A or B, even if A and B are identical except for the time mismatch.
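As a rough sketch of how one might measure that offset before building test files (Python with numpy/scipy; the file names are hypothetical), cross-correlation of the two captures gives the sample lag directly:

    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import correlate

    rate_a, a = wavfile.read("capture_device_a.wav")   # hypothetical captures of the same material
    rate_b, b = wavfile.read("capture_device_b.wav")
    a = a.astype(float); b = b.astype(float)
    if a.ndim > 1: a = a[:, 0]                         # use one channel for alignment
    if b.ndim > 1: b = b[:, 0]

    # Lag (in samples) at which b best lines up with a.
    lag = int(np.argmax(correlate(a, b, mode="full"))) - (len(b) - 1)
    print("Offset: %d samples (about %.2f ms)" % (lag, 1000.0 * lag / rate_a))

Trimming that many samples from the leading file before loading both into the comparator removes the giveaway.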
Title: Controlled testing
Post by: Yahzi on 2013-02-28 15:45:12
Okay, but if the streams are not synched ... that will invalidate the test results? I understand quick switching ... but I've seen many tests that are not time synched, so I'm just asking here whether that would be enough to throw the results in the air.
Title: Controlled testing
Post by: Arnold B. Krueger on 2013-02-28 19:37:16
Okay, but if the streams are not synched ... that will invalidate the test results? I understand quick switching ... but I've seen many tests that are not time synched, so I'm just asking here whether that would be enough to throw the results in the air.


If you don't synchronize the streams it is possible to correctly identify them, even if they are otherwise identical.

The test isn't really blind.
Title: Controlled testing
Post by: Yahzi on 2013-03-15 09:22:54
So double blind testing only really works if you are experienced in it, i.e. having trained yourself to do it, knowing what to listen for. So if you lure inexperienced DBT 'testers' into the test you could invalidate the results that way. Surely?
Title: Controlled testing
Post by: Arnold B. Krueger on 2013-03-15 19:28:35
So double blind testing only really works if you are experienced in it, i.e. having trained yourself to do it, knowing what to listen for. So if you lure inexperienced DBT 'testers' into the test you could invalidate the results that way. Surely?


Obvious anti-DBT bias noted. In fact, listening tests of any kind only work if you have experienced listeners. In general, sighted evaluations don't work because of the unresolved problems with listener bias.
Title: Controlled testing
Post by: Porcus on 2013-03-15 22:31:15
So double blind testing only really works if you are experienced in it, i.e. having trained yourself to do it, knowing what to listen for. So if you lure inexperienced DBT 'testers' into the test you could invalidate the results that way. Surely?


With 'testers', do you mean the person(s) who listen, or the person(s) who administer the test and collect the assessments from the former?

If you 'lure' someone inexperienced with listening into a listening test, then they might not spot differences they would if they were trained, but that is the inexperience with the 'T' part, not the 'DB' part.

If you 'lure' someone inexperienced (... dare I say clueless?) into setting up a DBT, that person may of course invalidate the test by making errors an experienced tester would avoid. Those errors could be related to double-blind framework (for example, not everyone knows what that 'double' is about), or unrelated (e.g. not matching volume, and making the wrong interpretations).
Title: Controlled testing
Post by: krabapple on 2013-03-16 04:38:51
So double blind testing only really works if you are experienced in it, i.e. having trained yourself to do it, knowing what to listen for. So if you lure inexperienced DBT 'testers' into the test you could invalidate the results that way. Surely?


If a person already claims to hear a difference between A and B,  a blind test  of that person is an excellent way to test them on that claim.

If you are a researcher and want to determine if humans could hear a difference between A and B, then training is part of the experimental protocol.

Surely you  are being purposely obtuse?
Title: Controlled testing
Post by: Yahzi on 2013-03-25 16:30:11
I was just being curious and sometimes curiosity gets the better of me.  I've been told on several occasions that human auditory memory is very short. Are there any studies I can look at that can confirm this? Thanks.
Title: Controlled testing
Post by: Arnold B. Krueger on 2013-03-25 19:07:23
I was just being curious and sometimes curiosity gets the better of me.  I've been told on several occasions that human auditory memory is very short. Are there any studies I can look at that can confirm this? Thanks.


The topic is discussed pretty thoroughly, with extensive footnotes, in This Is Your Brain On Music by Daniel Levitin.

http://www.amazon.com/This-Your-Brain-Music-Obsession/dp/0452288525

The short answer is that if the goal is to remember small, subtle details, probably a second or less. Obviously, gross details such as the name of the song or whether we liked the performance can be recalled for months or years.

I built an ABX Comparator with an adjustable delay while switching. In general my ability to distinguish small differences becomes degraded when this delay is much more than  a second.
Title: Controlled testing
Post by: Yahzi on 2013-03-25 19:11:32
Thanks Arnold. Are there any other books or studies you would recommend? I've already ordered "This Is Your Brain On Music" but I would like to know if there are any papers, or research you could recommend I read in addition. Thanks again!
Title: Controlled testing
Post by: krabapple on 2013-03-26 18:03:03
see this thread

http://www.hydrogenaudio.org/forums/index.php?showtopic=7645
Title: Controlled testing
Post by: Yahzi on 2013-03-26 19:47:04
Yeah I saw the thread but I see a dozen different books. How do I know that there is anything of value specifically concerning auditory memory in there? I have no idea. That's why I'm asking so I can be pointed in the right direction. I can't order all these books you know. 
Title: Controlled testing
Post by: Yahzi on 2013-03-27 09:38:44
Is the type of auditory memory comparing two sounds over time called echoic memory or if not, what type of memory is that? I read that echoic memory lasts 2-5 seconds. But I'm not sure if that is the same thing as comparing adjacent sounds in time.

Can anyone clarify this?
Title: Controlled testing
Post by: Arnold B. Krueger on 2013-03-27 14:54:40
Is the type of auditory memory comparing two sounds over time called echoic memory or if not, what type of memory is that? I read that echoic memory lasts 2-5 seconds. But I'm not sure if that is the same thing as comparing adjacent sounds in time.

Can anyone clarify this?


Never heard the term before, but clearly it is widely accepted in the study of human perception.

Using http://en.wikipedia.org/wiki/Echoic_memory as my guide, I would say that echoic memory refers to something that has less detail and longer duration than the kind of memory involved with hearing subtle differences near the threshold of hearing.
Title: Controlled testing
Post by: Jplus on 2013-04-06 11:07:22
Using http://en.wikipedia.org/wiki/Echoic_memory as my guide, I would say that echoic memory refers to something that has less detail and longer duration than the kind of memory involved with hearing subtle differences near the threshold of hearing.

According to Carroll, Psychology of Language, 4th edition, the auditory sensory store, which temporarily retains the impressions "in a raw, unanalyzed form", lasts for about 4 seconds. So that seems to suggest there isn't another, even more detailed stage of memory preceding it. Though I wouldn't be surprised if there's still some kind of compression or filtering (or just degradation) going on during those 4 seconds, which could definitely explain why you can recall more details if you heard them less than a second ago.

Fun fact: the visual sensory store lasts only about 1 second.
Title: Controlled testing
Post by: Porcus on 2013-04-06 11:30:34
Fun fact: the visual sensory store lasts only about 1 second.


So that “store” is something other than where we “store” afterimages? http://en.wikipedia.org/wiki/Afterimage
Title: Controlled testing
Post by: Kees de Visser on 2013-04-06 13:43:38
Fun fact: the visual sensory store lasts only about 1 second.
I recently found this site by accident. The information looks rather reliable but I'm not qualified to judge, so read with care.
http://webspace.ship.edu/cgboer/memory.html