Topic: Understanding ABX Test Confidence Statistics

Understanding ABX Test Confidence Statistics

Reply #75
This is as ridiculous a statement as I have heard & shows your lack of knowledge & understanding - there's no such thing as a false negative in sighted tests

I do not understand this at all. If you have a sighted test where you believe the two items under test ought to sound identical (but in fact they don't) - surely that could quite easily generate false negatives?

Again, a complete fail - if you "believe" that there's no audible difference but you hear a difference (because one exists) how could that possibly be a false negative, doh? Do you guys think logically or ......?


I'm talking about the possibility of a situation where there is very subtle difference - perhaps on the edge of perceptibility - that would be possible to notice when listening very carefully but where your bias (subconsciously or otherwise) lowers the effort you put in and you therefore don't notice it, or notice it less consistently.

I can't see any evidence that that's impossible in either sighted or blind tests, although I suspect it would be orders of magnitude less likely than the opposite bias in a sighted test.

Understanding ABX Test Confidence Statistics

Reply #76
The specificity of the test is what is being discussed & as a result, the reliability of the test

Why do you fail to understand that in an online ABX test this depends only on the honesty of the persons involved?
For example if the guys doing the test with the AIX files were honest, they would have immediately taken down the test after noticing the easily audible time delays...
If amir was honest he would have answered simple questions instead of dancing around for several pages and finally making a lame excuse...


Are you talking about pre-testing using anchors or hidden anchors within the test procedure?

Google: multiformat listening test


If we included false positive controls in sighted listening tests then they would also give us more information about the specificity of that listening test - how prone it was to type I errors. So it does work both ways & would be useful in both tests.

Nope.
Also, send me any online ABX test and the results you want and I will very likely be able to produce them within a few minutes, regardless of how many "traps" you implement.


However, you are happy to shout that sighted tests produce false positives & are not to be trusted.
I'm saying the same could be true about blind tests - they produce false negatives - we simply don't know.

Do you seriously still not get it, or do you just act stupid on purpose now?

There's however one thing you seem to accept: that sighted tests produce countless false positives. Hence they are completely useless for our purposes.


It's not about whether a null result proves inaudibility or not - it's about why anyone would bother to run a test that is flawed in this way.
This is the scientific discussion section, right? Let's have some scientific discussion then - is a blind test of any value? How do we know? Why bother with one if it's prone to type II errors?

Again, and for the last time,
a person interested in whether he/she can hear differences will put effort into the test. An ABX test as such is a tool for that person to get statistical validation of what that person thinks he/she heard in a sighted comparison.
Null results are null results. To support the alternative hypothesis you need positive results.
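To put numbers on that: the statistical validation an ABX tool reports is just the one-sided binomial probability of scoring at least as well by coin-flipping. A minimal sketch in plain Python (standard library only; the 16/20 and 10/20 figures are made-up examples, not results from this thread):

Code:
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided binomial tail: P(X >= correct) under pure guessing (p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(abx_p_value(16, 20))  # ~0.006: a positive result, supports an audible difference
print(abx_p_value(10, 20))  # ~0.588: a null result, indistinguishable from guessing

A 16/20 score is strong evidence for the alternative hypothesis; a 10/20 score supports nothing either way, which is exactly why null results are just null results.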

What we are interested in are true positives, removing bias to avoid false positives with a double blind test protocol.

That ABX/DBTs work can easily be shown with low anchors, or by pointing at the countless positive tests.
"I hear it when I see it."

Understanding ABX Test Confidence Statistics

Reply #77
I'm talking about the possibility of a situation where there is very subtle difference - perhaps on the edge of perceptibility - that would be possible to notice when listening very carefully but where your bias (subconsciously or otherwise) lowers the effort you put in and you therefore don't notice it, or notice it less consistently.


These are the kinds of influences that are dealt with during listener training, and also during the test itself.

For example, when I disqualified myself as a listener, it was because I had absolutely and completely failed a training exercise that had been very easy for me to ace as recently as this summer.

ABX makes this sort of thing abundantly clear.  Note that I explained this to JKeny and associates at the time, and look how that situation has been spun, even by HA regulars.

In general, observation of the test, and analysis of the scores can be used to develop insights into the possible non-audio causes of variations in listener sensitivity.

I know of  no perfect test, and no test that can't be consciously or unconsciously gamed.

A subtle difference on the edge of audibility has a lot of natural sources of variation. A simple common respiratory infection (IOW a common cold) can make all of the difference in the world.

This is no different than any other form of human endeavor. A theatrical group I work with does various warm up exercises before rehearsals and shows. How that casual simple warm up exercise works out often predicts quite a bit about the rest of the night. If nothing else it puts people on notice about their night's potential for a good well-coordinated performance.

Understanding ABX Test Confidence Statistics

Reply #78
You are so hung up on the cheating to get positive results - why does ArnyK's cheating to get a null result not also apply - he didn't listen, just randomly hit keys?


There was no cheating or hitting of random keys.  There was an objection to the rate at which the unknowns were listened to, but when I can hear differences I often obtain positive results while working at that speed.
Rubbish - nobody can listen, decide & use a mouse to select a button in 1 second, which is what you took for several of your trials. Or 2 seconds - how many trials did you do at 2 seconds each? 3 seconds? 4 seconds? You just didn't listen & you admitted as much on AVS


Just a second. You now claim it is impossible to conduct a trial in 1 second, and then claim that I posted an ABX log showing that I actually did such a thing?  There's an obvious inconsistency here!

If there is an ABX log showing that I did such a thing, then it is obviously possible!

Given the very many false claims I've seen here lately, I'd like to see evidence of that claim. I'm blocked from AVS so I can't get the evidence myself.

Try to follow, Arny - it was your test, after all, so you should be familiar with it.
You tried to pass off a 1 second duration for a number of your trials as a real trial when in fact it was just you hitting a random button - no listening attempted; that couldn't possibly happen in 1 second, nor 2, 3 or 4 seconds. Your attempts at extricating yourself from the hole you have dug are just embarrassing.

If you want to try to explain in detail all the steps you took in a listening trial which lasted 1 second in total, go ahead.

Understanding ABX Test Confidence Statistics

Reply #79
look how that situation has been spun [...] by HA regulars.

 

Are you going out of your way to support my earlier point?

Understanding ABX Test Confidence Statistics

Reply #80
in fact it was just you hitting a random button - no listening attempted

<italics used for emphasis: mine>

...and there you go again.  Do you have any demonstrable proof besides hollow anecdotes?

I didn't think so!

Understanding ABX Test Confidence Statistics

Reply #81
jkeny, why not take the bitching to PMs? Nobody is interested in it.
"I hear it when I see it."

Understanding ABX Test Confidence Statistics

Reply #82
Anyway, I see I'm wasting my time expecting any logic or scientific discussion of a test.
This topic has been brought up before & it appears nothing has been learned or will be learned.

A post from 2010:
Quote
You can't rule out bias towards false negatives because they "seem" like they wouldn't happen all that often because such tests take effort. How on earth does effort eliminate bias? That is utterly unscientific. If you want to rule out such a bias you have to include a control for it in the test. Something that is pretty darned easy. All you need is a series of Cs to go with your A and B when doing ABX that offer a meaningful variety of known barely audible differences. Without testing the test with known audible differences that approach the established thresholds of human hearing, how can one make any determination about the sensitivity of any given ABX test?
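As a rough illustration of the "series of Cs" idea in that quote, a trial schedule could interleave hidden control trials with known, graded differences among the real A/B trials and score them separately. A hypothetical sketch (the dB labels are placeholders, not taken from any actual test):

Code:
from random import shuffle

# Hypothetical control conditions with known, graded differences (here: level offsets).
CONTROLS = ["control_+1.0dB", "control_+0.5dB", "control_+0.25dB"]

def make_schedule(real_trials: int = 16, per_control: int = 2) -> list:
    """Randomized schedule mixing real A/B trials with hidden positive controls."""
    schedule = ["real"] * real_trials + [c for c in CONTROLS for _ in range(per_control)]
    shuffle(schedule)  # the listener cannot tell control trials from real ones
    return schedule

print(make_schedule())

If a listener misses the easy controls, a null result on the real trials in that session says little about audibility - which is the sensitivity check the quote is asking for.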


There's even a recent (Jan 2015) post from Arny:
Quote
The corresponding problem is that blind tests deal with this problem of false positives very effectively, but can easily produce false negatives.

There are effective systematic ways to avoid false negatives. The methodology I favor involves creating a series of musical samples that have technical flaws of the kind that we are trying to detect, but start out with the flaw at such a level that almost everybody will hear it. There is a sequence of musical samples that contain the flaw at decreasing levels, right down to real-world levels characteristic of the situation at hand.

Then if you have a listener that can't hear the problem when it is large, you don't waste time trying to get him to hear it reliably when it is small. Exactly what constitutes large and small may not be known initially, but will come to light as a natural by-product of the testing procedure and analysis of data when it is applied to a number of listeners.
He admits to the fact that blind tests can easily produce false negatives, but then goes on to outline his methodology to avoid false negatives, which is a pre-screening test & has nothing to do with determining the level of false negatives during the test itself. All the pre-screening does is show that the test setup & listener are capable of hearing a certain level of impairment.
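For concreteness, the descending-series methodology Arny describes in the quote could be outlined like this; passes_level is a stand-in here, simulating a listener with a fixed personal threshold rather than running real listening trials:

Code:
from typing import Optional

def passes_level(level_db: float, threshold_db: float = 1.0) -> bool:
    """Stand-in for a real ABX block: simulates a listener who reliably
    detects any impairment at or above their personal threshold."""
    return level_db >= threshold_db

def screen_listener(levels_db=(6.0, 3.0, 1.5, 0.75)) -> Optional[float]:
    """Walk a ladder of decreasing impairment levels, most obvious flaw first.
    Stop at the first failure: no point testing smaller flaws the listener
    already can't hear. Returns the smallest level still detected, or None."""
    smallest_passed = None
    for level in levels_db:
        if not passes_level(level):
            break
        smallest_passed = level
    return smallest_passed

print(screen_listener())  # 1.5 for the simulated listener above

As the commentary above notes, this establishes what a listener can detect before or during a test; it does not by itself measure how many false negatives occur in the main trials.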

And again he restates the same mistake, but at least he states that we should be concerned with false negatives in ABX tests - not the message he is giving here. Funny, this capriciousness:
Quote
ABX is well known and generally agreed upon to be highly resistant to false positives. So, we only have to worry about false negatives affecting our experiments based on ABX. False positives are systematically made into mission impossible by ABX.

One way to avoid false negatives is to pre-select the listeners so that they are balanced or even personally biased to be overwhelmingly in favor of positive outcomes. This has been done many times.

Understanding ABX Test Confidence Statistics

Reply #83
in fact it was just you hitting a random button - no listening attempted

...and there you go again. Do you have any demonstrable proof besides hollow anecdotes? [...]

The evidence is in the timing of the trials, which is recorded in the log & plain to see. As I asked Arny, I'll ask you - please explain how to actually listen to two samples & select the XisB or YisB button in 1 second. Arny did 3 trials, each taking 1 second.

Maybe you could tell us what you think the minimum timing is for a valid trial - you know, one where the test subject actually tries to identify the sample (hint: listening is involved).
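For what it's worth, the per-trial timings being argued about can be read straight off a foo_abx log like the ones posted later in this thread. A minimal sketch, assuming the "HH:MM:SS : nn/NN" line format shown there:

Code:
import re
from datetime import datetime

TRIAL_RE = re.compile(r"^(\d{2}:\d{2}:\d{2}) : \d{2}/\d{2}$")

def trial_gaps(log_text: str) -> list:
    """Seconds between consecutive trial entries in a foo_abx log."""
    times = [datetime.strptime(m.group(1), "%H:%M:%S")
             for line in log_text.splitlines()
             if (m := TRIAL_RE.match(line.strip()))]
    return [int((b - a).total_seconds()) for a, b in zip(times, times[1:])]

sample = """14:05:41 : 17/18
14:05:42 : 18/19
14:05:44 : 19/20"""
print(trial_gaps(sample))  # [1, 2] -> a one-second and a two-second answer gap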

Understanding ABX Test Confidence Statistics

Reply #84
These are the kinds of influences that are dealt with during listener training, and also during the test itself. [...] A subtle difference on the edge of audibility has a lot of natural sources of variation. A simple common respiratory infection (IOW a common cold) can make all of the difference in the world. [...]


Absolutely. I do a little work with music and I've had to postpone jobs for days because a cold has simply prevented me from hearing enough detail to make good decisions. And apart from direct physical effects, I've semi-consistently noticed a desire to turn levels up when working immediately after a large meal, and I've often had difficulty concentrating.

Human hearing and psychology truly are multi-faceted, and I guess understanding that is the first key to designing out external factors and preconceptions as much as possible, so that tests produce more reliable, repeatable data.

I agree that changing from a sighted to a (double-)blind procedure is probably the single most effective way of significantly reducing the impact of nearly every type of bias.

I'm not really trying to make a broader point; I'm just curious as to why anyone would come to the conclusion that false negatives are impossible in a sighted test, because I can't conceive of a testing format that removes them entirely, let alone one where biases are as unavoidable as they are in a sighted test.

Understanding ABX Test Confidence Statistics

Reply #85
[...]

I've taken as little as a second to enter a choice.

As has been explained over and over again, results that fail to disprove the null hypothesis are not proof of the null hypothesis.  Other than tormenting Arny with his failed test, you aren't accomplishing anything.  You're definitely not doing anything to persuade anyone that we should adopt your idiotic notion of what should constitute proper scientific discovery, either.

But alas, I don't think you're an idiot, rather just another intellectually dishonest person with a financial interest in obscuring the truth from people who largely are idiots.

Understanding ABX Test Confidence Statistics

Reply #86
I've never thought of it as a race before, but out of curiosity I tried just moments ago, putting my mind to completing an ABX as fast as I possibly could while still listening for the sound differences and voting accordingly, not just randomly guessing. I can pass an ABX at about two seconds per trial, on average - in fact, one trial I even did in just one second, and I'm confident I could do *much* better, getting many one-second responses with a little practice, not that I plan on proving that.

Fast results do *not* prove random guessing. Scroll down to trial 18 or so to see my one-second trial:

Code:
foo_abx 2.0 report
foobar2000 v1.3.3
2015-02-02 14:04:53

File A: Let's Stay Together.mp3
SHA1: 8ad8471870c0082dd8e8fcbfb3408e60a3c9cf79
File B: Eleanor Rigby [Strings Only, Take 14].mp3
SHA1: ea9d76e58ba1249e41a64d11a5ae23006de8850c

Output:
DS : Primary Sound Driver
Crossfading: NO

14:04:53 : Test started.
14:05:02 : 01/01
14:05:05 : 02/02
14:05:07 : 03/03
14:05:09 : 04/04
14:05:11 : 04/05
14:05:14 : 05/06
14:05:16 : 06/07
14:05:18 : 07/08
14:05:21 : 08/09
14:05:23 : 09/10
14:05:26 : 10/11
14:05:28 : 11/12
14:05:30 : 12/13
14:05:32 : 13/14
14:05:34 : 14/15
14:05:36 : 15/16
14:05:38 : 16/17
14:05:41 : 17/18
14:05:42 : 18/19
14:05:44 : 19/20
14:05:44 : Test finished.

 ----------
Total: 19/20
Probability that you were guessing: 0.0%

 -- signature --
bdbe749ab482aec460099688284ad0b48e076251
http://www.foobar2000.org/abx/signaturecheck
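For the record, the "0.0%" in the log above is just rounding. The exact one-sided probability of scoring at least 19/20 by guessing is easy to verify with standard-library Python:

Code:
from math import comb

# P(X >= 19 | n = 20, p = 0.5): only the 19/20 and 20/20 outcomes count.
p = (comb(20, 19) + comb(20, 20)) / 2 ** 20
print(p)  # 21/1048576 ~= 0.002%, which foo_abx rounds down to 0.0%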

Understanding ABX Test Confidence Statistics

Reply #87
But alas, I don't think you're an idiot, rather just another intellectually dishonest person with a financial interest in obscuring the truth from people who largely are idiots.

As my dear friend Amir would say, John understands the "business" side of things, and "ABX" is not good for the business of Biochemically engineered DACs, $50k amps, etc, etc.
"ABX" is the dog whistle for "blind" to the D-K gang... and we know how much those who "trust their ears" love to peek.
ITU-standard blind tests of Biochemically engineered DACs won't be forthcoming any time soon.

cheers,

AJ
Loudspeaker manufacturer

Understanding ABX Test Confidence Statistics

Reply #88
[...]

I've taken as little as a second to enter a choice.
After listening to try to discriminate between tracks? Rubbish (unless the tracks are so different that you don't need a blind test at all). Tell us the two tracks/samples you used for the 1 second test & we will see if these are tracks that take only a second to discriminate between.

Quote
As has been explained over and over again, results that fail to disprove the null hypothesis are not proof of the null hypothesis. [...] But alas, I don't think you're an idiot, rather just another intellectually dishonest person with a financial interest in obscuring the truth from people who largely are idiots.

If you want to prove yourself NOT to be intellectually dishonest, then tell us the tracks above.

Understanding ABX Test Confidence Statistics

Reply #89
I've never thought of it as a race before [...] Fast results do *not* prove random guessing. [...]

File A: Let's Stay Together.mp3
File B: Eleanor Rigby [Strings Only, Take 14].mp3
[...]
Total: 19/20

I call this more intellectual dishonesty - the two tracks are of different songs, FFS - are you serious!!

Understanding ABX Test Confidence Statistics

Reply #90
The two songs are easy to tell apart, yes - so? How is posting the names of the songs I used, for everyone to see, being intellectually dishonest? Whenever A and B are easy to ID, for whatever reason, they are easy to vote on, no? My point was that a successful ABX test, with true listening, can be done in one or two seconds per trial. Did I say that's always true? NO! The claim that any fast test is automatically just randomly entered is false.

Understanding ABX Test Confidence Statistics

Reply #91
These are the kinds of influences that are dealt with during listener training, and also during the test itself. [...]

Absolutely. [...] I agree that changing from a sighted to a (double-)blind procedure is probably the single most effective way of significantly reducing the impact of nearly every type of bias. [...]

As I said, pre-screening does not address the issue, but go ahead & deny that a problem exists.
Although in other threads it's admitted to be an issue by none other than Arny & others.

Understanding ABX Test Confidence Statistics

Reply #92
The two songs are easy to tell apart, yes, so? Whenever A and B are easy to ID, they are easy to vote on, no?

More intellectual dishonesty from mzil

ArnyK's test, in which he cheated, was of FLAC vs 256 kbps MP3 of the following:
File A: 15 Haydn_ String Quartet In D, Op_ 76, No_ 5 - Finale - Presto + cues 256kbps.mp3
SHA1: f24d8c506ae5d38fd7d3a8e7700ee8595cd5e025
File B: 15 Haydn_ String Quartet In D, Op_ 76, No_ 5 - Finale - Presto + cues.wav
SHA1: 961320fa0baa1983130304bed02df943a32cfe25

I'm sure he will give you these files to run your test again, but quit the dishonesty - it's unbecoming.

Understanding ABX Test Confidence Statistics

Reply #93
You mean if it overcomes the massive skew towards producing false negatives & actually succeeds in discriminating two files/devices?

The alleged skew is your invention. You haven't shown any evidence that there is any. Even if there were, the fact that negative results don't count makes this a moot point.

Quote
There's a difference between an invalid test & a null result - you seem not to be aware of this

No, I just regard invalid tests as completely irrelevant.

Quote
Again, it's not a contest between sighted & blind listening - can you not get that into your head? It's a discussion on how invalid blind testing is - it's greatly skewed towards false negatives & hence towards a null result.  A test that is biased & only controls for type I errors is not worth bothering with - it's of no real value.

I rather fear that some other people might not get the irrelevance of sighted tests into their heads, so I'd rather repeat it once too often. People who shoot down blind tests with made up arguments often seem to want to make sighted tests look respectable in comparison. 
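Incidentally, the alleged skew towards false negatives is not unknowable: once you assume a per-trial detection ability, the false-negative rate of a given ABX protocol is directly computable. A minimal sketch (the 70% ability and 20-trial figures are assumed examples, not measurements):

Code:
from math import comb

def binom_tail(n: int, k: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def min_passing_score(n: int, alpha: float = 0.05) -> int:
    """Smallest k whose guessing probability P(X >= k | p = 0.5) is below alpha."""
    return next(k for k in range(n + 1) if binom_tail(n, k, 0.5) <= alpha)

n = 20
k_pass = min_passing_score(n)       # 15 correct out of 20 needed at alpha = 0.05
power = binom_tail(n, k_pass, 0.7)  # P(pass) for a listener who is right 70% of the time
print(k_pass, round(power, 3))      # 15 0.416

Under these assumptions a genuinely 70%-accurate listener passes only about 42% of 20-trial sessions, i.e. a false-negative rate near 58% - which is why more trials, trained listeners, and positive controls matter.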

Understanding ABX Test Confidence Statistics

Reply #94
rubbish

I need only isolate the portion of a pair of level-matched, time-synched samples that produces a tell.

About how long does short-term auditory memory last (you know, the actual reliable kind)?

Understanding ABX Test Confidence Statistics

Reply #95
I call this more intellectual dishonesty - the two tracks are of different songs, FFS - are you serious!!


You remind me so much of amir, it is scary. Here's my first attempt (aiming for speed) using the broken AIX test files:
Code:
foo_abx 2.0 report
foobar2000 v1.3.7
2015-02-03 00:44:51

File A: Just_My_Imagination_A2.wav
SHA1: 2a913086b5e4c2fa052e643a2ad11c18ea598cff
File B: Just_My_Imagination_B2.wav
SHA1: 654dcecc9137b29f980d8d28fd63b5470b4695dd

Output:
DS : Primärer Soundtreiber
Crossfading: NO

00:44:51 : Test started.
00:44:58 : 01/01
00:44:59 : 02/02
00:45:01 : 03/03
00:45:02 : 04/04
00:45:04 : 05/05
00:45:05 : 06/06
00:45:06 : 07/07
00:45:07 : 08/08
00:45:08 : 09/09
00:45:09 : 10/10
00:45:11 : 11/11
00:45:12 : 12/12
00:45:13 : 13/13
00:45:15 : 14/14
00:45:16 : 15/15
00:45:17 : 16/16
00:45:18 : 16/17
00:45:19 : 17/18
00:45:21 : 18/19
00:45:22 : 19/20
00:45:22 : Test finished.

----------
Total: 19/20
Probability that you were guessing: 0.0%

-- signature --
d3e2e68f2e6b862ea4a5e9e0b56ff7962a91d0b3


All I did was set the start position accordingly; the rest is listening and clicking the right buttons. No cheating. I could easily do this faster by using hotkeys.

I'm not sure what you will make of this, since you seem a bit confused about all of this.
"I hear it when I see it."


Understanding ABX Test Confidence Statistics

Reply #97
You mean if it overcomes the massive skew towards producing false negatives & actually succeeds in discriminating two files/devices?

The alleged skew is your invention. You haven't shown any evidence that there is any. [...] People who shoot down blind tests with made up arguments often seem to want to make sighted tests look respectable in comparison. [...]

Again, you fail - I'm asking for more robust blind tests by addressing a known problem - what exactly is it you object to about this?
Your denial is in disagreement with Arny, who states: "The corresponding problem is that blind tests deal with this problem of false positives very effectively, but can easily produce false negatives."

Understanding ABX Test Confidence Statistics

Reply #98
I call this more intellectual dishonesty - the two tracks are of different songs, FFS - are you serious!!

You remind me so much of amir, it is scary. Here's my first attempt (aiming for speed) using the broken AIX test files: [...]

File A: Just_My_Imagination_A2.wav
File B: Just_My_Imagination_B2.wav
[...]
Total: 19/20

All I did was set the start position accordingly; the rest is listening and clicking the right buttons. No cheating. [...]


What is the small impairment difference between these two files? I'm not familiar with them.

Understanding ABX Test Confidence Statistics

Reply #99
This topic has been brought up before & it appears nothing has been learned or will be learned.


Maybe it takes another 5 years for you to get it?
"I hear it when I see it."