Topic: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluation

Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation

Reply #150
Don't forget the jitter! Because apparently hi-res, DSD and MQA somehow lessen jitter as well.

Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation

Reply #151
For the record, I have done dozens of DBTs and evaluated my own ability to differentiate between "cd-format" and "hi-res". The result was null, and it has been null for everyone I have seen testing it the same way.

Hell, considering that most people have trouble differentiating between reasonable-bitrate VBR LAME MP3 and a CD-quality lossless source, or even a "hi-res" one, I simply don't understand why people can seriously expect to hear a difference between CD quality and "hi-res".

It's all about science, which includes giving one heck of a try to experiments whose outcome you may strongly expect to be null.

Besides, tests like these can be among the easiest of all DBTs to do, even by yourself. That is, unless you get caught up in making your own hi-rez recordings from scratch, which I did as well.

At the time I didn't trust the commercial so-called hi-rez products, and thus avoided the conundrum that Meyer and Moran found themselves in 5 years later, when they made the mistake of believing the claims of a fraudulent segment of the audio industry.

The storm of undeserved abuse that they've taken from golden ears including Reiss shows us that blaming the victim is not at all beneath them.

Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation

Reply #152
I think the M&M paper made a very interesting unintentional point regarding the provenance of most so-called hi-res recordings, something that is far too often ignored in all the victim blaming.

So far, the only hi-res proponent I have the slightest bit of admiration for is Dr. Mark Waldrep, simply because he doesn't believe in voodoo, tries to debunk as much industry bullshit as he can, and is completely honest about not being able to hear a difference. His approach to hi-res seems to basically be "better safe than sorry: let's record as much of the sound as possible, even the stuff we can't hear, and we'll figure out if it actually matters later."

Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation

Reply #153
Quote
I think the M&M paper made a very interesting unintentional point regarding the provenance of most so-called hi-res recordings, something that is far too often ignored in all the victim blaming.

It is obvious that there was, and may still be, a common practice of selling recordings with reduced resolution as high resolution music and files. This seems deceptive and fraudulent. To the best of my knowledge, a goodly number of high resolution advocates and promoters are familiar with this, but they have shown no interest in holding any of the guilty parties responsible. Note that this practice was highly adverse to their stated goal of promoting high resolution audio.
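
As an aside, checking a purchase for this kind of thing is within almost anyone's reach. A minimal sketch, assuming SoX is installed and using hypothetical file names, that renders a spectrogram for eyeball inspection; energy that stops dead at ~22 kHz in a nominal 96 kHz file is a strong hint of an upsampled lower-resolution master:

Code:
import subprocess

def spectrogram(infile: str, outfile: str) -> None:
    # "-n" is SoX's null output: analyze the file without writing audio;
    # the spectrogram effect renders a PNG of the signal's spectrum.
    subprocess.run(["sox", infile, "-n", "spectrogram", "-o", outfile],
                   check=True)

# Hypothetical file names; a genuine hi-res recording can show energy
# above 22 kHz, while an upsample from a CD-resolution master cannot.
spectrogram("purchased_hires_96k.flac", "provenance_check.png")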

Quote
So far, the only hi-res proponent I have the slightest bit of admiration for is Dr. Mark Waldrep, simply because he doesn't believe in voodoo, tries to debunk as much industry bullshit as he can, and is completely honest about not being able to hear a difference. His approach to hi-res seems to basically be "better safe than sorry: let's record as much of the sound as possible, even the stuff we can't hear, and we'll figure out if it actually matters later."

In late 2014 Dr. Waldrep prepared some files for people to use to demonstrate the benefits of high resolution audio to themselves. This post on the AVS forum was part of the promotion of this effort: http://www.avsforum.com/forum/91-audio-theory-setup-chat/1598417-avs-aix-high-resolution-audio-test-take-2-a.html#post25638361

The linked post admits that the first generation of these files, distributed at Dr. Waldrep's request and with his cooperation, contained an audible "tell" in the form of a level mismatch that had nothing to do with high resolution audio.

The post indicates that there was a second generation of files correcting this problem, which I can confirm. What I don't see is any discussion of the second "tell", in the form of a variable channel timing mismatch that corrupted these second-generation files.

AFAIK Dr. Waldrep was made aware of this problem in writing, but I know of no action that was taken to correct the situation.

I personally ABXed some of these files and found that the "tell" was indeed audible. Again, problems like these run counter to the desires of any sincere advocate of high resolution audio, by making any positive results obtained by listening to these files possibly, even likely, due to the audible timing error.

Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation

Reply #154
Well darn, and here I thought he was one of the sole lights in the muck of audiophilia :-(

Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation

Reply #155
Quote
That raises, IMHO, the interesting question of which percentage/markup relation one should consider "justifying". Would 75% qualify, or is 100% needed? ;)
Isn't that the same question that Archimago asked, just put in other terms?

Quote
I'd recommend that everyone listen for himself to see if "hi-res" is useful and any mark-up justified.
Even if other people somewhere, somehow were able to differentiate between "cd-format" and "hi-res" 100% of the time, I wouldn't buy "hi-res" material without evaluating it myself.
That is an impractical and unrealistic proposal, as reasonable as it may look at first. I very much doubt that people will compare several different versions of the same titles before they buy (except for a few, obviously). If the kind of marketing that we are experiencing is any hint, then people will buy records predominantly based on what they believe, not what they hear.

Quote
That's the way it is. And it was the same when Meyer/Moran came out with their publication; it was just the other camp of believers.
The similarity you try to suggest doesn't go very far. Perhaps you should do some research first: did M&M (or the AES) issue a press release? Did they try to convey a different interpretation of their result than what they had written in their paper?

The media are of course what they are. Their simplifications go either way. The crucial point here is how the author contributes to this. Reiss differs rather a lot from M&M in this regard.

Quote
Everybody addressing criticism at Dr. Reiss's meta-analysis should reread his comments on Meyer/Moran and ask himself if he was nearly as critical back then. And the Meyer/Moran study was really seriously flawed; as I said back then, after reading it I didn't understand how it could pass the peer review process at the JAES.
As stated before, their hypothesis might nevertheless be true, but the validity of their study was questionable to a degree where no further conclusions are warranted.
I think the criticism against M&M is to a large extent unfair. While they certainly haven't produced a study that's beyond reproach, they have shown quite convincingly and conclusively what they set out to show: That the audiophile claims regarding the inadequacy of the CD format were not true. Their point wasn't a technical one about the CD format, but a check on the credibility of audiophile claims. As such, this result still stands and IMHO will continue to stand.

Reiss picks up a criticism from Jackson et al. when speculating about alleged "cognitive load" problems with ABX testing. Neither has shown anything here; it is mere speculation, and quite disingenuous speculation to boot. They actually seem to criticise a very old form of ABX testing, apparently unwilling to note that the problem they speculate about was addressed a long time ago. This sort of blinkered criticism almost inevitably raises suspicions of malicious intent.

Quote
I don't see that the "mass market" is really influenced by anything like this. Audiophiles are just a very small subgroup....
Can it have escaped you that HiRes is being introduced to the mass market right now? The marketing activities to this end are substantial. Perhaps you should leave your audiophile sandbox and look around.

Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation

Reply #156
Quote
I'd recommend that everyone listen for himself to see if "hi-res" is useful and any mark-up justified.
Even if other people somewhere, somehow were able to differentiate between "cd-format" and "hi-res" 100% of the time, I wouldn't buy "hi-res" material without evaluating it myself.
That is an impractical and unrealistic proposal, as reasonable as it may look at first. I very much doubt that people will compare several different versions of the same titles before they buy (except for a few, obviously). If the kind of marketing that we are experiencing is any hint, then people will buy records predominantly based on what they believe, not what they hear.

Never mind the sad fact that the masters used for the "hi-res" versions are often tweaked compared to the masters used for CDs, giving an audible difference that people will attribute to the format. They could use the exact same master for both versions, but often they don't, whether through incompetence or malicious intent.

The only way to do a proper evaluation is to take a hi-res source and downsample it to CD quality yourself, so you can be sure of the provenance. Not very many people have the skills or inclination to do this.

Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation

Reply #157
Quote
Reiss picks up a criticism from Jackson et al. when speculating about alleged "cognitive load" problems with ABX testing. Neither has shown anything here; it is mere speculation, and quite disingenuous speculation to boot. They actually seem to criticise a very old form of ABX testing, apparently unwilling to note that the problem they speculate about was addressed a long time ago. This sort of blinkered criticism almost inevitably raises suspicions of malicious intent.

All correct, and a situation that raises critical questions about Reiss's scholarship, his familiarity with the topic, how his own biases impacted his study, and his intellectual honesty.

It is a matter of fact that two significantly different audio listening test paradigms share the name ABX. This is covered in an AES public document, dated February 2015, linked here: http://www.aes.org/forum/?ID=416&c=2927. One clue to his lack of current scholarship is the presence of a reference to the Jackson paper in his footnotes, but apparent ignorance of the critical comments that were made about it on the AES forum a year or more before he published this paper.

The issue was further discussed by Stefan Heinzmann in December 2014, in the same paper comments linked previously:

"The criticism of the ABX test procedure that is offered in the introduction is poorly justified. The "cognitive load", as called by the authors, is entirely under the control of the listener in an ABX test, since the listener selects when to switch and what to switch to. There is no requirement to keep all three sounds in memory simultaneously, as criticised by the authors. Consequently, it is unclear what advantage the method chosen by the authors offers over an ABX test. Furthermore, the informal use of the term "cognitive load" seems to suggest tacitly, that a higher "load" is detrimental to the ability to distinguish between different sounds. I'm not aware of any study that confirms that. Indeed, one could just as easily suspect the opposite, namely that the availability of more sounds would increase this ability. Neither of those suggestions can of course be taken for granted. The authors shouldn't appeal to their interpretation of common sense when criticising a test method, and rely on testable evidence instead."

In March 2015 Amir Majidimehr wrote:

"Consequently, it is unclear what advantage the method chosen by the authors offers over an ABX test. "

So we have Reiss presenting what amount to negative rumors and speculation about a test procedure that was widely used in the studies he chose, without balancing them with other well-known comments that would lend much-needed objectivity and accuracy to his paper.

He shows similar bias in his paper here:

"3.3 How Does Duration of Stimuli and Intervals
Affect Results?

The International Telecommunication Union recommends
that sound samples used for sound quality comparison
should not last longer than 15–20 s, and intervals
between sound samples should be up to 1.5 s [78], partly
because of limitations in short-term memory of test subjects.
However, the extensive research into brain response
to high resolution content suggests that exposure to high
frequency content may evoke a response that is both lagged
and persistent for tens of seconds, e.g., [22, 48]. This implies
that effective testing of high resolution audio discrimination
should use much longer samples and intervals than the
ITU recommendation implies.
Unfortunately, statistical analysis of the effect of duration
of stimuli and intervals is difficult. Of the 18 studies suitable
for meta-analysis, only 12 provide information about
sample duration and 6 provide information about interval
duration, and many other factors may have affected the
outcomes. In addition, many experiments allowed test subjects
to listen for as long as they wished, thus making these
estimates very rough approximations.
Nevertheless, strong results were reported in Theiss
1997, Kaneta 2013A, Kanetada 2013B and Mizumachi
2015, which all had long intervals between stimuli. In
contrast, Muraoka 1981 and Pras 2010 had far weaker results
with short duration stimuli. Furthermore, Hamasaki
2004 reported statistically significant stronger results when
longer stimuli were used, even though participant and stimuli
selection had more stringent criteria for the trials with
shorter stimuli. This is highly suggestive that duration of
stimuli and intervals may be an important factor.
A subgroup analysis was performed, dividing between
those studies with stated long duration stimuli and/or long
intervals (30 seconds or more) and those that state only
short duration stimuli and/or short intervals. The Hamasaki
2004 experiment was divided into the two subgroups based
on stimuli duration of either 85–120 s or approx. 20 s
[62, 64].
The subgroup with long duration stimuli reported 57%
correct discrimination, whereas the short duration subgroup
reported a mean difference of 52%. Though the distinction
between these two groups was far less strong than when
considering training, the subgroup differences were still
significant at a 95% level, p = 0.04. This subgroup test also
has a small number of studies (14), and many studies in the
long duration subgroup also involved training, so one can
only say that it is suggestive that long durations for stimuli
and intervals may be preferred for discrimination.
[/quote]

As is pointed out by other paper comments, since ABX 1982 puts the switching control in the hands of the listener, issues related to length and order of presentation are under his control, and he is free to experiment with them to obtain his best possible results. IOW, it's a non-issue that he makes into an apparent big problem.

Quote
3.4 Effect of Test Methodology

There is considerable debate regarding preferred methodologies for high resolution audio perceptual evaluation. Authors have noted that ABX tests have a high cognitive load [11], which might lead to false negatives (Type II errors). An alternative, 1IFC same-different tasks, was used in many tests. In these situations, subjects are presented with a pair of stimuli on each trial, with half the trials containing a pair that is the same and the other half with a pair that is different. Subjects must decide whether the pair represents the same or different stimuli. This test is known to be “particularly prone to the effects of bias [79].” A test subject may have a tendency towards one answer, and this tendency may even be prevalent among subjects. In particular, a subtle difference may be perceived but still identified as ‘same,” biasing this approach towards false negatives as well.

We performed subgroup tests to evaluate whether there are significant differences between those studies where subjects performed a 1 interval forced choice “same/different” test, and those where subjects had to choose among two alternatives (ABX, AXY, or XY “preference” or “quality”). For same/different tests, the heterogeneity test gave I² = 67% and p = 0.003, whereas I² = 43% and p = 0.08 for ABX and variants, thus suggesting that both subgroups contain diverse sets of studies (note that this test has low power, and so more importance is given to the I² value than the p value, and typically, α is set to 0.1 [77]).

A slightly higher overall effect was found for ABX, 0.05 compared to 0.02, but with confidence intervals overlapping those of the 1IFC “same/different” subgroup. If methodology has an effect, it is likely overshadowed by other differences between studies.

First off, there is the very serious problem that Reiss has allowed to persist: we don't know which ABX test he is talking about when he mentions "ABX". It's not clear that he is aware that there are two different tests commonly referred to by the same name. This is a critical failing in someone who seeks to summarize a number of different listening tests that used or referred to these two different listening test methodologies.

Secondly, Reiss does not seem to understand that the 1985 ABX test is often used as a 1IFC test, which is mentioned by Amir Majidimehr in his paper comments from June 8:

" The first step in improving my results was ignoring Y.  Likewise the next improvement came from exactly the method used in this paper which was playing A, and playing X and immediately voting one way or the other.  Even as a trained listener, eliminating extra choices was critical for me to generate reliable results."

Thirdly, Reiss seems to assume that there was no listener training at all unless it is specifically mentioned. IME this is highly improbable, because at least some training has always been required for a listener to become productive at all.

Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation

Reply #158
Quote
I'd recommend that everyone should "listen" for himself to see if "hi-res" is useful
Of course you would, Jakob2
Loudspeaker manufacturer

Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation

Reply #159
Quote
I'd recommend that everyone should "listen" for himself to see if "hi-res" is useful
Of course you would, Jakob2
That is exactly the way the Fitzceraldos argue these days. As I mentioned before: 'I'm not saying it is audible, but it is audible'
Is troll-adiposity coming from feederism?
With 24bit music you can listen to silence much louder!

Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation

Reply #160
Quote
I'd recommend that everyone should "listen" for himself to see if "hi-res" is useful
Of course you would, Jakob2

If
Quote
"...everyone should "listen" for himself to see, if "hi- res" is useful
meant doing proper listening tests, I would of course agree.  However, not being born yesterday I strongly suspect what is meant is a sighted evaluation, which is worse than useless.

Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation

Reply #161
Quote
'I'm not saying it is audible, but it is audible'
Or, slightly more sophisticated: 'You may very well say it is audible, but I couldn't possibly comment.'

Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation

Reply #162
Quote
The only way to do a proper evaluation is to take a hi-res source and downsample it to CD quality yourself, so you can be sure of the provenance.

Agreed.

Quote
Not very many people have the skills or inclination to do this.

The skills and tools involve using some well known shareware such as Sox, which should not tax the abilities of a moderately bright high school student.

Having the inclination is a different thing, because one might find out some uncomfortable truths.
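
For anyone inclined to try, a minimal sketch of that downsampling step, assuming SoX is on the PATH; the file names are placeholders:

Code:
import subprocess

def to_cd_quality(hires_in: str, cd_out: str) -> None:
    # "rate -v" selects SoX's very-high-quality resampler; "dither" adds
    # TPDF dither appropriate for the 16-bit depth set by "-b 16".
    subprocess.run(["sox", hires_in, "-b", "16", cd_out,
                    "rate", "-v", "44100", "dither"],
                   check=True)

# The ABX-ready pair: the untouched hi-res original and your own
# known-provenance CD-quality derivative of the very same master.
to_cd_quality("master_24_96.flac", "master_16_44.flac")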

Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation

Reply #163
Quote
Not very many people have the skills or inclination to do this.

The skills and tools involve using some well known shareware such as Sox, which should not tax the abilities of a moderately bright high student.

Having the inclination is a different thing, because one might find out some uncomfortable truths.

Nevertheless, most people will consider it far too complicated to even consider trying. And I'm not talking about the general population here, I am talking about most music lovers as well.

Most people just want to "press button, get music".

And quite often this takes the form of a streaming service, so they are even further removed from file formats and compression and so on. They'll be happy with Spotify or Apple Music, or they'll consider themselves "crafty consumers" and subscribe to services like Tidal or WIMP Hifi, that offer lossless streaming, and MQA at some point, because they heard it's "better".

Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation

Reply #164
Quote
The skills and tools involve using some well known shareware such as Sox, which should not tax the abilities of a moderately bright high student.
Interestingly, and against intuition, I have found the less bright students need less pot, while the really smart ones use up all the ganja.
;-)

Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation

Reply #165
Quote
That is exactly the way the Fitzceraldos argue these days. As I mentioned before: 'I'm not saying it is audible, but it is audible'
I'm a 73 yr old elite hearing athlete, use ML large panel low efficiency planar speakers, with zero chance of either >16bit dynamic range or "hypersonic" capability...and I can "hear" High as a kite Re$ just fine. I don't drink no Koolaid like those audiophools either!
Whether fancy wires, magic caps, or Hi Re$ are even audible and worthwhile or not to you, with your own choice of music formats, is entirely up to you...not rational, objective, demonstrable science. Just "listen" and decide.
Loudspeaker manufacturer

Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation

Reply #166
Quote
In late 2014 Dr. Waldrep prepared some files for people to use to demonstrate the benefits of high resolution audio to themselves.
He seems like a reasonable guy, so I asked him to provide some tracks specifically for demonstrating the limitations of Redbook in a consumer environment, with a system capable of both >16bit dynamic range and "hypersonic" capability.
Stay tuned...
Loudspeaker manufacturer

Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation

Reply #167
Quote
elite hearing athlete
Nice! I never saw it as a sport till now.
Is troll-adiposity coming from feederism?
With 24bit music you can listen to silence much louder!

Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation

Reply #168
Quote
Interestingly, and against intuition, I have found the less bright students need less pot, while the really smart ones use up all the ganja.
;-)

The audiophile pot smokers I know can't be pried away from their vinyl...

Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation

Reply #169
Quote
The audiophile pot smokers I know can't be pried away from their vinyl...

That's because a double album makes for a nice surface to clean their pot on :-)

And also maybe because "oh man, it just goes around and around and around and around..."

Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation

Reply #170
Correct. Reiss's meta-study seems to fail on the grounds that the tests used to make it up are not consistent with each other. IOW it's not a collection of tests of apples, but rather a conflation of tests of just about every fruit and vegetable in the store.

That is what makes meta-analysis something more than the mere vetting of data ...

Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation

Reply #171
Quote
<snip>
Isn't that the same question that Archimago asked, just put in other terms?

I think not; at the webpage krabapple linked, he only expressed that 52.x% is too low to justify the markup. The difference from my question is obvious. He could instead have argued with the ~60% of trained listeners; would that justify it? Not to mention that these numbers are estimates of population parameters. Individuals within the population will most likely do both worse and better.

Quote
That is an impractical and unrealistic proposal, as reasonable as it may look at first. I very much doubt that people will compare several different versions of the same titles before they buy (except for a few, obviously). If the kind of marketing that we are experiencing is any hint, then people will buy records predominantly based on what they believe, not what they hear.

Nevertheless, it is what I recommend. In the same sense, I recommend first learning about the method of meta-analysis (systematic reviews) and the statistics, and carefully reading and analyzing what the authors of a meta-analysis have written and done, especially in a forum section that is proudly labeled "scientific discussion". Obviously most people do something completely different; they believe strongly in something, and everything they read is evaluated according to this belief structure: simply confirmation bias at work.
Nevertheless, I recommend ... ;)

Quote
The similarity you try to suggest doesn't go very far. Perhaps you should do some research first: did M&M (or the AES) issue a press release? Did they try to convey a different interpretation of their result than what they had written in their paper?

No, they didn't issue a press release; instead they used a sort of guerrilla marketing in forums to promote their publication. And they "oversold" (copyright pelmazo) their work right in the published article:
"Unlike the previous investigations, our tests were designed
to reveal any and all possible audible differences
between high-resolution and CD audio.... "
(E. Brad Meyer, David R. Moran; J. Audio Eng. Soc., Vol. 55, No. 9, 2007 September, page 776)

Quote
The media are of course what they are. Their simplifications go either way. The crucial point here is how the author contributes to this. Reiss differs rather a lot from M&M in this regard.

I can't agree on this point (just from a semantic point of view, provided the statistical analysis holds true).
The AES press release mentioned in its second paragraph the novelty of the analysis approach (which it is, AFAIK, in the audio field), and the numbers were reported correctly, as in the university's press release as well. Of course "training improves dramatically" reads dramatically at first, but as the "dramatic" increase _to_ 60% is reported directly in the same sentence ....

Quote
I think the criticism against M&M is to a large extent unfair.
"to a large extent" ? No, they received also unfair criticism, as Reiss does (look just at the comments in this thread), which is sad, especially if it is called "critic based on science" , but a lot of critique was justified.

Quote
While they certainly haven't produced a study that's beyond reproach, they have shown quite convincingly and conclusively what they set out to show: That the audiophile claims regarding the inadequacy of the CD format were not true. Their point wasn't a technical one about the CD format, but a check on the credibility of audiophile claims. As such, this result still stands and IMHO will continue to stand.

What you have stated above is a typical post-hoc hypothesis based on the actual results (reminiscent of the Texas sharpshooter fallacy), and it wasn't what they claimed to do (see the quote above); due to the serious flaws, further conclusions are highly questionable. As stated before, after reading the article I could not believe that it could have passed the review process, and that feeling was reinforced after reading the supplementary information on the BAS website. No accurate measurements, detection of a broken player (not mentioned in the JAES, AFAIR) without knowing the number of trials done in the broken state, no tracking of the music used, different numbers of trials for the listeners, no information about the number of trials done at each location, no follow-up although subgroup analysis showed some "suspicious" results, and so on.

Don't get me wrong: I have great respect for everybody doing this sort of experiment, because it is a lot of work, but OTOH, given the fact that there is a plethora of literature covering DOE and sensory tests, I don't understand why such simple errors, which could have been easily avoided, were still made.

Quote
Reiss picks up a criticism from Jackson et al. when speculating about alleged "cognitive load" problems with ABX testing. Neither has shown anything here; it is mere speculation, and quite disingenuous speculation to boot. They actually seem to criticise a very old form of ABX testing, apparently unwilling to note that the problem they speculate about was addressed a long time ago. This sort of blinkered criticism almost inevitably raises suspicions of malicious intent.

Is that what Reiss did, or is it a strongly biased interpretation? ;)
Reiss actually did what an author of such a study routinely does: he cites the literature and tries to find out whether any criticism is backed up by the data. Reiss's analysis did not show any significant impact of the test protocol, and so he reported "if methodology has an effect, it is likely overshadowed by other differences between studies"
(Joshua D. Reiss; J. Audio Eng. Soc., Vol. 64, No. 6, 2016 June, page 372)


Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation

Reply #172
I think the Big Issue is what ajinfla has touted repeatedly:

Quote
a small but important advantage in its quality

First, you should not use "important" when you mean "significant" (in the statistical sense). That is what you would expect from a dumb machine translating back and forth.
Second, you should not use "advantage" for "difference", as long as you have not ruled out audible adverse artifacts sub-22 kHz. That is what you would expect from a dumb machine interpreting TOS8.

Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation

Reply #173
Quote
<snip>
First, you should not use "important" when you mean "significant" (in the statistical sense).

To quote Dr. Reiss from the AES press release:
"Audio purists and industry should welcome these findings," said Reiss. "Our study finds high-resolution audio has a small but important advantage in its quality of reproduction over standard audio content. Trained listeners could distinguish between the two formats around sixty percent of the time."

52.3% would qualify as statistically significant but of very limited practical relevance; 60% is usually considered to be of practical relevance.
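
To make "statistically significant but practically marginal" concrete, a minimal sketch using the usual normal approximation to the binomial; the numbers are illustrative, not from Reiss's paper:

Code:
# One-sided test of a proportion against chance (0.5):
# z = (p_hat - 0.5) / sqrt(0.25 / n), so solving for the smallest n
# that reaches the critical z shows how many trials each score needs.
def trials_needed(p_hat: float, z_crit: float = 1.645) -> int:
    return int((z_crit * 0.5 / (p_hat - 0.5)) ** 2) + 1

print(trials_needed(0.523))  # ~1280 trials before 52.3% clears p < 0.05
print(trials_needed(0.60))   # ~68 trials suffice for 60%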

Quote
Second, you should not use "advantage" for "difference", as long as you have not ruled out audible adverse artifacts sub-22 kHz. <snip>

That is a valid concern; analysis will show to what extent.

Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation

Reply #174
Quote
[...] around sixty percent of the time.”

52.3% would qualify as statistically significant but of very limited practical relevance; 60% is usually considered to be of practical relevance.

This, I assume, mixes up two concepts. Statistical significance means that we have so many data points that we can be confident that this what-looks-like-sixty is not fifty. (Oh, that was a rough one.)

And if the actual number is indeed 52.3, it does not mean that there is no practical relevance. Not that I suggest that the situation is as follows; it is just for the sake of illustration:
If we have done so many tests that we have virtually no confidence interval around it, it could be that one in twenty samples had extreme artifacts. (That would mean that out of 100, there are 95 where you guess fifty-fifty, accumulating a score of 47.5, plus the 5 where you are nearly universally right.) I would call that relevant in practice.
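
Spelling out that arithmetic, purely as illustration:

Code:
# 95 clean samples guessed at chance, plus 5 flawed samples detected
# nearly every time, average out to a deceptively small-looking score.
fraction_flawed = 0.05   # one in twenty samples with an extreme artifact
p_detect_flawed = 1.0    # "nearly universally right" on those
p_guess = 0.5            # coin-flip on the rest

overall = (1 - fraction_flawed) * p_guess + fraction_flawed * p_detect_flawed
print(f"{overall:.1%}")  # 52.5% -- small, yet relevant in practice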