Topic: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluation

Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation

Reply #175
Quote
Reiss - these results imply that, though the effect is perhaps small and difficult to detect, the perceived fidelity of an audio recording and playback chain is affected by operating beyond conventional consumer oriented levels. Furthermore, though the causes are still unknown

He could have instead argued with the ~60% of trained listeners
Trained at hearing what?

No, they didn't issue a press release; instead they used a sort of guerrilla marketing in forums to promote their publication.
Step away from the ganja/crack pipe please Jakob2.
Loudspeaker manufacturer

Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation

Reply #176
To quote Dr. Reiss:
Quote
In summary, these results imply that, though the effect is perhaps small and difficult to detect but important advantage, the perceived fidelity of an audio recording and playback chain is affected quality of reproduction over standard audio content by operating beyond conventional consumer oriented levels. Furthermore, though the causes are still unknown, Trained listeners could distinguish between the two formats around sixty percent of the time, this perceived effect advantage can be confirmed with a variety of statistical approaches and it can be greatly improved through training.

FIFY



Loudspeaker manufacturer



Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation

Reply #179
If the CD-format is really transparent..
Which of the selected tests were for that question Jakob2 ?
It was Dr. Reiss's starting point, so why do you ask for selected tests?

Here's the logical starting point for Reiss's paper, its abstract:

"There is considerable debate over the benefits of recording and rendering high resolution
audio, i.e., systems and formats that are capable of rendering beyond CD quality audio.
We undertook a systematic review and meta-analysis to assess the ability of test subjects to
perceive a difference between high resolution and standard, 16 bit, 44.1 or 48 kHz audio. All
18 published experiments for which sufficient data could be obtained were included, providing
a meta-analysis involving over 400 participants in over 12,500 trials. Results showed a small
but statistically significant ability of test subjects to discriminate high resolution content,
and this effect increased dramatically when test subjects received extensive training. This
result was verified by a sensitivity analysis exploring different choices for the chosen studies
and different analysis approaches. Potential biases in studies, effect of test methodology,
experimental design, and choice of stimuli were also investigated. The overall conclusion
is that the perceived fidelity of an audio recording and playback chain can be affected by
operating beyond conventional levels."

I see no mention of tests of any media, CD or not, for transparency.  Perhaps, if such a thing exists, you could quote it from his paper?

Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation

Reply #180
[...] around sixty percent of the time.”

52.3% would qualify as statistically significant but of very limited practical relevance; 60% is usually considered to be of practical relevance.

This, I assume, mixes up two concepts. Statistical significance means that we have so many data points that we can be confident that this what-looks-like-sixty is not fifty. (Oh, that was a rough one.)

No. Statistically significant means, in our case, that the probability of getting the observed result by chance is lower than our predefined criterion (i.e. the level of significance).
Having a big sample size means that smaller differences become statistically significant; the usual wisdom says that every experiment gives a significant result provided the sample size is big enough.
That's why we should differentiate between statistical significance and practical relevance.

Everything else remaining the same, a bigger sample size means the confidence interval narrows, as the variance gets lower.
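The significance-vs-sample-size point can be illustrated numerically. The sketch below uses a one-sided normal approximation to the binomial test; the numbers are purely illustrative (not data from Reiss's paper) and show how the same observed hit rate, e.g. 52.3% against the 50% chance level, moves from "not significant" to "highly significant" purely as the trial count grows:

```python
# Illustrative sketch (not data from Reiss's paper): the same 52.3% hit
# rate tested against 50% chance at several sample sizes, using a
# one-sided normal approximation to the binomial test.
from math import sqrt, erf

def one_sided_p(successes: int, trials: int, p0: float = 0.5) -> float:
    """Approximate one-sided p-value for H0: p = p0 (normal approximation)."""
    z = (successes / trials - p0) / sqrt(p0 * (1 - p0) / trials)
    return 1 - 0.5 * (1 + erf(z / sqrt(2)))

for n in (100, 1000, 12500):
    k = round(0.523 * n)
    print(f"n={n:6d}  {k}/{n} correct  p={one_sided_p(k, n):.2g}")
```

With 100 trials, 52.3% is far from significant; with 12,500 trials (the order of magnitude pooled in the meta-analysis), it is significant at any conventional level, which is exactly why statistical significance alone says nothing about practical relevance.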

Quote
And if the actual number is indeed 52.3, it does not mean that there is no practical relevance.

That's why I wrote of "limited practical relevance" ....

Quote
<snip> I would call that relevant in practice.
Leaving aside for the moment what I suppose is a misunderstanding of confidence intervals and levels of significance, the meaning of "practical relevance" is a different one.
It means that a difference is relevant in practical terms of usage in everyday life.

60% compared to 50% is usually considered to be of practical relevance, hence (I assume) Dr. Reiss used the word "important" and not only "statistically significant".

Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation

Reply #181
To quote Dr. Reiss:
"Yet many believe that this standard quality audio is sufficient
to capture all perceivable content from live sound.
This question of perception of high resolution audio has
generated heated debate for many years. Although there
have been many studies and formal arguments presented in
relation to this, there has yet to be a rigorous analysis of the literature. "

(Joshua D. Reiss; J. Audio Eng. Soc., Vol. 64, No. 6, 2016 June, page 364)

and
"Dr Reiss explained:  “One motivation for this research was that people in the audio community endlessly discuss whether the use of high resolution formats and equipment really make a difference. Conventional wisdom states that CD quality should be sufficient to capture everything we hear, ...."

(QMUL press release; http://www.qmul.ac.uk/media/news/items/se/178407.html)


Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation

Reply #182
To quote Dr. Reiss:
"Yet many believe that this standard quality audio is sufficient
to capture all perceivable content from live sound.
This question of perception of high resolution audio has
generated heated debate for many years. Although there
have been many studies and formal arguments presented in
relation to this, there has yet to be a rigorous analysis of the literature. "

(Joshua D. Reiss; J. Audio Eng. Soc., Vol. 64, No. 6, 2016 June, page 364)

and
"Dr Reiss explained:  “One motivation for this research was that people in the audio community endlessly discuss whether the use of high resolution formats and equipment really make a difference. Conventional wisdom states that CD quality should be sufficient to capture everything we hear, ...."

(QMUL press release; http://www.qmul.ac.uk/media/news/items/se/178407.html)

All perceivable content from live sound is not the same thing as sonic transparency. You can hear all perceivable content, but if it is colored in any way, the timing is off, or there are additional signals that were not in the original sound, transparent reproduction remains elusive.

Reiss says:  "This question of perception of high resolution audio has generated heated debate for many years."

The audio may be perceived, but still be different from the original source.

Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation

Reply #183
"Dr Reiss explained:  “One motivation for this research was that people in the audio community endlessly discuss whether the use of high resolution formats and equipment really make a difference. Conventional wisdom states that CD quality should be sufficient to capture everything we hear, ...."

The question he says he seeks to answer is whether or not there is a difference between the alleged high resolution recording and the CD. He's saying he is interested in any difference, not just differences in the direction of greater accuracy.

This is a mistake that the audiophiles he seeks to please make all the time. They don't use reliable absolute references, and are pleased with any difference, on the presumption that if it is different it has to be better.

Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation

Reply #184
To quote Dr. Reiss:
"Yet many believe that this standard quality audio is sufficient to capture all perceivable content from live sound.
This question of perception of high resolution audio has generated heated debate for many years. Although there
have been many studies and formal arguments presented in relation to this, there has yet to be a rigorous analysis of the literature. "
and
"Dr Reiss explained:  “One motivation for this research was that people in the audio community endlessly discuss whether the use of high resolution formats and equipment really make a difference. Conventional wisdom states that CD quality should be sufficient to capture everything we hear, ...."
Careful your hands don't fall off waving them that hard, Jakob2.

Jakob2, what tests were for Redbook transparency and where was any "advantage" found, trained or not, hearing what exactly?
What is the "advantage" of Hi-Re$ ?
Loudspeaker manufacturer

Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation

Reply #185
I think not at the webpage krabapple linked, where he only expressed that 52.x% is too low to justify the markup. The difference to my question is obvious.
Archimago's question is: Is it worth it. You ask the same, and merely play with the actual numbers. If that is the obvious difference, fine.

Quote
Nevertheless it is what i recommend.
Fine, too. You're free to recommend what you want.

Quote
No, they didn't issue a press release; instead they used a sort of guerrilla marketing in forums to promote their publication.
That's a grave accusation to make without any evidence.

Quote
And they "oversold" (copyright pelmazo) their work right in the published article.
"Unlike the previous investigations, our tests were designed
to reveal any and all possible audible differences
between high-resolution and CD audio.... "
(E. Brad Meyer, David R. Moran; J. Audio Eng. Soc., Vol. 55, No. 9, 2007 September, page 776)
That looks more swanky when ripped out of context than in the paper. It is not difficult to figure out what they wanted to say if you read it with a slightly cooler mind.

Quote
I can't agree at this point (just from a semantic point of view, provided the statistical analysis holds true).
The AES press release mentioned in the second paragraph the novelty of the analysis approach (which it is, AFAIK, in the audio field), and the numbers were reported correctly, as in the university's press release. Of course "training improves dramatically" reads dramatic at first, but as the "dramatic" increase _to_ 60% is reported directly in the same sentence ....
I don't understand how this is an answer to what I wrote, and I don't understand what you want to say (other than not agreeing with me).

Quote
No, they also received unfair criticism, as Reiss does (look just at the comments in this thread), which is sad, especially if it is called "criticism based on science", but a lot of the critique was justified.
A lot of criticism relies on selective reading, goalpost shifting, strawman attacks and the like. But that's worth a different topic and shouldn't be repeated here.

Quote
What you have stated above is a typical post-hoc hypothesis based on the actual results (reminiscent of the Texas sharpshooter fallacy), and it wasn't what they purported to do (see the quote above); and, due to the serious flaws, further conclusions are highly questionable.
Their motivation and aim is quite clear when one reads their whole text instead of cherry picking convenient snippets. They start with the claims of superiority of SACD and DVD-A over the CD and set out to check them with controlled blind tests. Their aim is to cover the whole range of potential effects arising from the technical differences between the media, rather than focusing on a particular aspect like the wordlength. I don't see a problem with this, except perhaps with the exact wording in a few places.

Their method is largely appropriate for what they want to achieve. Specifically, testing a medium through a direct path vs. the same medium through a restricted path is the most logical and straightforward way to do this, because it removes all factors that could be attributed to different source material. Moreover, it is simple enough to be used by ordinary people who want to know for themselves whether they hear a difference between CD format and higher formats.

And it is worth a reminder that the proper way of fixing the flaws in a test is to run another test with the flaws fixed. In the ten years since the M&M tests were run, no audiophile interest group seems to have countered with anything even remotely appropriate. Isn't that telling something, too? Perhaps how cheap the criticism is in comparison to running a credible and convincing test? Who would be in a better position to do this than those who pretend to know what to listen for, what to listen with, how to test and how to evaluate it? Wouldn't that be much better than such a meta analysis?

Quote
As stated before, after reading the article I could not believe that it could have passed the review process, and that feeling was reinforced after reading the supplementary information on the BAS website. No accurate measurements, detection of a broken player (not mentioned in the JAES, AFAIR) without knowing the number of trials done in the broken state, no tracking of the music used, different numbers of trials per listener, no information about the number of trials done at each location, no follow-up although subgroup analysis showed some "suspicious" results, and so on.
Before foaming at the mouth over this, it might be worthwhile checking what potential these alleged flaws had to corrupt the result. There's hardly a study that couldn't be criticised in a similar way if only put under a similar amount of scrutiny. You know this better than anyone else: I have known your talent and resolve for arguing the oxygen out of the air, when you don't like the conclusion, for years now.

But I agree that the AES review process leaves something to be desired. It shows in Reiss' paper, too. Apart from the problems with the content and its interpretation, what caught my eye quite quickly was the inconsistent way of referencing literature. Some of it is in the traditional JAES style using reference numbers, some is using a name and year style as is more typical of textbooks. Surely this would have registered in a review, if I notice it within minutes?

Quote
Don´t get me wrong- i have great respect for everybody doing this sort of experiment, because it is a lot of work, but otoh - given the fact that there is a plethora of literature covering DOE and sensory tests - i don´t understand why such simple errors, which could have been easily avoided, were still made.
It is especially difficult and laborious if you have to brace and defend your work against even the pettiest and defeatist objections that might appear.

Quote
Reiss actually did what an author of such a study routinely does: he cites the literature and tries to find out if any criticism is backed up by the data. Reiss's analysis did not show any significant impact of the test protocol, and so he reported "if methodology has an impact, it is likely overshadowed by other differences between studies"
(Joshua D. Reiss; J. Audio Eng. Soc., Vol. 64, No. 6, 2016 June, page 372)
He certainly appears to have done that very selectively. He references and uses quite a number of papers without taking notice of the sometimes very substantial critique and debate they have attracted. For example take the papers by Kunchur.

For some reason, the M&M study is the only exception, and here he shows not only that he is aware of the debate and where it took place, he also picks exclusively the negative points. If he had done what you say in the usual impartial way, I wouldn't have a bone to pick. But alas, that's not how it turned out. It will be hard for him to shake off suspicions of bias this way.

Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation

Reply #186
Is it what Reiss did, or is it a strongly biased interpretation? ;)

It was as biased pro high rez as Reiss presumably thought he could get away with. The major changes between the actual paper and his press release that have been pointed out, magnifying the pro-high-rez evidence, show that quite clearly.

Quote
Reiss actually did, what an author of such a study routinely does, he cites the literature and tries to find out if any criticism is backed up by the data.

As I showed in another post, that is most definitely what Reiss didn't do.

Many of the pro-high rez test results he used have been thoroughly criticized, and in many cases quite effectively. You'd never know it from Reiss's paper.

In contrast Reiss used rumor and speculation to criticize studies that didn't support high rez as thoroughly as he presumably wanted.


Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation

Reply #187
<snip>
All perceivable content from live sound is not the same thing as sonic transparency. You can hear all perceivable content, but if it is colored in any way, the timing is off, or there are additional signals that were not in the original sound, transparent reproduction remains elusive.

Normally I'd say agree to disagree, especially given the context, but maybe I'm missing something.


Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation

Reply #188
<snip>
Nevermind the sad fact that the masters used for the "hi-res" versions are often tweaked compared to the masters used for CDs, giving an audible difference that people will attribute to the format. They could use the exact same master for both versions, but often they don't, either due to incompetence or malicious intent.

The only way to do a proper evaluation is to take a hi-res source and downsample it to CD quality yourself, so you can be sure of the provenance. Not very many people have the skills or inclination to do this.
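For anyone who does want to try this, the usual tool is SoX; a minimal sketch, with placeholder filenames ("hires.flac" and "cd_quality.flac" are illustrative, not from any post above), assuming SoX with FLAC support is installed:

```shell
# Downsample a hi-res FLAC to CD quality for a same-master comparison:
# high-quality rate conversion to 44.1 kHz, then dither down to 16 bit.
# "hires.flac" / "cd_quality.flac" are placeholder names.
sox hires.flac -b 16 cd_quality.flac rate -v 44100 dither
```

Comparing the result blind against the original then tests the format difference alone, with the mastering held constant.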

But, if one compares and the "hi-res" version is better, he might think the markup is justified.
It doesn't help that the same quality could have been offered on a CD, if that doesn't happen. The customer is only able to choose what is offered.

If the "malicious intent" is indeed true, that should be criticized, but please based on facts gathered during a serious investigation.

Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation

Reply #189
But, if one compares and the "hi-res" version is better, he might think the markup is justified.
It doesn't help that the same quality could have been offered on a CD, if that doesn't happen. The customer is only able to choose what is offered.

If the "malicious intent" is indeed true, that should be criticized, but please based on facts gathered during a serious investigation.
Which of Reiss's selected papers had anything to do with end user content and what does the paper have to do with end user content?
Jakob2, each of your evasions of such questions provides the answers.
Loudspeaker manufacturer

 

Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation

Reply #190
But, if one compares and the "hi-res" version is better, he might think the markup is justified.
It doesn't help that the same quality could have been offered on a CD, if that doesn't happen. The customer is only able to choose what is offered.
If the HiRes version is indeed produced to higher standards, the markup may be justified. The same business model was already used many years ago with the CD, where you could sometimes buy improved versions for a markup (i.e. for a different mastering). This also shows that this possibility doesn't depend on a different format, improved versions can be provided independently from the format used.

I don't think such practices show malicious intent or incompetence. As long as they communicate the facts correctly, they enrich the consumer's choice. I don't see how anybody can object to this.

However, HiRes proponents seem to want to get to a point where consumers believe that the better quality depends on the HiRes format, in other words they work to establish the misconception that there is a direct relationship between perceived quality and the format. Once established, this misconception can be exploited to sell material in a HiRes format with a markup, even when it doesn't offer any quality advantage. At this point the customer is being duped into paying more for the same.

While this may be dismissed as speculation, there are increasingly convincing indications that this is actually happening. Effectively, the HiRes proponents are preparing the ground for this swindle, whether they are aware of it or not, whether they support it or not.

I believe that if we don't fight this attempt at deception, the entire pro audio profession will be affected by a credibility backlash, no matter whether guilty or not.

Quote
If the "malicious intent" is indeed true that should be critized but please based on facts sampled during a serious investigation.
Intent can only rarely be proven. We have to work on the basis of what people do and what positions they support.

Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation

Reply #191
No, they didn't issue a press release; instead they used a sort of guerrilla marketing in forums to promote their publication.

If the "malicious intent" is indeed true, that should be criticized, but please based on facts gathered during a serious investigation.
You do know crack killed Applejack....right???
Loudspeaker manufacturer

Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation

Reply #192
<snip>
Archimago's question is: Is it worth it. You ask the same, and merely play with the actual numbers. If that is the obvious difference, fine.
Archimago first asks if the difference is worth the markup; and secondly he asks if a 52.3% accuracy rate in a research setting sounds like a valuable proposition to grab "hi-res".
This I addressed (as said before, the consumer is only able to compare what is offered, and to buy or not to buy), asking which "accuracy rate" would justify ......

Quote
That looks more swanky when ripped out of context than in the paper. It is not difficult to figure out what they wanted to say if you read it with a slightly cooler mind.

The same holds true for the press releases for Reiss's meta-analysis.

Quote
A lot of criticism relies on selective reading, goalpost shifting, strawman attacks and the like. But that's worth a different topic and shouldn't be repeated here.

If the same people comment on similar issues in a very different way it should be mentioned here.

Quote
Their motivation and aim is quite clear when one reads their whole text instead of cherry picking convenient snippets.
"Convenient snippets" in the case of "overselling". I don't care so much about this topic if the same degree of "good will in interpretation" is applied in each case.

Quote
They start with the claims of superiority of SACD and DVD-A over the CD .......

Last time you said they addressed "audiophile claims", but in fact they didn't really specify what their target was. At least Meyer/Moran separated recording engineers claiming superiority from audiophiles claiming "whatever, still to be specified" ....

Quote
......and set out to check them with controlled blind tests.

And as usual, if no research hypothesis is clearly specified, the test regime (and level of control) isn't as good as it should be.

Quote
Their aim is to cover the whole range of potential effects arising from the technical differences between the media, rather than focusing on a particular aspect like the wordlength. I don't see a problem with this, except perhaps with the exact wording in a few places.

That you don't see a problem is a bit surprising, because you expressed strong concerns (even invalidity) with respect to Reiss's meta-analysis, as he did not focus on a "particular aspect like the wordlength" ........

Notwithstanding that they (Meyer/Moran) combined wordlength and sampling frequency effects, they did not check if any "enhancement" was delivered at all.

Quote
Their method is largely appropriate for what they want to achieve.

It depends on the hypothesis; if it was the vague sort of "audiophile claims" that you mentioned the last time, it must have been a claim like "if a disc is labelled as hi-res it will, under all circumstances, by all listeners, at all times, be perceived as better than a version downsampled to CD quality".

Because they did not check for "hi-res-ness", they did not really check the quality of reproduction, they did not provide positive controls, they did not really track which music was used in the trials, and so on.
And they mostly used a 10-trials-per-listener approach, which is surprising, because they should have known, at least since Leventhal's articles about power, that a small number of trials is accompanied by a large risk of Type II error.
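Leventhal's power objection can be made concrete with an exact binomial calculation. This is an illustrative sketch (the numbers are not from Meyer/Moran's data): with 10 trials and the usual one-sided 5% criterion, a listener must score 9 of 10 to "pass", and a listener with a genuine 60% hit rate passes only about 4.6% of the time:

```python
# Illustrative power calculation in the spirit of Leventhal (not
# Meyer/Moran's data): exact binomial tails for a 10-trial listening
# test with a one-sided 5% significance criterion.
from math import comb

def binom_tail(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n = 10
# Smallest number correct that "passes" at the 5% level under H0: p = 0.5.
k_crit = next(k for k in range(n + 1) if binom_tail(k, n, 0.5) <= 0.05)
# Power: the chance that a listener with a true 60% hit rate passes.
power = binom_tail(k_crit, n, 0.6)
print(k_crit, round(power, 3))  # 9 of 10 needed; power is about 0.046
```

In other words, the Type II error rate for such a listener exceeds 95%, so a string of "no difference" outcomes from 10-trial runs is almost guaranteed even if a modest real effect exists.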

Quote
Specifically, testing a medium through a direct path vs. the same medium through a restricted path is the most logical and straightforward way to do this, because it removes all factors that could be attributed to different source material. Moreover, it is simple enough to be used by ordinary people who want to know for themselves whether they hear a difference between CD format and higher formats.

I did not criticize their "path choice", but that they did not provide thorough measurements before starting the experiment and did not check routinely during the experiment.

Quote
And it is worth a reminder that the proper way of fixing the flaws in a test....
Sorry, but first of all the flaws must be mentioned, because bad science is bad science. It harms the reputation of science if these methodological flaws are belittled.
What you proposed would be a real "cheap get out of jail" approach ;)

Quote
....is to run another test with the flaws fixed. In the ten years since the M&M tests were run, no audiophile interest group seems to have countered with anything even remotely appropriate. Isn't that telling something, too? Perhaps how cheap the criticism is in comparison to running a credible and convincing test? Who would be in a better position to do this than those who pretend to know what to listen for, what to listen with, how to test and how to evaluate it? Wouldn't that be much better than such a meta analysis?

Mhm, further research is recommended? Methinks Dr. Reiss did not just recommend further research, but could, based on his meta-analysis, give some advice on achieving better quality in experiments. :)

Quote
Before foaming at the mouth over this, it might be worthwhile checking what potential these alleged flaws had to corrupt the result. There's hardly a study that couldn't be criticised in a similar way if only put under a similar amount of scrutiny. You know this better than anyone else: I have known your talent and resolve for arguing the oxygen out of the air, when you don't like the conclusion, for years now.

What you forgot to mention is that I did criticize experiments with the same scrutiny even though (according to your assumptions) I should have liked the results. And you should have mentioned that I routinely recommended blind home listening experiments for any listener to learn about his perception, and gave a lot of advice on improving the quality of listening experiments.

And you should have noticed that I (nearly always) did emphasize, besides all criticism, that Meyer/Moran's hypothesis ("hi-res" does not offer a perceivable difference or advantage compared to CD quality) might be true.

Quote
But I agree that the AES review process leaves something to be desired......
Something that you forgot to mention provided you liked the published findings (see for example Meyer/Moran, or did I miss it?) ...

Quote
It shows in Reiss' paper, too.

I'm sorry, but the flaws in Meyer/Moran's experiment (even more so if the additional information is considered) are evident just from reading, provided the reviewer has any experience in DOE. No complicated analysis was needed to realise that.
In the case of Reiss's analysis, a reviewer must IMHO do a lot of work to find something.

Quote
Apart from the problems with the content and its interpretation, what caught my eye quite quickly was the inconsistent way of referencing literature. Some of it is in the traditional JAES style using reference numbers, some is using a name and year style as is more typical of textbooks. Surely this would have registered in a review, if I notice it within minutes?

Do we really qualify questions of style as equally important as methodological flaws??

Quote
It is especially difficult and laborious if you have to brace and defend your work against even the pettiest and defeatist objections that might appear.

You mean something like "he shouldn't have used 'important' but 'significant' instead", or "his press release overemphasized this or that", or he (maybe) didn't notice that others criticized an older variant of "ABX"?
Isn't the fact that we in this thread mainly discuss semantics instead of real errors in the analysis telling?
One poster even mentioned Leventhal's publication in the JAES because he felt that Dr. Reiss might have given the impression of being the first one to talk about the importance of Type II errors!?

Quote
He certainly appears to have done that very selectively. He references and uses quite a number of papers without taking notice of the sometimes very substantial critique and debate they have attracted. For example take the papers by Kunchur.

Sorry, pelmazo, you mentioned a specific issue and I addressed that specific issue. Please don't mix up this specific case with others.

Quote
For some reason, the M&M study is the only exception, and here he shows not only that he is aware of the debate and where it took place, he also picks exclusively the negative points.

Shouldn't you complain instead that he was unfair to Kunchur? Without further discussion he excluded his test results from his meta-analysis. :)

He explained why Meyer/Moran got detailed remarks and why their result couldn't be used generally but only in parts of the analysis. Nothing wrong with that, and he even found encouraging words (at least in my opinion), stating:
"However, their experiment
was intended to be close to a typical listening experience
on a home entertainment system, and one could argue
that these same issues may be present in such conditions."

Quote
If he had done what you say in the usual impartial way, I wouldn't have a bone to pick. But alas, that's not how it turned out. It will be hard for him to shake off suspicions of bias this way.

Maybe you just want him to be biased.
He could have expressed much stronger critique, but didn't; and you should be able to argue precisely where his reasoning for not including their results is wrong.

Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation

Reply #193
based on his meta-analysis give some advice to achieve better quality in experiments. :)
It's been over 20 years. When can we expect this "better quality in experiments" from the believers/peddlers of Hi-Re$? Who is burdened with such proof?
Loudspeaker manufacturer

Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation

Reply #194
Sorry, pelmazo, you mentioned a specific issue and I addressed that specific issue. Please don't mix up this specific case with others.
Jakob2, since you, umm, "understand Kunchur", please explain how his test demonstrated a lack of transparency of Redbook music and thus the need for Hi-Re$.
As always, your evasion provides answers.
Loudspeaker manufacturer

Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation

Reply #195
If the HiRes version is indeed produced to higher standards, the markup may be justified. The same business model was already used many years ago with the CD, where you could sometimes buy improved versions for a markup (i.e. for a different mastering). This also shows that this possibility doesn't depend on a different format, improved versions can be provided independently from the format used.

The issue I have is that studios were previously willing to do things like record in DSD or 20+ bits and then sell me a nicely decimated and noise-shaped 16-bit rendition on actual physical media with a printed booklet for $10-15. Now they want to forgo the media and the printing and charge me $20-30 for a master they haven't deliberately screwed up for whatever reason. Stereo "hi-res" products should arguably cost *less* than CDs did, but when people buy into "bigger numbers sound better", the opposite ends up true.

Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation

Reply #196
The issue I have is that studios were previously willing to do things like record in DSD or 20+ bits and then sell me a nicely decimated and noise-shaped 16-bit rendition on actual physical media with a printed booklet for $10-15. Now they want to forgo the media and the printing and charge me $20-30 for a master they haven't deliberately screwed up for whatever reason. Stereo "hi-res" products should arguably cost *less* than CDs did, but when people buy into "bigger numbers sound better", the opposite ends up true.
Ahh but you see, with the DSD or 20+ bits, you get the "artists intent".
With 16/44 you get a "smeared" mess.
Hence more $$
Loudspeaker manufacturer

Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation

Reply #197
Archimago first asks if the difference is worth the markup; and secondly he asks whether a 52.3% accuracy rate in a research setting sounds like a valuable proposition to grab "hi-res".
This I addressed (as said before, the consumer is only able to compare what is offered and to buy or not to buy), asking which "accuracy rate" would justify ......
If the consumer based his purchase decision only on his own comparison of the different offerings, there would be no need for him to read any studies or meta-studies. We both know this, and we can assume Archimago knows it, too. I have noticed your recommendation, which I have already commented on.

Archimago's question therefore is relevant for those (the majority, IMHO) who are influenced by what they perceive as established "wisdom of the educated". The question, then, is whether a study with such a narrow outcome should play any significant role in a buying decision, and I understood your question as pretty much the same, only looking at it from the other direction: namely, whether a study with a more clear-cut result should play a role, and from what point on.

Either way, the question's purpose is obviously not to provoke a universal answer, but to make the reader consider where his/her confidence level would be.
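To make the confidence-level point concrete: whether a hit rate like 52.3% counts as "evidence" depends almost entirely on how many trials sit behind it. A quick sketch (the trial counts below are invented for illustration, not taken from Reiss' pooled data):

```python
import math

def two_sided_p(successes: int, trials: int, p0: float = 0.5) -> float:
    """Two-sided p-value for a binomial proportion (normal approximation)."""
    phat = successes / trials
    se = math.sqrt(p0 * (1 - p0) / trials)
    z = abs(phat - p0) / se
    return math.erfc(z / math.sqrt(2))  # P(|Z| >= z) for a standard normal

# The same 52.3% hit rate, at two hypothetical sample sizes:
p_pooled = two_sided_p(5230, 10_000)  # large pooled count: tiny p-value
p_single = two_sided_p(52, 100)       # one small study: no significance
```

The point being that statistical significance of a pooled 52.3% says nothing by itself about whether the effect is big enough to matter for a buying decision.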

But this is really just the type of hair-splitting argument which you are so fond of.

Quote
The same holds true for the press releases for Reiss´s meta-analysis.
No, even when reading the entire press release the message remains the same: this is the study the industry and audiophiles were waiting for, and it confirms their position. The reception of the press release I have seen across the internet picks up this message almost invariably. That's not their fault or distortion; that's exactly the gist of Reiss' message. It is not the result of his study, however.

It certainly doesn't look like an accident. Reiss isn't naïve, he knows what he's doing, I'm sure. One has to assume that the message as it was being picked up was pretty much the message he wanted to send.

Quote
I don´t care so much about this topic if the same degree of "good will in interpretation" is applied in each case.
If, as is often the case, one faces the choice whether to assume malice or incompetence as the reason for an act, the saying goes that one should choose incompetence. I don't think, however, that this is always the interpretation that represents the "better will". ;)

Quote
Last time you said they addressed "audiophile claims", but in fact they didn't really specify what their target was. Did Meyer/Moran at least separate recording engineers claiming superiority from audiophiles claiming "whatever is still to be specified" ....
I don't think that this distinction matters much in the context we're in here.

Quote
And as usual if no research hypothesis is clearly specified the test regime (and level of control) isn´t as good as it should be.

Quote
That you don't see a problem is a bit surprising, because you expressed strong concerns (even invalidity) wrt Reiss's meta-analysis, as he did not focus on a "particular aspect like the wordlength" ........
The problem appears at the very moment when significance is found. Then, you would wish to know which particular aspect is responsible for the perceived difference. It is also of particular relevance in a meta analysis, because of their inherent sensitivity to "comparing apples with oranges".

Perhaps it surprises you (which I'm not sure of; your surprise may be a purely rhetorical device), but I don't find reason for surprise here. It is actually quite simple: if a study that tests all aspects (wordlength, sample rate) at once doesn't find any significance, none of the individual aspects has been shown to have significance. If significance is found, however, you don't know very much, because you still need to identify the reason. Had the M&M test turned out differently, they would have had that problem, but as we all know, that's not how it turned out. Reiss has the problem, and because his study is a meta-study he has it already when choosing the base studies.
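For what it's worth, the "apples and oranges" sensitivity is visible in the standard fixed-effect pooling machinery itself: each study enters with an inverse-variance weight, so the pool is dominated by the largest studies, whatever they happened to test. A minimal sketch (the study counts are made up; real meta-analyses, Reiss' included, use more elaborate models):

```python
import math

def fixed_effect_pool(studies):
    """Inverse-variance (fixed-effect) pooling of per-study hit rates.

    studies: iterable of (correct, trials) pairs.
    Returns (pooled_rate, standard_error).
    """
    num = den = 0.0
    for correct, trials in studies:
        rate = correct / trials
        var = rate * (1 - rate) / trials  # binomial variance of the rate
        weight = 1.0 / var                # bigger studies -> bigger weight
        num += weight * rate
        den += weight
    return num / den, math.sqrt(1.0 / den)

# Three hypothetical studies of very different size (and possibly design):
pooled, se = fixed_effect_pool([(60, 100), (210, 400), (505, 1000)])
```

If those three studies actually tested different things (wordlength, sample rate, both at once), the pooled number is a weighted average over incommensurable questions, which is exactly the apples-and-oranges concern.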

Quote
Notwithstanding that they (Meyer/Moran) combined wordlength and sampling frequency effects, they did not check if any "enhancement" was delivered at all.
If no difference is perceived, the question is moot whether there was an enhancement.

Quote
It depends on the hypothesis; if it was the vague sort of "audiophile claims" that you mentioned last time, it must have been a claim like "if a disc is labelled as hi-res, it will under all circumstances, by all listeners, at all times, be perceived as better than a version downsampled to CD quality".
The discs used seem to have been examples of discs which audiophiles claimed to be audibly better than CD. The claims haven't been picked out of thin air by M&M.

And do I really have to rebuke your blatant exaggeration?

Quote
Because they did not check for "hi-res ness", they did not really check the quality of reproduction, they did not provide positive controls, they did not really track which music was used in the trials, and so on.
And they mostly used a 10-trials-per-listener approach, which is surprising, because they should have known, at least since Leventhal's articles about power, that a small number of trials is accompanied by a large risk of a Type II error.
It is still very unlikely that, had there really been clearly audible differences between the original and the CD-downsampled version, they would have slipped through.
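Leventhal's power argument is easy to verify with an exact binomial calculation. A small sketch (assuming a one-sided 5% criterion and, hypothetically, a listener who really gets 70% of trials right):

```python
from math import comb

def binom_sf(k: int, n: int, p: float) -> float:
    """Upper-tail probability P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n, alpha = 10, 0.05
# Smallest number of correct answers whose probability under pure
# guessing (p = 0.5) is still <= alpha.
crit = next(k for k in range(n + 1) if binom_sf(k, n, 0.5) <= alpha)
power = binom_sf(crit, n, 0.7)  # chance a 70%-able listener passes
type2 = 1 - power               # risk of missing that real ability
```

With only 10 trials the listener must score 9 or 10 to reach significance, so even a genuine 70% ability is missed roughly 85% of the time; that is the Type II risk Leventhal warned about.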

Besides, how do you check for "hi res ness" if people (claimants) have varying notions of what this means? Which definition should you pick? How would you test? M&M did the sensible thing: They avoided the question by taking material that was being presented to them as being hi res. The fact that some of it wasn't, according to some people's definition, shouldn't have prevented finding audibility at least with some of the material.

The accusation is unfair in the sense that M&M are being made responsible for something they are not responsible for, namely the vague definition of what constitutes hi-res.

If you think that it should be defined more stringently, and the material tested for compliance before being used in the test, then you ought to devise a corresponding test. It would be welcomed quite broadly, I trust.

Quote
I did not criticize their "path choice" but that they have not provided thorough measurements before starting the experiment and checking routinely during the experiment.
If the equipment "fault" you criticise is the small linearity problem of one player, discernible with one disc, I remind you to consider how, and by how much, that could have compromised the result. I would not even go as far as calling this a fault, owing to its small scale. Using it as a pretext for dismissing the study is completely out of proportion, IMHO. I much rather have the impression that M&M dutifully rectified the problem once they became aware of it. I trust they would have questioned the results of their own study if they had concluded that the fault was of sufficient magnitude to affect the result.

Quote
Sorry, but first of all flaws must be mentioned
Which M&M did in the case of this player problem, at least as part of the supplementary information published on their website. Had they considered the problem relevant, I believe they would have described it in the paper itself.

Quote
, because bad science is bad science. It harms the reputation of science if these methodological flaws are belittled.
What you proposed would be a real "cheap get-out-of-jail approach" ;)
I could feign surprise here that you don't object to Reiss' usage of a number of papers without even mentioning their flaws, but knowing you, I won't.

Quote
Mhm, further research is recommended? Methinks Dr. Reiss did not just recommend further research, but could, based on his meta-analysis, give some advice to achieve better quality in experiments. :)
I wouldn't have a problem with that. :)

Quote
What you forgot to mention is that I did criticize experiments with the same scrutiny even though I should have liked the results (according to your assumptions). And you should have mentioned that I routinely recommended blind home listening experiments for any listener to learn about his perception, and gave a lot of advice on improving the quality of listening experiments.
This description is just as selective as you accuse mine to be. ;)

Quote
And you should have noticed that I (nearly always) emphasized, besides all the criticism, that Meyer/Moran's hypothesis ("hi-res" does not offer a perceivable difference or advantage compared to CD quality) might be true.
You typically made this look as if you were saying that some material may not offer such a perceivable difference, in other words you have no guarantee that every hi res file is superior to CD quality. That's not a very courageous assertion. In such cases, when things are being asserted that should be self-evident, my mind can't help suspecting that there is another, subliminal message involved. ;)

Quote
Something that you forgot to mention, provided you liked the published findings (see for example Meyer/Moran, or did I miss it?) ...
I mentioned this only because you brought up the topic. I don't say that Reiss' paper is tainted by poor reviewing. It is not the reviewers who are responsible for Reiss' faults, and neither are they responsible for the faults in other papers. So why should I have mentioned it in the context of M&M, if the connection is coincidental rather than causal?

The problem is that such sloppy reviewing reduces the benefit of having a review in the first place. The designation of a paper as being "peer reviewed" may not mean much anymore.

Quote
I'm sorry, but the flaws in Meyer/Moran's experiment (even more so if the additional information is considered) are evident just from reading, provided the reviewer has any experience in DOE. No complicated analysis was needed to realise that.
In the case of Reiss's analysis, a reviewer must IMHO do a lot of work to find something.
It depends on how familiar you are with the references Reiss used. If you don't know any of the papers, and have to go through them to see how their content matches up with what Reiss makes of it, then indeed you have a lot of work on your hands.

Quote
Do we really qualify questions of style as equally important as methodological flaws??
I certainly don't. I'm just wondering why this wasn't caught in review. In no way do I suggest that this amounts to a methodological flaw, or something of equal importance. Again: I can distinguish between the quality of a paper and the quality of the review.

Quote
You mean something like "he shouldn't have used 'important', but 'significant' instead", or "his press release overemphasized this or that", or that he (maybe) didn't notice that others criticized an older variant of "ABX"?
Isn't it telling that in this thread we mainly discuss semantics instead of real errors in the analysis?
We are discussing both, but it is hard to avoid going into discussions about semantics when you are involved. Don't accuse anyone else for something that you bring with you. ;)

Quote
One poster even mentioned Leventhal's publication in the JAES because he felt that Dr. Reiss might have given the impression of being the first to talk about the importance of Type II errors!?
It wasn't me, I trust.

Quote
Sorry, pelmazo, you mentioned a specific issue and I addressed that specific issue. Please don't mix up this specific case with others.
I don't mix it up, I put it into perspective.

Quote
Shouldn´t you complain instead that he was unfair to Kunchur? Without further discussion he excluded his test results from his meta-analysis. :)
He excluded them as two of 11 studies that were testing auditory perception resolution. Table 1 shows that quite clearly. There is a bit of discussion about this, and Reiss notes that they may suggest the underlying causes of discrimination, if there should be any. I am OK with this choice and its justification.

I am more critical of his usage of Kunchur's works as support for the suggestion that humans have a monaural temporal timing resolution of 5 µs. He uses language that keeps him neutral regarding these claims, but his presentation ignores all criticism that has been voiced. I don't think that's OK. Uncritical mentioning of dubious references increases their perceived credibility, without adding any argument or evidence in their favor.

Quote
He explained why Meyer/Moran got detailed remarks and why their result couldn't be used generally but only in parts of the analysis. Nothing wrong with that, and he even found encouraging words (at least in my opinion) in stating:
"However, their experiment was intended to be close to a typical listening experience on a home entertainment system, and one could argue that these same issues may be present in such conditions."
I can't help suspecting that some of the given reasons for exclusion were used just because they happened to be available. In other studies, such information (for example the placement of the players) wasn't given, which of course doesn't mean that there couldn't have been a problem. The argument that the SACD obscures frequencies above 20 kHz is particularly peculiar, since he included other studies that tested wordlength effects and had no extended frequency range, either.

It is true, however, that M&M didn't make sure that their material actually contained extended frequencies and/or extended dynamic range. The list of material is given on their supplementary website, but going through it and analyzing them would presumably have been excessively laborious. I don't see this as a valid criticism of their test, but it does justify excluding the test from the meta-analysis.

Quote
Maybe you just want him to be biased.
He could have expressed much stronger critique, but didn't; and you should be able to argue precisely where his reasoning for not including their results is wrong.
I don't criticise his decision to exclude M&M's test from the meta analysis. I criticise his rather one-sided assessment of it.

Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation

Reply #198
The issue I have is that studios were previously willing to do things like record in DSD or 20+ bits and then sell me a nicely decimated and noise-shaped 16-bit rendition on actual physical media with a printed booklet for $10-15. Now they want to forgo the media and the printing and charge me $20-30 for a master they haven't deliberately screwed up for whatever reason. Stereo "hi-res" products should arguably cost *less* than CDs did, but when people buy into "bigger numbers sound better", the opposite ends up true.
You make the mistake of extrapolating selling prices from manufacturing cost. We're far from this model in many business areas, and you could argue that in the record business, it never was so.

Alternatively, you could say that the higher price of hi-res records is justified by their higher marketing cost. Having famous artists spout nonsense in a YouTube clip about how important hi-res is doesn't come for free, for example.

The real bugger is when you get the same screwed-up version in either format, except for the price difference. The more widely hi-res penetrates the market, the more of this kind of scam we will see, I fear. By going mass-market, I think the hi-res industry is defeating itself in the end. The quality level will be as low as before, the reputation will be ruined at least as thoroughly as before, and the prices won't be kept up, either.

Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation

Reply #199
M&M were even way ahead of their time! They disproved the en vogue DSD upsampling delusion. If the SACD layer was made from low-res sources, you should at least clearly hear the blackened blacks of the DSD conversion on the SACD layer ;)
Is troll-adiposity coming from feederism?
With 24bit music you can listen to silence much louder!