HydrogenAudio

Hydrogenaudio Forum => General Audio => Topic started by: ajinfla on 2016-06-28 14:27:53

Title: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: ajinfla on 2016-06-28 14:27:53
Ok, Dr Reiss was kind enough to notify me this morning, it's here: http://www.aes.org/e-lib/browse.cfm?elib=18296 (http://www.aes.org/e-lib/browse.cfm?elib=18296)
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Arnold B. Krueger on 2016-06-28 14:54:56
Ok, Dr Reiss was kind enough to notify me this morning, it's here: http://www.aes.org/e-lib/browse.cfm?elib=18296 (http://www.aes.org/e-lib/browse.cfm?elib=18296)

Note: This is a free download for all, AES member or not.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: jumpingjackflash5 on 2016-06-28 15:03:53
Ok, Dr Reiss was kind enough to notify me this morning, it's here: http://www.aes.org/e-lib/browse.cfm?elib=18296 (http://www.aes.org/e-lib/browse.cfm?elib=18296)

But if I read correctly, that article claims that some people can recognize 96 kHz sample rates and probably also quantization effects ... " Results showed a small but statistically significant ability of test subjects to discriminate high resolution content, and this effect increased dramatically when test subjects received extensive training"
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Wombat on 2016-06-28 15:10:11
The paper summarizes existing papers. So when the BS test "found" that people hear beryllium sound different with high sample rates, that result is mirrored in this paper as a positive.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: jumpingjackflash5 on 2016-06-28 15:15:00
OK, I know that it is a meta-analysis and I do not want to "promote Hi-Res" through it. But it was kind of surprising to read the conclusions.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: ajinfla on 2016-06-28 15:17:23
Right. He makes it clear that there is no good explanation or established correlation. Only that the stats point to a very small % of folks being able to discriminate. Not why. Or whether that's the preferred sound...;-)
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: jumpingjackflash5 on 2016-06-28 15:25:27
It would have been interesting to find out the characteristics of those people who can discern the differences. Maybe higher sample rates are also not better, simply different. All in all, interesting. It suggests there is more to research in this area, while of course craving Hi-Res remains a bad idea driven by the desire to hear "more and better", which is not endless.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Arnold B. Krueger on 2016-06-28 16:14:22
Ok, Dr Reiss was kind enough to notify me this morning, it's here: http://www.aes.org/e-lib/browse.cfm?elib=18296 (http://www.aes.org/e-lib/browse.cfm?elib=18296)

But if I read correctly, that article claims that some people can recognize 96 kHz sample rates and probably also quantization effects ... " Results showed a small but statistically significant ability of test subjects to discriminate high resolution content, and this effect increased dramatically when test subjects received extensive training"

We long ago (late 1970s) discovered that there may be some very weak, but statistically significant results from large-population subjective testing. Because we were doing actual tests, we were able to do what Science suggests, and that is repeat the same test with the same people, music, system, etc., to see if the results were repeatable.

They weren't repeatable.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: jumpingjackflash5 on 2016-06-28 16:28:46
Ok, Dr Reiss was kind enough to notify me this morning, it's here: http://www.aes.org/e-lib/browse.cfm?elib=18296 (http://www.aes.org/e-lib/browse.cfm?elib=18296)

But if I read correctly, that article claims that some people can recognize 96 kHz sample rates and probably also quantization effects ... " Results showed a small but statistically significant ability of test subjects to discriminate high resolution content, and this effect increased dramatically when test subjects received extensive training"

We long ago (late 1970s) discovered that there may be some very weak, but statistically significant results from large-population subjective testing. Because we were doing actual tests, we were able to do what Science suggests, and that is repeat the same test with the same people, music, system, etc., to see if the results were repeatable.

They weren't repeatable.

Hmmm .... interesting .....
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: greynol on 2016-06-28 17:04:07
Statistics.

The problem with the Meridian (AKA BS) test is that they were banking on a specific outcome and then dressed that pig in lipstick as best they could in order to sell it.  There will be no published repeat, let alone an attempt to refine the test because it likely won't help them sell anything.

Don't lose sight of the motivation.  The advancement of knowledge? Hardly.  It is all about the advancement of the bottom line.  The BS test is all about commerce.  Sadly, the same goes for what the AES has now become.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: ajinfla on 2016-06-28 17:21:38
Well, I agree that more research is needed...by believers.
I also agree that listener training is important. Now exactly how these listeners are to be trained, to hear what exactly....
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Wombat on 2016-06-28 17:34:03
Well, I agree that more research is needed...by believers.
I also agree that listener training is important. Now exactly how these listeners are to be trained, to hear what exactly....
You know where to ask. There are forums full of people hearing filters at 192k and influences of -160dB noise. Wasn't there a new science forum?
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: greynol on 2016-06-28 17:45:50
Well, I agree that more research is needed...by believers.
The devout believers don't need any research.

That is probably the most amusing part about all the posturing over a 56% success rate with a body of listeners who were specifically chosen from the general population because they are more likely to spot coloration resulting from nonlinear processing which quite possibly occurred only after  the signal was converted back to analog.

I didn't read the paper, but I'm betting that jjf5 found nothing* to support his fantasy that someday he will discover unicorns in bits 17-24 of his hi-res purchases.

(*) The BS paper presented exactly *zero* evidence to corroborate the notion that trained listeners will "probably also" be able to recognize quantization effects.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: ajinfla on 2016-06-28 17:54:14
Well, I agree that more research is needed...by believers.
I also agree that listener training is important. Now exactly how these listeners are to be trained, to hear what exactly....
You know where to ask. There are forums full of people hearing filters at 192k and influences of -160dB noise. Wasn't there a new science forum?
Already banned ;-) http://audiosciencereview.com/forum/index.php?threads/is-dsd-superior-or-just-the-audio-file-du-jour.534/page-2#post-15449 (http://audiosciencereview.com/forum/index.php?threads/is-dsd-superior-or-just-the-audio-file-du-jour.534/page-2#post-15449)
The high-IQ types have self-assessed themselves as Kool-Aid-free with their "real experiences".
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: ajinfla on 2016-06-28 17:56:04
I didn't read the paper
Open access...free!!
Nothing groundbreaking.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Wombat on 2016-06-28 17:58:05
I really like the procedure Robert Schulein used lately: IM testing of a tweeter in an HF listening test. I miss such, IMHO, simple fundamentals in other attempts.
BTW, did anyone feel the urge afterwards to start another Droopy video?
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Wombat on 2016-06-28 18:01:55
Already banned ;-) http://audiosciencereview.com/forum/index.php?threads/is-dsd-superior-or-just-the-audio-file-du-jour.534/page-2#post-15449 (http://audiosciencereview.com/forum/index.php?threads/is-dsd-superior-or-just-the-audio-file-du-jour.534/page-2#post-15449)
The high IQ types have self assessed themselves as Kool Aid free with their "real experiences".
Talking about DSD is not healthy. It may be good for you that it ended there.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: greynol on 2016-06-28 18:02:43
Nothing groundbreaking.
I figured it would only serve to rehash a useless discussion.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: greynol on 2016-06-28 18:08:07
Already banned ;-)
If that was your last post there it couldn't have been any more perfect.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: jumpingjackflash5 on 2016-06-28 18:21:33
Well, I agree that more research is needed...by believers.
The devout believers don't need any research.

That is probably the most amusing part about all the posturing over a 56% success rate with a body of listeners who were specifically chosen from the general population because they are more likely to spot coloration resulting from nonlinear processing.

I didn't read the paper, but I'm betting that jjf5 found nothing* to support his fantasy that someday he will discover unicorns in bits 17-24.

(*) The BS paper presented exactly *zero* evidence to corroborate the notion that trained listeners will "probably also" be able to recognize quantization effects.

The paper is mainly about sample rates; only section 3.5 concerns quantization, dithering, etc. Unfortunately, listening tests often mix sample rate and bit depth. I do not expect that I will be able to reliably prove that one can hear above 16 bit if the audio scientists were not able to.

I know that I may irritate some people here with my statements about preference for 24 bit audio, but they are not primarily based on searching for unicorns in the higher bits. As I expressed before, I simply prefer that container because, given today's technology, it has no practical limitations and provides a (probably bigger than necessary, but we have no 20 bit or similar container) safety margin for common usage.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: greynol on 2016-06-28 18:32:25
I didn't read the paper, but I'm betting that jjf5 found nothing* to support his fantasy that someday he will discover unicorns in bits 17-24 (EDIT: of his hi-res purchases).
I simply prefer that container because given todays technology it has no practical limitations
Practical?!?  NO!!!!!!!!!!!!!

prac·ti·cal
/ˈpraktək(ə)l/
adjective

    of or concerned with the actual doing or use of something rather than with theory and ideas.

Quote
I do not expect that I will be able to reliably prove that one can hear ~~above~~ below 16 bit, if the audio scientists were not able to.
Quote
I know that I may irritate some people here with my statements about preference of 24 bit audio but they are not primarily based on searching unicorns in ~~higher~~ lower bits.
Fixed these for you.

No, not "not primarily based," but solely  based -- as evidenced by your previous post.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: greynol on 2016-06-28 18:41:03
More to the notion of practical, please ponder why Meridian came up with MQA.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: ajinfla on 2016-06-28 18:50:56
provides (*) safety margin for common usage.
So does seeing a shrink. Maybe take a Valium.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: jumpingjackflash5 on 2016-06-28 18:52:17
Yes, it can be corrected this way, thank you (I know that these are the lower bits, but their numbering, 17-24, is > 16). But the sense is still the same. I do not want to be seen as a person who repeatedly tries to squeeze a better audio experience from 24 bit containers, and in that paper the majority of tests mix sample rate and bit depth. I do not expect that individual people can prove the benefits of 24 bit audio to others, because even scientists with their testing equipment were not reliably able to do so. So it is up to people whether they use 24 or 16 bit for listening; in either case they won't be wrong.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: greynol on 2016-06-28 18:55:20
So does seeing a shrink. Maybe take a Valium.
Well you can at least find some humor in his posts on hi-res, so they aren't completely without value.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: jumpingjackflash5 on 2016-06-28 18:57:13
Well you can at least find some humor in his posts on hi-res, so they aren't completely without value.

At least you are polite today ;-)
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: greynol on 2016-06-28 19:03:15
it is up to the people if they will use 24 or 16 bit for listening - in either case they won't be wrong.
...and this is all your countless pages on the topic boil down to.  This forum concerns itself with the validity of facts.  The reasons people give for listening to 24 bits or greater should be what is of interest.  Aside from apples and oranges comparisons, I've seen very few, if any, that were based on something that could be objectively demonstrated.  So, yeah, I'd say people can be quite wrong in their decision to choose 24 over 16.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: jumpingjackflash5 on 2016-06-28 19:08:31
it is up to the people if they will use 24 or 16 bit for listening - in either case they won't be wrong.
...and this is all your countless pages on the topic boil down to.  This forum concerns itself with the validity of facts.  The reasons people give for listening to 24 bits or greater should be what is of interest.  Aside from apples and oranges comparisons, I've seen very few, if any, that were based on something that could be objectively demonstrated.  So, yeah, I'd say people can be quite wrong in their decision to choose 24 over 16.

Yes, I understand your statement and I know from what position you are arguing. That is completely OK, and in the sense of validity of facts you cannot do otherwise, since today we do not have any reliable facts that show the necessity of 24 bit for listening purposes.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: greynol on 2016-06-28 19:11:57
Exactly.  Therefore any additional squirming you do will only be seen by me as pining for the ability to hear unicorns.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: jumpingjackflash5 on 2016-06-28 19:24:14
At least on this level we have achieved understanding :)
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: greynol on 2016-06-28 19:36:24
on this level
With this I surely hope you aren't still digging in your heels with more of this type of nonsense (https://hydrogenaud.io/index.php/topic,111271.msg916697.html#msg916697).
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: jumpingjackflash5 on 2016-06-28 19:41:25
on this level
With this I surely hope you aren't still digging in your heels with more of this type of nonsense (https://hydrogenaud.io/index.php/topic,111271.msg916697.html#msg916697).

Now I am slightly  more experienced than in February :)
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: greynol on 2016-06-28 20:01:27
Considering that you continued your contortions for another 20+ pages after that we can only hope so.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: spoon on 2016-06-28 23:15:01
More fuel for the fire:

A Meta-Analysis of High Resolution Audio Perceptual Evaluation:

http://www.aes.org/tmpFiles/elib/20160628/18296.pdf

"Results showed a small but statistically significant ability of test subjects to discriminate high resolution content,
and this effect increased dramatically when test subjects received extensive training."
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Wombat on 2016-06-28 23:25:03
Some hours ago:
Ok, Dr Reiss was kind enough to notify me this morning, it's here: http://www.aes.org/e-lib/browse.cfm?elib=18296 (http://www.aes.org/e-lib/browse.cfm?elib=18296)
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: greynol on 2016-06-28 23:29:21
"Results showed a small but statistically significant ability of test subjects to discriminate high resolution content,
and this effect increased dramatically when test subjects received extensive training."
That is the contention which originated from Meridian, perhaps even a verbatim talking point.  Don't be fooled into thinking otherwise.

Anyway, like the hypersonic mumbo jumbo, there has been zero 3rd party verification.

Seriously, the discussion could just end now, but I don't think that would make krab happy.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: ajinfla on 2016-06-28 23:34:16
increased dramatically when test subjects received extensive training."
The question I have, is training at hearing what exactly?
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: greynol on 2016-06-28 23:39:12
You know, the difference between a <cough> "typical" filter and a Meridian filter.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Arnold B. Krueger on 2016-06-29 02:38:40
More fuel for the fire:
A Meta-Analysis of High Resolution Audio Perceptual Evaluation:
http://www.aes.org/tmpFiles/elib/20160628/18296.pdf
"Results showed a small but statistically significant ability of test subjects to discriminate high resolution content,
and this effect increased dramatically when test subjects received extensive training."

When large numbers of trials are performed, the percentage correct required for statistical significance can be, for practical purposes, vanishingly small. For example, according to Table 2, there were a total of 12,645 trials with 6,736 (53.27%) correct responses. While there were enough trials to clear a fairly minimal confidence level, the percentage correct is dauntingly small.
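Arnold's point can be put in numbers with a quick calculation (a sketch using the Table 2 totals quoted above; the normal approximation to the binomial is assumed adequate at this sample size):

```python
import math

# Table 2 totals as quoted in this thread: 12,645 trials, 6,736 correct.
n, k = 12645, 6736
p_hat = k / n                                # observed proportion correct

# Normal approximation to the binomial under H0: p = 0.5 (pure guessing)
z = (k - n * 0.5) / math.sqrt(n * 0.25)
p_value = 0.5 * math.erfc(z / math.sqrt(2))  # one-sided tail probability

print(f"proportion correct: {p_hat:.4f}")    # ~0.5327
print(f"z-score:            {z:.2f}")        # ~7.35
print(f"p-value:            {p_value:.1e}")  # far below any usual threshold
```

In other words, the significance reflects the enormous trial count, not a large effect: 53.27% correct is barely above chance, yet the z-score is huge.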

Compare the actual facts based on reliable listening tests with the claims that the leading high end audio pundits  make:

http://www.theabsolutesound.com/articles/the-move-to-make-hi-res-mainstream/ (http://www.theabsolutesound.com/articles/the-move-to-make-hi-res-mainstream/)
Quote from: TAS
"After the panel discussion we separated into groups and listened to Sony’s latest hi-res players, the Sony/Whitledge Design car system, and then to various transfers of the same recording. In the Capitol Records’ control room we heard four different versions of an old mono Sinatra track, each taken from a different era (vinyl, CD, remastered CD with noise removal, and a 192/24 hi-res file). The CD and remastered CD were the worst, with the hi-res file sounding infinitely better than any of the others. The difference in sound quality was stark."

Is there anything in the quote from TAS that would be warranted by a difference that can only be heard reliably a shade over half of the time?
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: greynol on 2016-06-29 07:33:55
While there were enough trials to provide  a fairly minimal confidence level, the percentage correct is pretty dauntingly small.

Compare [this to typical] claims that [...] high end audio pundits make:
Quote from: TAS
"[...]The difference in sound quality was stark."
It's a wonder how hi-res pundits might manage to rationalize the dichotomy, especially when Meridian enlisted trained listeners.
Were any of these "trained listeners" also hi-res pundits?
What tune are they singing, that they got statistically significant results with just ~20 trials?
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: jumpingjackflash5 on 2016-06-29 09:17:41
192 kHz PCM is absolute overkill. Still, they are mixing bit depth and sample rate. Of course 24/192 could sound different from CD/LP, but that is not the point. On the other hand, more research is needed into whether delivery in, e.g., 24/48 could provide a benefit to end listeners; up to now we do not have any reliable evidence.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Arnold B. Krueger on 2016-06-29 12:14:01
192 kHz PCM is absolute overkill. Still they are mixing bit rate and sample rate. Of course 24/192 could sound differently from CD/LP but that is not the point. On the other hand more research is needed if delivery e.g. in 24/48 could provide benefit to listening of end users - up to now we do not have any reliable evidence.

One of the funnier aspects of the article can be found by checking the footnotes. There are papers describing approaches to this problem going back to 1931, and still all the high-res advocates like this author can come up with is a call for more experiments!

Even with some pretty obvious cherry-picking of results, the best the author could come up with, according to Table 2, is a total of 12,645 trials with 6,736 (53.27%) correct responses. This is 3.27 percentage points better than chance.

In most areas of human  endeavor, people would call this sort of weak performance  a lost cause.  The call for more testing looks to me like a lame attempt to obfuscate the absence of compelling results after over a century of trying and over 10,000 trials, even after fairly obvious cherry-picking.

Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: spoon on 2016-06-29 12:17:42
How I would love to run a 192/24 blind test using a Pono and Neil Young's own material... Where better to run the test than a high-end audio show?
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: KozmoNaut on 2016-06-29 13:04:46
How I would love to do a 192/24 blind test trial using a pono and Neil Young's own material...Where better to run the test? a hi-end audio show.

Using a Pono may actually give an edge to the hi-res pundits, as it seems to have some pretty severe high-frequency rolloff due to the filters chosen, -0.5dB at 10kHz and -5dB at 20kHz: http://archimago.blogspot.dk/2015/08/measurements-ponoplayer-another-mans.html

Using a higher sample rate on the Pono moves that rolloff outside of the audible range, so there may actually be an audible difference due to the hardware design.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: jumpingjackflash5 on 2016-06-29 13:28:15
192 kHz PCM is absolute overkill. Still they are mixing bit rate and sample rate. Of course 24/192 could sound differently from CD/LP but that is not the point. On the other hand more research is needed if delivery e.g. in 24/48 could provide benefit to listening of end users - up to now we do not have any reliable evidence.

One of the funnier aspects of the article can be found en by checking the footnotes. There are papers describing approaches to this problem going back to 1931, and still all the high res advocates like this author can come up with is a call for more experiments!

Even with some pretty obvious cherry picking of results, the best the author could come up with is according to Table 2, a total of 12,645 trials with 6,736 or 53.27% correct responses.  This is 3.27 % better than placebo.

In most areas of human  endeavor, people would call this sort of weak performance  a lost cause.  The call for more testing looks to me like a lame attempt to obfuscate the absence of compelling results after over a century of trying and over 10,000 trials, even after fairly obvious cherry-picking.



I do not want to strive for Hi-Res at all costs, but those arguments are weak. Only for roughly the last 20 years have we had real technology that can do 24 bit and/or 48/96 kHz sample rates reasonably well. So no century-wide research ....
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: spoon on 2016-06-29 13:54:52
Using a Pono may actually give an edge to the hi-res pundits, as it seems to have some pretty severe high-frequency rolloff due to the filters chosen, -0.5dB at 10kHz and -5dB at 20kHz: http://archimago.blogspot.dk/2015/08/measurements-ponoplayer-another-mans.html

Using a higher sample rate on the Pono moves that rolloff outside of the audible range, so there may actually be an audible difference due to the hardware design.

A white paper on the filter in question:

https://www.ayre.com/white_papers/Ayre_MP_White_Paper.pdf
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Arnold B. Krueger on 2016-06-29 14:07:55

I do not want to strive for Hi-Res at all costs, but those arguments are weak.

I don't expect to convert any true believers in high resolution audio with logical arguments.

Quote
Only last ca 20 years  we have real technology that can do 24 bit and/or 48/96 kHz sample rates reasonably well. So no century wide research ....

I guess you haven't noticed that we still can't do 24 bits.

More to the point, there is no practical purpose that would be satisfied by being able to do so.  In fact 16/44 is an overkill format as compared to the limitations of human hearing and musical events.

If you want to see an example of a weak argument, consider the argument that after having 24/96 gear with high performance at our disposal for 20-ish years, the best that high resolution advocates can come up with seems to be results that are less than 4% better than Placebo.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: ajinfla on 2016-06-29 14:20:54
those arguments are weak. Only last cca 20 years  we have real technology that can do 24 bit and/or 48/96 kHz sample rates reasonably well. So no century wide research ....
Believers believe we've only been able to generate >20khz signals for 20yrs??
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: spoon on 2016-06-29 14:23:16
I fully agree with the point of 24 vs 16 bit, but I think 44.1 kHz vs 192 kHz is less clear cut, and this is highlighted perfectly by the Pono player (through its digital filters). Playing back 192 kHz, the filters are less likely to screw up the actual stuff you can hear, so in many respects 192 kHz has the advantage.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: KozmoNaut on 2016-06-29 14:46:09
I fully agree with the point of 24 vs 16 bit, but I think 44KHz vs 192KHz is less clear cut, and is highlighted perfectly by the pono player (though its digital filters). Playing back 192KHz the filters are less-likely to screw up the actual stuff you can hear, so in many respects 192KHz has the advantage.

But that's only really because the filters on the Pono have excessively soft rolloff. Any other even half-way competently designed DAC will only roll off maybe -0.5dB at 20kHz, compared to the -5dB on the Pono.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Arnold B. Krueger on 2016-06-29 14:48:40
those arguments are weak. Only last cca 20 years  we have real technology that can do 24 bit and/or 48/96 kHz sample rates reasonably well. So no century wide research ....
Believers believe we've only been able to generate >20khz signals for 20yrs??

If you set the bar high enough, you can push forward the point in time when a certain situation was first true.

For example, the AES Audio Timeline

http://www.aes.org/aeshc/docs/audio.history.timeline.html (http://www.aes.org/aeshc/docs/audio.history.timeline.html)

says "1996 - Experimental digital recordings are made at 24 bits and 96 kHz."

I know that I was able to make high-quality 24/96 recordings using B&K omni mics with flattish response up to 40 kHz and a reasonably priced pro audio recording interface card (Card Deluxe) ca. Y2K.

I doubt that there are many who are aware of the various work-arounds that were used to gather experimental data > 20 KHz before then.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: jumpingjackflash5 on 2016-06-29 15:16:27

I do not want to strive for Hi-Res at all costs, but those arguments are weak.

I don't expect to convert any true believers in high resolution audio with logical arguments.

Quote
Only last ca 20 years  we have real technology that can do 24 bit and/or 48/96 kHz sample rates reasonably well. So no century wide research ....

I guess you haven't noticed that we still can't do 24 bits.

More to the point, there is no practical purpose that would be satisfied by being able to do so.  In fact 16/44 is an overkill format as compared to the limitations of human hearing and musical events.

If you want to see an example of a weak argument, consider the argument that after having 24/96 gear with high performance at our disposal for 20-ish years, the best that high resolution advocates can come up with seems to be results that are less than 4% better than Placebo.


Interesting. I'd just like to add that I know the SNR of most equipment is between 100-120 dB, i.e., not fully utilizing the 24 bit 144 dB range. But there is also quantization error. Considering sample rates, isn't 48 kHz enough for very good filtering? And it is not about generating a >20 kHz signal but about the widespread availability of 24 bit and/or 48/96 kHz DACs.
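For reference, the numbers above line up with the standard rule of thumb for an ideal N-bit quantizer, DR ≈ 6.02·N + 1.76 dB (full-scale sine versus quantization noise; the oft-quoted "144 dB" is the rougher 6 dB/bit figure). A minimal sketch:

```python
def dynamic_range_db(bits: int) -> float:
    """Theoretical SNR of an ideal quantizer for a full-scale sine:
    DR = 6.02 * N + 1.76 dB."""
    return 6.02 * bits + 1.76

for bits in (16, 20, 24):
    print(f"{bits}-bit: {dynamic_range_db(bits):.2f} dB")
# 16-bit: 98.08 dB, 20-bit: 122.16 dB, 24-bit: 146.24 dB
```

So equipment with a 100-120 dB SNR already sits between the 16-bit and 20-bit theoretical limits, which is the "not fully utilizing 24 bit" point.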

I don't want to flame about Hi-Res, BTW. That discussion is endless, but I still feel that a too restrictive approach prevails here, although I fully respect that up to now we do not have any proof that it is necessary for a complete listening experience.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: ajinfla on 2016-06-29 15:31:13
Only last ca 20 years  we have real technology that can do 24 bit and/or 48/96 kHz sample rates reasonably well. So no century wide research

And it is not about generating >20kHz signal but widespread availability of 24 bit and/or 48/96 Khz DACs.
::)
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: apastuszak on 2016-06-29 15:59:42
http://www.aes.org/tmpFiles/elib/20160629/18296.pdf

Still reading through it.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: pelmazo on 2016-06-29 17:25:46
...although i fully respect that up to now we do not have any prove that it is neccessary for complete listening experience.
This almost made me giggle.

Sane people would probably say that we have proof enough that HRA isn't necessary for a complete listening experience. But that would require a reasonable notion of what constitutes a good enough proof, and what constitutes a good enough approximation of a complete listening experience.

But that's unlikely to get accepted by audiophiles. For them, there can't ever be a proof convincing enough, and an experience complete enough. That's a matter of principle.

Heck, I'm trying to work out in my head what a 'complete listening experience' would actually be. It certainly wouldn't circle around getting presented with frequencies my ears can't hear, in the hope of me being able to perceive them in some yet unexplained way. My associations are much more in the direction of getting presented the complete soundfield rather than only a stereo approximation, and with getting all the non-acoustic stuff that contributes to an experience. I'm human after all, and I perceive through all my senses, and all this contributes to an experience.

So my answer to those wanting a complete listening experience would be: Forget it. HRA won't get you a micron closer, since the problem is somewhere entirely different. This entire HRA bandwagon is barking up the wrong tree. And for a long time already.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Arnold B. Krueger on 2016-06-29 17:39:56

Interesting. I'd just like to add that I know the SNR of most equipment is between 100 and 120 dB, i.e. not fully utilizing the 144 dB range of 24 bit.

If you include mics and mic preamps as "equipment", then 100 dB is probably the more realistic number.

Quote
But there is also quantization error

Just another influence that is about 100 dB down

Furthermore, with perceptual noise shaping, it can be the perceptual equivalent of flat quantization noise 120 dB down
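For what it's worth, the rule of thumb behind the "144 dB for 24 bit" figure is easy to check. This is a generic sketch, not anything from the paper:

```python
# Idealized dynamic range of an N-bit quantizer with a full-scale sine input:
# roughly 6.02*N + 1.76 dB (quantization noise assumed flat, no noise shaping).
def quantization_dynamic_range_db(bits: int) -> float:
    return 6.02 * bits + 1.76

print(quantization_dynamic_range_db(16))  # ~98 dB, the familiar 16-bit figure
print(quantization_dynamic_range_db(24))  # ~146 dB
```

So "about 100 dB down" is already at the theoretical limit of 16 bit, well before noise shaping buys anything extra.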

Quote
Considering sample rates, isn't 48 kHz enough for very good filtering?

Filtering only needs to be good enough to avoid audible artifacts. Done deal @ 44.1 kHz. You do have to pay a little attention to what you are doing and avoid doing things the worst possible way.

Quote
And it is not about generating >20 kHz signals but about the widespread availability of 24 bit and/or 48/96 kHz DACs.


That's off topic. The title of the paper is: A Meta-Analysis of High Resolution Audio Perceptual Evaluation. Notice that ADCs and DACs are not the issue, but what people can hear is.

Now that we've cleared up that little problem, we see that the question is all about generating >20 kHz signals by any of a number of reasonable means.

This is one of those places where we separate the scientists from the audiophiles. The audiophiles think of problems in terms of specific pieces of audio gear. Scientists think about things like first principles and basic physical processes.


Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: drewfx on 2016-06-29 18:20:52
I fully agree with the point of 24 vs 16 bit, but I think 44.1 kHz vs 192 kHz is less clear cut, and this is highlighted perfectly by the Pono player (through its digital filters). Playing back 192 kHz, the filters are less likely to screw up the actual stuff you can hear, so in many respects 192 kHz has the advantage.

So that means that if I create a device with good filters at 44.1 kHz and horrible filters at 192 kHz, it's a reasonable argument that 192 kHz is inferior, right?
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Wombat on 2016-06-29 18:57:28
The misleading point of this latest AES paper is that it concludes a trained listener can distinguish high sample rates, while leaving out what this actually means. It repeats what people like BS wanted to spread with their own papers.
Every audiophile now absolutely feels himself to be well trained, and all who can't hear it are not deaf but ignorant.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: ajinfla on 2016-06-29 19:34:42
The misleading point of this latest AES paper is that it concludes a trained listener can distinguish high sample rates, while leaving out what this actually means. It repeats what people like BS wanted to spread with their own papers.
Every audiophile now absolutely feels himself to be well trained, and all who can't hear it are not deaf but ignorant.
Yes, I posited this dilemma in the AES comments section. Training for hearing what?
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Wombat on 2016-06-29 19:44:37
Also, many of these former papers had flaws that are completely left out. The BS paper alone, even where positive in blaming filter ringing as audible, needed the very strong additional ringing of a steep 192->44.1->192 resampling, which is not present in normal distribution.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: spoon on 2016-06-29 20:38:05
I fully agree with the point of 24 vs 16 bit, but I think 44.1 kHz vs 192 kHz is less clear cut, and this is highlighted perfectly by the Pono player (through its digital filters). Playing back 192 kHz, the filters are less likely to screw up the actual stuff you can hear, so in many respects 192 kHz has the advantage.

So that means that if I create a device with good filters at 44.1 kHz and horrible filters at 192 kHz, it's a reasonable argument that 192 kHz is inferior, right?

I suppose it is possible but less likely. You see, a device filtering around the 22 kHz range is cutting very close to what you can hear, whereas a 192 kHz playback device which filters above 60 kHz, for example, is far above the audible range.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: drewfx on 2016-06-29 22:24:40
The point I was (indirectly) making is we need to keep clear whether we are talking about problems with 44.1 kHz in general or with a specific device/DAC/filter's implementation at 44.1 kHz.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: greynol on 2016-06-29 22:41:01
Playing back 192KHz the filters are less-likely to screw up the actual stuff you can hear, so in many respects 192KHz has the advantage.
I guess you haven't seen this, which essentially amounts to a prerequisite for this community:
http://xiphmont.livejournal.com/58294.html
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: spoon on 2016-06-29 23:17:19
I have seen that many years ago, and it is well known that almost all audio DACs oversample precisely to create some headroom for the filter to work. Which is why the Pono is so puzzling.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: ajinfla on 2016-06-29 23:23:12
I suppose it is possible but less likely, you see a device filtering around the 22KHz range is very close to what you can hear
Well, if one is 12 yrs old. I'm 50 and struggling by 15 kHz.
I'd wager old deaf audiophiles like Neil Young driving this craze are done by 11 kHz. Not quite "very close", IMO.
But I believe the phantom menace here is that dreaded "time smear"....
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: greynol on 2016-06-29 23:32:13
No one is questioning the validity of oversampling as a means to help eliminate imaging; rather, this is about whether 44.1k as a delivery format is adequate for reproduction of recorded material.

There are plenty of hardware devices that get it wrong.

There have been countless discussions on these things going all the way back to this forum's infancy.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Arnold B. Krueger on 2016-06-29 23:38:46
The misleading point of this last AES paper is that it summarizes a trained listener can distinguish High Samplerates even when letting out what this means. It repeats what people like BS wanted to spread with own papers.
Every audiophile absolutely feels himself as well trained now and all that can't hear it are not deaf but ignorant.
Yes, I posited this dilemma in the AES comments section. Training for hearing what?

Looking at the results in the paper that made it through the author's cherry-picking: it appears that there were 12,645 trials, half A and half B. If the listeners were guessing purely randomly, they would have obtained correct identification about 6,322 times. They did a trifle better than that and obtained correct identification 6,736 times. IOW, in 414 out of 12,645 trials, or about once every 30 trials, the listeners actually heard a difference. The author seems to have provided no information about the training, nor any kind of indication of what the training actually accomplished.

I'll give the author the benefit of the doubt, and estimate that the listeners' accuracy was doubled. IOW, they now provided an extra correct response for every 15 trials. Trouble is, training people to do something that they fail at so often is very difficult and frustrating.
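The arithmetic above can be sanity-checked with a normal approximation to the binomial. The trial counts (12,645 trials, 6,736 correct) are the figures quoted in this thread, not taken from the paper's tables directly, so treat them as assumptions:

```python
import math

# Trial counts as quoted in this thread (assumed, not verified against the paper).
n, k = 12645, 6736
rate = k / n                              # observed proportion correct
expected = n * 0.5                        # expected correct under pure guessing
z = (k - expected) / math.sqrt(n * 0.25)  # normal approximation to the binomial

print(f"rate = {rate:.3f}, excess correct = {k - expected:.0f}, z = {z:.2f}")
```

A z-score around 7 means the excess over chance is far too large to be luck, so the dispute here is about what a ~53% hit rate means in practice, not whether it is statistically significant.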
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: greynol on 2016-06-29 23:48:00
The author didn't perform the test, but is rather sifting through the data of
http://www.aes.org/e-lib/browse.cfm?elib=17497
among other tests.

I am certain he has no answer; besides, AJ's question was rhetorical. I'm afraid that only our friends at Meridian and the people who were involved in the process know the answer. I wouldn't hold my breath waiting for any further information.

Like I previously intimated, they have their slam dunk and are now busy selling licenses for MQA.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Arnold B. Krueger on 2016-06-30 01:00:16
I have seen that many years ago, and it is well known that almost all audio DACs oversample precisely to create some headroom for the filter to work.

The primary  purpose of oversampling relates to improving dynamic range. 

It is very feasible to have a digital filter that is very effective and also does not use oversampling. IOW, it operates at the same clock frequency as the data it processes.

It is also possible to obtain good measured and audible performance out of brick wall filters that are entirely implemented in the analog domain; it is just that they are very expensive to make properly. They were used in the first several generations of digital audio recorders such as the Sony PCM-1610:

http://www.realhomerecording.com/docs/Sony_PCM-1610_brochure.pdf (http://www.realhomerecording.com/docs/Sony_PCM-1610_brochure.pdf)

There is about 6 kHz between the point where a sharp "brickwall" low pass filter ceases to have audible effects and the Nyquist frequency associated with 16/44 digital. It's not all that tight.

Quote
Which is why the pono is so puzzling.

They are just taking advantage of the fact that digital filters can be designed to be sonically transparent or not, depending on the will of the designer. The Pono is not unique in possessing digital filters with audible effects. It's the same basic philosophy as SET power amps.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: pelmazo on 2016-06-30 10:37:50
Furthermore, with perceptual noise shaping, it can be the perceptual equivalent of flat quantization noise 120 dB down.
Even without taking perceptual noise shaping into account, the oft-repeated argument that human hearing can span a dynamic range of up to 120 dB, hence we need 20 bit, is misguided. Reiss also doesn't fail to mention this flawed argument:
Quote
It is well-known that the dynamic range of human hearing (when measured over a wide range of frequencies and considering deviations among subjects) may exceed 100 dB. Therefore, it is reasonable to speculate that bit depth beyond 16 bits may be perceived.
The flaw with this argument lies in two different and incompatible notions of dynamic range, i.e. it is a case of comparing apples with oranges:

In digital audio, the dynamic range is defined as the difference between a full scale sine wave and the noise floor. Note that a sine wave is being compared with noise, which makes it dubious already.

In human hearing, dynamic range is defined as the difference between the loudest and the softest sine wave that can be heard. Hence the frequency dependency, i.e. Fletcher-Munson. Note that there's no mention of noise here. Of course "the loudest sine wave" is a weakly defined term, since there is no hard limit. You have to set the limit somewhat arbitrarily, depending on the damage level and distortion you are willing to accept.

It should be well known that human hearing can hear sine waves buried in the noise floor. That's not as astonishing as some would think, because any frequency-selective measurement can do the same. It is all about measurement bandwidth. Where the ear has its Bark bands, a measurement instrument could use a filter bank, or an FFT. The result is that the noise floor as calculated from the number of bits is not the limit of audibility for a tone. Hence you can't apply digital audio wordlengths directly to Fletcher-Munson curves.

If one wanted to do a fair comparison, one would have to relate the noise floor of digital audio with the noise floor of the human ear. You can't glean that from the Fletcher-Munson curves. The picture would be rather different. You'd suddenly find that (surprise!) 16 bits are sufficient.
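The bandwidth point above is easy to demonstrate numerically. This is a toy sketch with made-up numbers, not anything from Reiss: a 1 kHz tone roughly 32 dB below the broadband noise RMS is recovered cleanly by projecting onto a single DFT bin, i.e. a very narrowband "filter".

```python
import math, random

# Illustrative sketch: a sine "buried" in broadband noise is easy to pull out
# with a frequency-selective measurement. All numbers are made up for the demo.
random.seed(0)
fs, f0, n = 48000, 1000, 48000           # 1 second of audio, 1 kHz tone
tone_amp = 0.01                          # tone well below the noise RMS (~0.29)
x = [tone_amp * math.sin(2 * math.pi * f0 * t / fs) + random.uniform(-0.5, 0.5)
     for t in range(n)]

noise_rms = math.sqrt(sum(v * v for v in x) / n)   # broadband level, noise-dominated

# Project onto the 1 kHz sinusoid: one DFT bin, i.e. a very narrow filter.
re = sum(v * math.cos(2 * math.pi * f0 * t / fs) for t, v in enumerate(x))
im = sum(v * math.sin(2 * math.pi * f0 * t / fs) for t, v in enumerate(x))
est_amp = 2 * math.sqrt(re * re + im * im) / n     # recovered tone amplitude

print(f"broadband RMS ~ {noise_rms:.3f}, recovered tone amplitude ~ {est_amp:.4f}")
```

The broadband RMS is dominated by the noise, yet the single-bin estimate lands near the true 0.01 amplitude; that is the sense in which the ear's Bark-band selectivity lets it hear tones "below the noise floor".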

I have seen that many years ago, and it is well know that almost all audio DACS over sample to precisely create some headroom for the filter to work.
It would be better to say: ...to create some headroom to make the analog filter simpler.

It's not that without oversampling the filter wouldn't work. Numerous implementations show that it would, given enough care in designing it. Oversampling is a way of putting some of the reconstruction work into the digital domain, so that less of it needs to be done in the analog domain. The cheaper digital circuitry becomes, compared to analog circuitry that does the same job, the more economic sense oversampling makes. Given the fast advance in digital technology, you shouldn't be surprised that over time, the economic advantage of digital technology became greater and greater, so the push to do more and more of the reconstruction on the digital side got bigger and bigger. The sigma-delta technology is the pinnacle of this. You can today get converter chips for a dollar which contain everything except a passive RC-filter needed on the output, which produce a fidelity that you would never be able to match with a filter that's 100% in the analog domain.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Arnold B. Krueger on 2016-06-30 14:27:08
Furthermore, with perceptual noise shaping, it can be the perceptual equivalent of flat quantization noise 120 dB down.
Even without taking perceptual noise shaping into account, the oft-repeated argument that human hearing can span a dynamic range of up to 120 dB, hence we need 20 bit, is misguided.

Agreed. The oft-quoted source for measurements showing that the ear has > 120 dB dynamic range is "Louis D. Fielder, May 1981, Dynamic Range Requirement for Subjective Noise Free Reproduction of Music". That argument and others are based on the idea that the human ear can hear a pure tone, at the frequency where it is most sensitive, with an amplitude of about -6 dB SPL. The threshold of pain is usually given as 120 dB SPL or so, so the ear's dynamic range must be ca. 126 dB and we need reproduction systems capable of doing this.

Engaging the real world points out a number of flaws when this is applied to recordings of the real world that are listened to for pleasure.

The first problem I've noticed is that listening to very loud music causes the ear to experience short- and intermediate-term insensitivity, and listening to music that peaks in the 110 dB SPL and greater range for any but the briefest periods of time desensitizes the ear to the point where hearing normal speech (ca. 60 dB SPL), let alone sounds near the threshold of hearing, can be difficult or impossible. If you use these experiences to estimate the dynamic range of the human ear, it is about 60 dB.

Careful reading of Fielder's paper shows that his reference sounds for judging loud music were frequently based on electronically amplified drums. Years of experience doing technical setup and mixing for live concerts with typical electronically amplified instruments revealed to me that the noise floor of the sound reinforcement equipment being used is far from SOTA, and sometimes clearly audible in the paying seats during the concert. The peak levels may be > 120 dB SPL, but the noise floor may again be > 60 dB.

Measuring from the loudest sound that can be heard one day to the softest sound that may be heard on some other day in some other place presumes that the device being measured is free of nonlinear effects such as intermodulation distortion and dynamics compression, and is unaffected by the sounds being listened to no matter how loud. Those must be some golden ears, because I have never encountered them in real life.

Quote
Reiss also doesn't fail to mention this flawed argument:
Quote
It is well-known that the dynamic range of human hearing (when measured over a wide range of frequencies and considering deviations among subjects) may exceed 100 dB. Therefore, it is reasonable to speculate that bit depth beyond 16 bits may be perceived.
The flaw with this argument lies in two different and incompatible notions of dynamic range, i.e. it is a case of comparing apples with oranges:

In digital audio, the dynamic range is defined as the difference between a full scale sine wave and the noise floor. Note that a sine wave is being compared with noise, which makes it dubious already.

In human hearing, dynamic range is defined as the difference between the loudest and the softest sine wave that can be heard. Hence the frequency dependency, i.e. Fletcher-Munson. Note that there's no mention of noise here. Of course "the loudest sine wave" is a weakly defined term, since there is no hard limit. You have to set the limit somewhat arbitrarily, depending on the damage level and distortion you are willing to accept.

It should be well known that human hearing can hear sine waves buried in the noise floor. That's not as astonishing as some would think, because any frequency-selective measurement can do the same. It is all about measurement bandwidth. Where the ear has its Bark bands, a measurement instrument could use a filter bank, or an FFT. The result is that the noise floor as calculated from the number of bits is not the limit of audibility for a tone. Hence you can't apply digital audio wordlengths directly to Fletcher-Munson curves.

If one wanted to do a fair comparison, one would have to relate the noise floor of digital audio with the noise floor of the human ear. You can't glean that from the Fletcher-Munson curves. The picture would be rather different. You'd suddenly find that (surprise!) 16 bits are sufficient.

Agreed.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: KozmoNaut on 2016-06-30 15:24:26
The first problem I've noticed is that listening to very loud music causes the ear to experience short- and intermediate-term insensitivity, and listening to music that peaks in the 110 dB SPL and greater range for any but the briefest periods of time desensitizes the ear to the point where hearing normal speech (ca. 60 dB SPL), let alone sounds near the threshold of hearing, can be difficult or impossible. If you use these experiences to estimate the dynamic range of the human ear, it is about 60 dB.

Not to mention the permanent hearing damage associated with listening to music at these sound pressure levels. Unless that's what you meant by desensitizing?

I've certainly been a lot better at remembering my earplugs since:

1) I developed tinnitus, which sucks. Young people, wear your goddamn earplugs!
2) I had my hearing tested and discovered hearing loss around 3-4kHz and above ~14kHz on my left ear and above ~16kHz on my right ear (and I'm only 30yo)
3) I happened to stand beside the mixing desk at an indoor concert and noticed that the normal sound level was around 105dB with peaks all the way up to 120dB. I hadn't really thought about it before.

Most of my friends don't wear earplugs to concerts, and I expect them to be almost completely deaf around the age of 50 or so. A lot of them have the attitude that "if it's too loud, you're too old", but I think it should be the other way around.

As an aside, most audiophiles who claim to be able to hear super-high frequencies tend to be >50 yo, so it's painfully obvious that they're bullshitting.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: pelmazo on 2016-06-30 16:25:15
The oft-quoted source for measurements showing that the ear has > 120 dB dynamic range is "Louis D. Fielder, May 1981, Dynamic Range Requirement for Subjective Noise Free Reproduction of Music". That argument and others are based on the idea that the human ear can hear a pure tone, at the frequency where it is most sensitive, with an amplitude of about -6 dB SPL. The threshold of pain is usually given as 120 dB SPL or so, so the ear's dynamic range must be ca. 126 dB and we need reproduction systems capable of doing this.
I reviewed this article, and I think you remember it wrongly. Fielder actually refers to just noticeable noise levels, rather than just noticeable tone levels. So he actually does compare apples to apples.

In order to reach 118 dB of required dynamic range, he had to take the most extreme percussive classical music, close-mike it, and assume the most acute listener in detecting a noise floor increase. You'd have to have the very quietest part, where noise floor differences might be heard by a few people, before the loud part, because after the loud part nobody would detect such noise floor differences anymore.

But since he assumed dither with no noise shaping, and since the sensitivity to background noise is best between 3 and 7 kHz, you still have the possibility to use noise shaped dither to give you the desired dynamic range with 16 bit, even though the case is already unrealistic and extreme.

If Fielder's work is used to justify 20 bit or 24 bit for professional gear used in recording and production, I'm all for it. But that's what we have today, anyway. As an argument for HRA for distribution to consumers, it doesn't fit. He doesn't say so, either, so I don't blame him.

You can get the dynamic range required for all realistic cases from the CD if you want to, and if you know what you are doing. This would even apply when you really wanted to release a disk with no compression at all, which hardly anybody does, even for classical concerts.

In this sense, I agree with you.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: krabapple on 2016-06-30 19:29:57
"Results showed a small but statistically significant ability of test subjects to discriminate high resolution content,
and this effect increased dramatically when test subjects received extensive training."
That is the contention which originated from Meridian, perhaps even a verbatim talking point.  Don't be fooled into thinking otherwise.

Anyway, like the hypersonic mumbo jumbo, there has been zero 3rd party verification.

Seriously, the discussion could just end now, but I don't think that would make krab happy.

Who, me?

Meta analysis is of course extremely dependent on the corpus of work used in the analysis.  What is included, what is left out, what assumptions are made.

Dr. Reiss does seem to have the background to understand statistics (PhD physics, with a focus on chaotic events). But he does seem to have missed some of the literature, given that he thinks his discussion of Type II errors in audio testing is a new thing when it's not (http://www.aes.org/e-lib/browse.cfm?elib=5265).

His take on Meyer and Moran is irritating, though (for the most part*) hardly novel.  Basically 'not sciency enough', but then again, M&M were merely allowing golden ears to do what they always do when they claim to hear 'hi rez' magic, while adding a blinding step.  Using the same kinds of recordings already reported to be 'magic' by them.  So who cares if the recordings  didn't actually have hi rez content?

My response to the overall 'implication' of this MA -- that in rare cases, some small number of people with documented training appear to be hearing 'something' -- remains what it always has been to such 'findings': so f*cking what? That finding is NOT what audiophile/high-end mavens and cheerleaders claim. They typically say the difference is 'obvious', 'my wife could hear it', etc. They consider themselves to be 'self trained', and use patently useless methodology. Dr. Reiss gives nary a word to that, despite the near certainty that Stereophile, Bob Stuart, and the rest of the cheering squad will tout these results without the qualifications required by science.

My other take on these results would be  along the lines of his discussion paragraph, where he basically says: more replication of 'interesting results' is needed.  Though he seems to assume that more rigorous work would merely strengthen the implication of his meta-analysis.  I'm not so sure.



*I was gobsmacked to see him recite this argument though:  " the encoding scheme on SACD obscures frequency components above 20 kHz and the SACD players typically filter above 30 or 50 kHz"
So THAT's a reason now  why M&M wasn't a good test of audiophile claims? Give me a f*cking break!
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: krabapple on 2016-06-30 19:47:14
Furthermore, with perceptual noise shaping, it can be the perceptual equivalent of flat quantization noise 120 dB down.
Even without taking perceptual noise shaping into account, the oft-repeated argument that human hearing can span a dynamic range of up to 120 dB, hence we need 20 bit, is misguided.

Agreed. The oft-quoted source for measurements showing that the ear has > 120 dB dynamic range is "Louis D. Fielder, May 1981, Dynamic Range Requirement for Subjective Noise Free Reproduction of Music". That argument and others are based on the idea that the human ear can hear a pure tone, at the frequency where it is most sensitive, with an amplitude of about -6 dB SPL. The threshold of pain is usually given as 120 dB SPL or so, so the ear's dynamic range must be ca. 126 dB and we need reproduction systems capable of doing this.

It may be oft-quoted  but it was superseded by Fielder 1994

http://www.aes.org/e-lib/browse.cfm?elib=10206

free download: 
http://www.aes.org/e-lib/inst/download.cfm/10206.pdf?ID=10206
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: krabapple on 2016-06-30 20:00:29
The oft-quoted source for measurements showing that the ear has > 120 dB dynamic range is "Louis D. Fielder, May 1981, Dynamic Range Requirement for Subjective Noise Free Reproduction of Music". That argument and others are based on the idea that the human ear can hear a pure tone, at the frequency where it is most sensitive, with an amplitude of about -6 dB SPL. The threshold of pain is usually given as 120 dB SPL or so, so the ear's dynamic range must be ca. 126 dB and we need reproduction systems capable of doing this.
I reviewed this article, and I think you remember it wrongly. Fielder actually refers to just noticeable noise levels, rather than just noticeable tone levels. So he actually does compare apples to apples.

In order to reach 118 dB of required dynamic range, he had to take the most extreme percussive classical music, close-mike it, and assume the most acute listener in detecting a noise floor increase. You'd have to have the very quietest part, where noise floor differences might be heard by a few people, before the loud part, because after the loud part nobody would detect such noise floor differences anymore.


In a *second*, more extensive study, published in 1985, Fielder used a mic at 'favored listening locations' in an audience, and still measured peaks ranging from 90-129 dB (classical topped out at 118 dB; it was a rock show that generated the 129).

please see:

http://www.aes.org/tmpFiles/elib/20160630/10206.pdf

which summarizes the results of his and others' work.

Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: ajinfla on 2016-06-30 22:03:21
The paper is open access, but can non-members see the comments?
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: pelmazo on 2016-06-30 22:18:12
in a *second*, more extensive study, published in 1985, Fielder used a mic at 'favored listening locations' in an audience, and still measured peaks ranging from 90-129dB (classical topped out at 118dB, it was a rock show that generated 129)
There certainly are people at rock concerts whose favored listening location leads to such levels, but I very much doubt they care much about just noticeable noise levels. ;-)

His approach apparently consists of finding the extremes in both directions, no matter whether they apply to the same case or not. The difference is the required dynamic range for him.

Well it certainly makes life simpler if your sound system has this dynamic range throughout, because it relieves the sound engineer of having to tweak the levels to use the available dynamic range wisely. You could basically calibrate your mic-pre with the mic's sensitivity, and regardless of what you record, it'll always be ok. With contemporary converter technology, I'd posit that we are approximately there, if his numbers are to be taken as the gospel.

It has very little to do with what you need in a carrier for distribution to the consumer, however. Whoever uses his numbers to justify HRA as a distribution format changes the context significantly.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: krabapple on 2016-06-30 22:57:58
The paper is open access, but can non-members see the comments?

If you mean the Reiss paper, I saw yours (and Reiss's reply), as a nonmember.

direct link to comments:
https://secure.aes.org/forum/pubs/journal/?ID=591
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Arnold B. Krueger on 2016-06-30 23:41:34
The oft-quoted source for measurements showing that the ear has > 120 dB dynamic range is "Louis D. Fielder, May 1981, Dynamic Range Requirement for Subjective Noise Free Reproduction of Music". That argument and others are based on the idea that the human ear can hear a pure tone, at the frequency where it is most sensitive, with an amplitude of about -6 dB SPL. The threshold of pain is usually given as 120 dB SPL or so, so the ear's dynamic range must be ca. 126 dB and we need reproduction systems capable of doing this.
I reviewed this article, and I think you remember it wrongly. Fielder actually refers to just noticeable noise levels, rather than just noticeable tone levels. So he actually does compare apples to apples.

Yes, I remembered the comparison incorrectly, thanks for the correction.  However, my mistakes don't make other people's choices correct. 
Quote
In order to reach 118 dB of required dynamic range, he had to take the most extreme percussive classical music, close-mike it, and assume the most acute listener in detecting a noise floor increase. You'd have to have the very quietest part, where noise floor differences might be heard by a few people, before the loud part, because after the loud part nobody would detect such noise floor differences anymore.

This procedure is only valid if the device being evaluated is distortion-free. That is, no dynamic compression, no modulation noise, no IM; in short, no nonlinear distortion of any kind, and no memory effects. These would be truly golden ears!

Since human ears are well known to have serious problems of these kinds and many more, Fielder's study relates to  mythical perfect ears, not actual human ears.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: greynol on 2016-06-30 23:50:36
he thinks his discussion of Type II errors in audio testing is a new thing
I'd like to point out that "concerns" about Type II errors seem to be part of the audiophile pundit narrative.  In so many cases audiophile pundits don't seem to understand that when they make claims of the existence of something it should be their job to show proof of their existence.  Rather, they spend an inordinate amount of time waving away any and all failed attempts at finding unicorns as flawed because no unicorns were found.

His take on Meyer and Moran is irritating, though (for the most part*) hardly novel.
Perhaps he could have left it out, just as his meta-analysis should not pay any lip service to the notion that the BS "typical" filters paper adequately demonstrated that people detected quantization effects.

My response to the overall 'implication' of this MA -- that in rare cases, some small number of people with documented training appear to be  hearing 'something'
Yes and this deserves attention, otherwise we're just waving hands about statistics over what  exactly?  I'll gladly call it as I see it: a speculative wank fest over nebulous fluff.  It certainly hasn't been shown that this ~55% success rate couldn't possibly have been caused by distortion occurring well downstream from the DSP chain.  And this is then used by pundits as evidence that there might be something to all those claims of veil-lifting, night and day differences emanating from tests that completely fail to account for false positives.  Give me a fucking break!

My other take on these results would be  along the lines of his discussion paragraph, where he basically says: more replication of 'interesting results' is needed.  Though he seems to assume that more rigorous work would merely strengthen the implication of his meta-analysis.
I think we need to question why this meta analysis came out not so long after the publication of an "award-winning" paper and subsequent follow-up release of new technology by Meridian.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Porcus on 2016-07-01 08:27:34
As one should know on this forum, there are quite a few studies that indicate that some listeners do detect 24 kHz - and at least one that reports three 28 kHz-capable ears:
(http://scitation.aip.org/docserver/fulltext/asa/journal/jasa/122/3/1.2761883.online.f2.gif) (http://scitation.aip.org/docserver/fulltext/asa/journal/jasa/122/3/1.2761883.online.f3.gif)
... at 100 dB (which means 98 dB over a 2 dB noise floor). Reference [38] in Reiss, open access: http://scitation.aip.org/content/asa/journal/jasa/122/3/10.1121/1.2761883

So if your favourite tune is a 100 dB pure-tone beep, then go ahead spend your golden ears on it. They won't last for too long.
(As far as I understand, Ashihara is mainly concerned about hearing damage, not listening pleasure.)
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: pelmazo on 2016-07-01 08:59:06
I think we need to question why this meta analysis came out not so long after the publication of an "award-winning" paper and subsequent follow-up release of new technology by Meridian.
Perhaps the author himself can contribute a hint (https://www.sciencedaily.com/releases/2016/06/160627214255.htm):
Quote
Audio purists and industry should welcome these findings -- our study finds high resolution audio has a small but important advantage in its quality of reproduction over standard audio content.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: krabapple on 2016-07-01 15:33:07
I think we need to question why this meta analysis came out not so long after the publication of an "award-winning" paper and subsequent follow-up release of new technology by Meridian.
Perhaps the author himself can contribute a hint (https://www.sciencedaily.com/releases/2016/06/160627214255.htm):
Quote
Audio purists and industry should welcome these findings -- our study finds high resolution audio has a small but important advantage in its quality of reproduction over standard audio content.


Small, yes (if at all).  'Important', that's his spin. 

AFAIK, the paper has been in the works for awhile.  
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Wombat on 2016-07-01 16:14:00
*I was gobsmacked to see him recite this argument though:  " the encoding scheme on SACD obscures frequency components above 20 kHz and the SACD players typically filter above 30 or 50 kHz"
So THAT's a reason now  why M&M wasn't a good test of audiophile claims? Give me a f*cking break!
This hints at the need for MQA.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: ajinfla on 2016-07-01 18:46:23
Quote
"Audio purists and industry should welcome these findings -- our study finds high resolution audio has a small but important advantage in its quality of reproduction over standard audio content."

Small, yes (if at all).  'Important', that's his spin. 

Yes. There is absolutely nothing in the paper's analysis, stated or otherwise, about any "audio advantage".
That is, I'm afraid, pure bollocks.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: StephenPG on 2016-07-01 18:50:35
Quote
"Audio purists and industry should welcome these findings -- our study finds high resolution audio has a small but important advantage in its quality of reproduction over standard audio content."


Straws, being desperately clutched at?
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: pelmazo on 2016-07-01 19:00:41
Straws, being desperately clutched at?
The university press release, which contains the quote, was picked up and repeated by numerous "sciency" internet media, not just the usual audiophile hangouts. Very few of the readers are likely to actually read the paper, let alone understand the matter.

The audiophile scene has learned to work the system. Similar to the climate change deniers.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: ajinfla on 2016-07-02 12:01:08
The press release is pure belief (or something else), unsupported by the paper. I asked him about this dichotomy and got nothing but waffling. Some crap about Hi-Rez sounding more like the "real thing". I guess things like tweeter IM, or any system/test-generated artifact, are "more real" to believers.
If the alarm bells weren't ringing before.....
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: pelmazo on 2016-07-02 15:10:05
The press release shows beyond reasonable doubt (IMHO) that Reiss is well aware of the politics of the matter, and is willing to cater to the audiophile agenda, no matter whether his research actually supports any of the audiophile claims.

The audiophile mouthpieces would of course have hailed this study no matter how Reiss himself advertises it, so he could have chosen to remain restrained and stick to the facts, without much difference to the audiophile reception. The bystanders, however, are more likely to accept the message when it comes from the researcher himself. This is what worries me. I think this amounts to a gradual breakdown of ethics in science. It discredits the researcher, even if the actual research should be sound.

But it also discredits science as a whole. It adds to the perception of a lot of people that science is merely a tool in an ideological war. That for every scientific study you can have a counter-study showing the opposite. That the right scientific result can be bought at a moderate price. And that the fake is so hard to detect that you effectively have to trust somebody, so it becomes a matter of subjective preference which version of the "truth" you believe, which of course perverts the very point of science.

Aren't we seeing everywhere how the truth is getting buried under an avalanche of bullshit?
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: ajinfla on 2016-07-02 15:28:44
The audiophile mouthpieces would of course have hailed this study no matter how Reiss himself advertises it, so he could have chosen to remain restrained and stick to the facts, without much difference to the audiophile reception.
Oh they hail, but with unintended consequences too: http://www.audiostream.com/content/its-official-people-can-hear-high-res#PmwEfJPiEh1cBdBR.97 (http://www.audiostream.com/content/its-official-people-can-hear-high-res#PmwEfJPiEh1cBdBR.97)
Can't wait to see the triumphant celebrations here http://www.stereophile.com/content/listening-143#kUicFj5BeZwHR4KW.97 (http://www.stereophile.com/content/listening-143#kUicFj5BeZwHR4KW.97)

cheers

AJ
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Porcus on 2016-07-02 18:39:39
I guess things like tweeter IM or any system/test generated artifact, is "more real" to believers.

Is it even controversial to claim that (edit) including ultrasonic components may cause audible IM distortion?
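Since IM keeps coming up: a minimal numpy sketch of the mechanism (the tone frequencies, nonlinearity coefficient, and sample rate are invented for illustration, not taken from any of the studies):

```python
import numpy as np

# Pass two inaudible tones (assumed 24 kHz and 26 kHz) through a mildly
# nonlinear "tweeter" (y = x + 0.1*x^2). The 2nd-order term produces a
# difference tone at 26 - 24 = 2 kHz, squarely in the audible band.
fs = 192_000                                   # sample rate high enough for both tones
t = np.arange(fs) / fs                         # 1 second of samples
x = np.sin(2 * np.pi * 24_000 * t) + np.sin(2 * np.pi * 26_000 * t)
y = x + 0.1 * x ** 2                           # weak 2nd-order nonlinearity

spectrum = np.abs(np.fft.rfft(y)) / len(y)     # normalized magnitude spectrum
freqs = np.fft.rfftfreq(len(y), d=1 / fs)      # 1 Hz bin spacing here
bin_2k = np.argmin(np.abs(freqs - 2_000))
print(f"level at 2 kHz: {spectrum[bin_2k]:.3f}")   # ~0.05; zero for the linear case y = x
```

With a perfectly linear system the 2 kHz bin stays empty; the component only appears once the nonlinearity is added.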
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: pelmazo on 2016-07-03 08:02:01
Oh they hail, but with unintended consequences too:
Don't know what consequences you mean. The comments?

Funny to see Lavorgna using the opportunity to make a completely inappropriate connection to a paper about Nyquist. He just hasn't a clue what he's talking about, and can't suppress the urge to use everything he can get hold of as a tool against the objectivist side. That's what we have come to expect of him.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: ajinfla on 2016-07-03 11:28:06
Is it even controversial to claim that (edit) including ultrasonic components may cause audible IM distortion?
No, it is mentioned in the paper...which concluded that there is no known cause/correlation for the discrimination.
Unfortunately the AES site is undergoing maintenance this weekend, so we'll have to wait for access to all the papers that made the final cut, to assess each in detail and see exactly what was "heard". I'm familiar enough with nonsense like Oohashi and BS, but not with some of the others, or they're just too far back to recall.

Don't know what consequences you mean. The comments?
Yes, mine, at the very bottom. No response of course. ;) ..from this Mike L http://www.audiostream.com/content/avsaix-high-resolution-audio-test#6o6sOfEk4wAhCBlV.97 (http://www.audiostream.com/content/avsaix-high-resolution-audio-test#6o6sOfEk4wAhCBlV.97)

cheers,

AJ
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: pelmazo on 2016-07-06 16:40:32
Has anybody had a closer look at the studies that formed the basis for this meta-analysis?

I just started looking into this, and the first (oldest) study makes me wonder already. The study was performed in Germany by Georg Plenge et al. and described in an AES journal paper (http://www.aes.org/e-lib/browse.cfm?elib=4000) in March 1980. From what I glean from the paper, this was a study made in the analog domain, so no sampling at any frequency was involved. Different bandwidth-limiting lowpass filters were compared with each other, to see whether people could distinguish between different cutoff frequencies and filter slopes. Seven different filters were used, with cutoff frequencies between 15 kHz and 20 kHz, and with orders between 7th and 13th, some with group delay correction and some without.

In other words, the bandwidths tested all fall into the range supported by even 44.1 kHz sampling. It doesn't seem to me that the study is relevant to the topic of HRA at all. I wonder what business it has appearing in this meta-analysis. What am I missing?

Amusingly, the authors conclude their study saying that 15 kHz bandwidth is quite enough for broadcast transmission; the properties of hearing don't justify going to 20 kHz bandwidth. I wonder how they would react if they learned that their study now, more than 35 years later, serves as part of a "proof" that even 20 kHz isn't enough.

Note that Reiss, in his Table 2, lists this study as having a high confidence of having detected differences, with 52.98% correct answers from 2580 total trials. In other words, it contributes substantially to his result, not least because it has the highest number of trials of all the studies.

Is this really as bad as it now looks to me? I can hardly believe my eyes! Please show me where my fault is!
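For what it's worth, the raw significance of the figures quoted above (52.98% of 2580 trials) is easy to check with scipy's exact binomial test; taken on their own they are well past p = 0.05, which helps explain the weight the study carries in the pooled result:

```python
from scipy import stats

# Is 52.98% correct out of 2580 trials distinguishable from coin flipping?
n = 2580
k = round(0.5298 * n)   # ~1367 correct answers
result = stats.binomtest(k, n, p=0.5, alternative="two-sided")
print(k, result.pvalue)  # p-value well below 0.05
```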
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Arnold B. Krueger on 2016-07-06 17:58:55
Oh they hail, but with unintended consequences too:
Don't know what consequences you mean. The comments?

Funny to see Lavorgna using the opportunity to make a completely unappropriate connection to a paper about Nyquist. He just hasn't a clue what he's talking about, and can't suppress the urge to use everything he can get hold of as a tool against the objectivist side. That's what we have come to expect of him.

Exactly. I first encountered him on the SP conference site. It's hard to win or change minds when they make up or misappropriate whatever facts they need at the moment.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: pelmazo on 2016-07-06 18:00:23
I answer to myself, since I now understand a bit better what data Reiss extracted.

The trials in the Plenge study were actually conducted between filter on and filter off, so higher bandwidth than 20 kHz actually was part of the test. Reiss only used the data from 3 out of 7 filters, namely those with 20 kHz cutoff, disregarding those with lower cutoff frequencies. So he didn't distinguish on filter type.

This isn't quite as hilarious as it appeared to me at first, but it is still dubious, for several reasons.

First, the data shows that one of the three filters had a significantly better detection probability than the other two. It was a 9th-order Cauer filter with group delay correction. This may be a hint that this particular filter had audible properties that didn't relate to the cutoff frequency; it could have been passband ripple or something. Bunching those together to form a single detection probability seems inappropriate to me, since Reiss appears to assume that the differences between those three filters are of no consequence for their audibility.

Second, the test signal was artificial and had very high harmonic content. Reiss believes that this test signal may not "capture whatever behavior might cause perception of high resolution content". I don't see him providing any argument for that. It looks curious to me, given that he doesn't claim to know what causes HRA to be perceived. Plenge et al. do provide some reasoning why they chose such a test signal, saying that it presents a worst-case scenario that mimics the situation when people listen to music with severely boosted treble. It certainly exaggerates the high frequencies, up into the ultrasonic range, so one would assume that there would be sufficient content to be perceived if it really should be perceivable.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: pelmazo on 2016-07-06 18:53:07
While I'm at it, I found that barely two years earlier, in April 1978, another study very similar to the one conducted by Plenge et al. was described in the JAES. It was conducted by Muraoka et al. in Japan and yielded very similar results. They also compared filters with different cutoff frequencies, from 14 to 20 kHz, against the unfiltered signal, and came to the conclusion that a filter cutoff frequency from 16 kHz upwards wasn't detected anymore. They used music instead of an artificial signal, but they also selected material with high harmonic content, i.e. crashing cymbals. The listeners were 30 audio professionals.

A follow-up study by Muraoka published in 1981 was included by Reiss. Even though the results of the previous study were included in the second paper for comparison, the results taken by Reiss only include those from the second study, which seems somewhat arbitrary to me. The reason would be the year of publication, but given that this is actually a series of studies with a very similar setup, I would have thought that this is particularly interesting for a meta-study.

A second experiment with sweeping tones was disregarded by Reiss.

An interesting side note is inspired by Art Dudley from Stereophile. He wrote (http://www.stereophile.com/content/listening-143): "If you want a good laugh, go on the Internet and dig up Vol.26 No.4 (April 1978) of the Journal of the Audio Engineering Society, in which various engineers weigh in on the topic of sampling-rate standardization. Two things emerge: The righteous insistence that the world will never require a sampling rate higher than 44.1kHz, and the complete and utter lack of reference to actual listening." This is the exact issue of the JAES where the abovementioned study appeared. Utter lack of reference to actual listening? Heck, the study even used real music, and it had 30 audio professionals actually listening to it! If this is any hint as to how well Art's visual perception works, I can understand why he's emphasizing hearing so much. He's got something to compensate for...
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Wombat on 2016-07-06 19:05:06
The Pras 2010 (http://www.aes.org/e-lib/browse.cfm?elib=15398) study was described very well by Werner here: So the results of this paper can only be tagged 'inconclusive'... (http://www.pinkfishmedia.net/forum/showpost.php?p=2307803&postcount=9)

Another interesting comment at Archimagos blog: On the meta-analysis paper: The Theiss/Hawksford study should have been eliminated... (https://archimago.blogspot.de/2016/07/musings-digital-interpolation-filters.html?showComment=1467591699806#c443032607935372257)
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: pelmazo on 2016-07-06 19:20:03
The Pras 2010 (http://www.aes.org/e-lib/browse.cfm?elib=15398) study was described very well by Werner here: So the results of this paper can only be tagged 'inconclusive'... (http://www.pinkfishmedia.net/forum/showpost.php?p=2307803&postcount=9)
Reiss actually lists it with a probability of 0.1462, which wouldn't qualify as reliable detection when taken on its own.

Quote
Another interesting comment at Archimagos blog: On the meta-analysis paper: The Theiss/Hawksford study should have been eliminated... (https://archimago.blogspot.de/2016/07/musings-digital-interpolation-filters.html?showComment=1467591699806#c443032607935372257)
I concur with that comment.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: rrod on 2016-07-06 21:07:45
My thread on head-fi went nowhere (surprise surprise), so might as well schlep into here.

I took Reiss's data at face value and plopped it into a random-intercept logistic regression, using "training" as a study-level predictor. Sure enough "training" is significant at an insanely low level, but you can kind of tell that just by eyeballing the data. Under this model, though, the intercept is not significant at the 0.05 level, so you cannot say that untrained individuals did significantly better than a coin flip. Sure enough the Theiss data stick out as having the biggest difference between the model fit and the observed proportion. If I split out the data by "training" and run separate beta-binomial models, there is again nothing to suggest that 0.5 is an unexpected result for a study of non-trained participants, and Theiss again sticks out like a sore thumb. There's really no ammo in here for casual hi-res proponents to use, unless they happen to do blind tests after training (whatever this training is)...
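For anyone who wants to poke at the numbers themselves, here is a deliberately simplified sketch of the pooling step: a fixed-effect, inverse-variance meta-analysis on the logit scale. This is not rrod's random-intercept model, and the per-study counts below are invented for illustration, not taken from the paper.

```python
import numpy as np
from scipy import stats

# Illustrative (correct, total) counts per study -- NOT the actual data.
studies = [(1367, 2580), (265, 480), (58, 96)]

logits, weights = [], []
for k, n in studies:
    p = k / n
    logits.append(np.log(p / (1 - p)))   # log-odds of a correct answer
    weights.append(n * p * (1 - p))      # inverse of the logit's approximate variance

logits, weights = np.array(logits), np.array(weights)
pooled_logit = np.sum(weights * logits) / np.sum(weights)
se = np.sqrt(1.0 / np.sum(weights))

# Two-sided z-test of H0: pooled success probability = 0.5 (logit = 0)
z = pooled_logit / se
p_value = 2 * stats.norm.sf(abs(z))
pooled_prop = 1 / (1 + np.exp(-pooled_logit))
print(f"pooled proportion {pooled_prop:.3f}, z = {z:.2f}, p = {p_value:.4f}")
```

The fixed-effect version ignores between-study variation entirely, which is exactly why a random-intercept model like rrod's is the more honest choice for data this heterogeneous.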
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: pelmazo on 2016-07-06 21:40:40
You seem to be quite adept at statistics. Maybe you can help me.

Mixing tests for frequency and bit depth seems to me like mixing tests for two different medications. If you squeeze out a bit of significance this way, what does this actually mean? That either medication is effective? Or both? Or that you still can't say?
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: rrod on 2016-07-06 22:50:32
You seem to be quite adept at statistics. Maybe you can help me.

Mixing tests for frequency and bit depth seems to me like mixing tests for two different medications. If you squeeze out a bit of significance this way, what does this actually mean? That either medication is effective? Or both? Or that you still can't say?

That sounds a bit mixed up, in that the ability to differentiate is an outcome rather than a treatment. The medication here would seem to be the training, and sample rate and bit-depth differentiations are cancers. It's certainly possible for one medication to affect multiple cancers, and it's certainly possible to model the possible remission outcomes ({0,0}, {1,0}, {0,1}, {1,1}) in a single model. He seems to punt on the bit-depth question here, though.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: includemeout on 2016-07-07 10:00:44
[...] the intercept is not significant at the 0.05 level, so you cannot say that untrained individuals did significantly better than a coin flip.
In anyone else's opinion (apart from them audiophools), that alone speaks volumes as to why this subject is, and always will be - to those interested in science and not myths, that is - the same old dead-horse flogging as before. But audiophools are, as usual, more than happy to try to raise said equine from the dead.  ::)

In my own case, if anything, it only makes me stand by my HA avatar and not change it in the foreseeable future. :-D

My thread on head-fi went nowhere (surprise surprise)
That, as any sensible person has come to learn, is just an utter exercise in futility, akin to the job of Sisyphus: the figure from Greek mythology who is forever pushing a boulder uphill, only to see it roll back down every time he nears the hilltop.

But looking at it from an audiophool's perspective, it's certainly not a pleasant experience to have pure statistics proving that all the time and money you've spent on that gold-plated coin is actually a total waste, and goes against proven scientific methods.

Hence their blind refusal to acknowledge such methods - as keenly as a medieval peasant would if shown a smartphone (or even a typewriter, for that matter) - while carrying on claiming that their own sample of the aforementioned coin has been providing them with more heads than tails, or vice versa.
So, in the end, their hocus-pocus cult will always win within their circles, according to their sad, self-indulgent opinion.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: pelmazo on 2016-07-07 10:56:52
The medication here would seem to be the training, and sample rate and bit-depth differentiations are cancers.
I had rather considered the bit depth and the samplerate as two different medications for the "illness" that you could perhaps call "audible non-transparency".

Quote
It's certainly possible for one medication to affect multiple cancers, and it's certainly possible to model the possible remission outcomes ({0,0}, {1,0}, {0,1}, {1,1}) in a single model. He seems to punt on the bit-depth question here, though.
If he'd focus on bit-depth, several of the studies he looked at would be irrelevant.

The overall question inherent in HRA is whether increasing the samplerate and/or bit depth audibly improves the quality of playback. The secondary question is whether the improvement, should there be any, is perceptible in enough situations and by enough people to make it worthwhile to support in the consumer market. The secondary question isn't addressed in the study, of course, but the press release makes it clear that it is on the researcher's mind.

My question is whether studies which administer two different medications, some only one, some both combined, to cure one illness, can be combined in a meta-study, and what this means for the result. I understand Reiss like this: He says that the medications work, but he doesn't say which one. That's a curious result to me. I wonder what it means. Does it mean that the medications work when administered together? Does it mean that either of the two works alone? Does it mean anything at all?

And another question is on my mind: Reiss tries to judge how much each individual study was subject to errors. Some studies are labelled as neutral, others as more prone to Type I errors, the remainder as prone to Type II errors. I don't see how this influenced his results, however. Can this be factored into the result somehow? Would it be wise to do so, given that this judgment will be somewhat speculative? What if the studies really had such errors, can the impact on the overall result of the meta-study be controlled?
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Wombat on 2016-07-07 17:09:02
So, in the end, their hocus-pocus cult will always win within their circles, according to their sad, self-indulgent opinion.
Now there is the study saying trained people heard a difference. Every audiophile sees himself as well trained, even if deaf as a piece of wood. All others are ignorant!

'I'm not saying it was aliens, but it was aliens!'
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: ajinfla on 2016-07-07 17:14:27
What if the studies really had such errors, can the impact on the overall result of the meta-study be controlled?
I would hope so.
Yet another issue I raised was the validity of the studies themselves, rather than just accepting them because they met the right statistical criteria. Maybe they were hearing IM. Is that what "Hi-Rez" training involves?
Of course believers will believe and encourage others to "just listen and decide for themselves" (the exact opposite of the controlled studies). Then it's entirely possible for 73-year-old audiophile believers to "hear" and vociferously defend the benefits of Hi-Rez without that horrific Redbook 20 kHz low-pass filtering, using acoustically large panel speakers like these (purple trace):
(http://cdn.soundandvision.com/images/archivesart/1200martin.3.jpg)
 ::)

cheers,

AJ
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: rrod on 2016-07-07 19:02:57
Re Type I and Type II errors: I think he's slightly misusing the terms, since they only apply when the modeling assumptions hold, whereas he seems to be addressing the possibility that those assumptions were violated.

As far as whether one half of the hi-res "cocktail" could be used to justify the effects of the other, I'm not so keen on that given that the theoretical mechanisms by which they should be detectable are completely different and I see nothing in the analysis that points to any kind of link/dependence. Ideally you'd use models that had something like an interaction term for sample rate x bit depth, but he didn't include anything like that. I'll work on something.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: krabapple on 2016-07-07 19:23:09

Yet another issue I raised was the validity of the studies themselves, rather than just accepting them because they met the right statistical criteria. Maybe they were hearing IM. Is that what "Hi-Rez" training involves?


Reiss does visit the topic of 'what was heard', in a way, when he discusses his Table 2B. What bugs me about *that* work is that he more or less subjectively bins the potential biases into 'low risk', 'high risk', and 'unclear'.

That's an important choice because it affects his argument about Type II errors.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Arnold B. Krueger on 2016-07-07 19:44:22
Mixing tests for frequency and bit depth seems to me like mixing tests for two different medications.

I see your point. To better fit the situation before us, we must further specify that the medications are very strongly different from each other. For example, they might be medications that target vastly different diseases in different parts of the body, or even diseases unique to different species of animal.

The reason for this is that bit depth and sample rate are orthogonal properties of a digital signal. By orthogonal, it is meant that their implementation and effects are usually completely different and independent from each other.  They can be interchanged or made dependent, but only by means of very intentional and complex processing.  Their basic nature is to be independent of each other. One can be dramatically changed without having any effect at all on the other.

Any meta study that conflates them would seem to have a fatal flaw.



Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Arnold B. Krueger on 2016-07-07 20:29:14
he thinks his discussion of Type II errors in audio testing is a new thing
I'd like to point out that "concerns" about Type II errors seem to be part of the audiophile pundit narrative. 

Audiophile pundits have an important challenge, which is to explain why the alleged sonic improvements that they repeatedly and loudly portray as obvious and highly significant either vastly diminish or completely disappear when good experimental controls are used.

The obvious solution is to attack the experiment. Criticizing bad design or execution, when done intelligently and honestly, can be very helpful.

The problem is that many recent examples of criticism of various others' experiments seem to have serious problems of their own. For example, Table 1 of the paper "A Meta-Analysis of High Resolution Audio Perceptual Evaluation" tries to categorize the test methodologies into categories called "ABX", "2IFC", "Same-Different" and others. The problem is that ABX may be performed as a 1IFC, a 2IFC and/or a Same/Different test, and that some of the tests called "ABX" were references to two different tests, one developed at Bell Labs in the early 1950s and the other developed independently in the 1970s. The author's confusion in the area of experimental methodology is further exemplified by the following passage:

"Authors have noted that ABX tests have a high cognitive load [11], which might lead to false negatives (Type II errors). An
alternative, 1IFC Same-different tasks, was used in many tests"

But can't ABX be used as a 1IFC Same/Different test? The ca. 1970 ABX test was initially implemented as a 1IFC same/different test. Provisions to perform 2IFC tests were added to the ABX implementation in order to help listeners improve the accuracy of their results over the results they were obtaining with 1IFC tests. This was a working strategy, to the extent that many listeners preferred the 2IFC option. 1IFC is often used to this day when listeners become highly accurate and don't need to refer to more than one sample interval per trial to obtain accurate results.

How can a test address its own shortcomings, particularly when the alleged shortcomings only came to light years if not decades after its publication?



Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Speedskater on 2016-07-07 21:00:14
So what is a "1IFC" test?
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: ajinfla on 2016-07-07 21:36:48
So what is a "1IFC" test?
One interval forced choice.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Arnold B. Krueger on 2016-07-07 22:02:43
So what is a "1IFC" test?

To expand on AJ's correct info, it is a test where the listener listens to one unknown, and is forced to respond that it is "A" or "B".

An early prototype of the ABX Comparator implemented this scheme. Listeners didn't like it because it depended on the listener's memory over a longer period of time. They correctly determined that listening for small differences in sound quality over longer periods of time is a more difficult test.

The next version of the ABX Comparator implemented the listener-preferred 2IFC test, with as many opportunities to listen to the known references and the unknown X and compare sounds as the listener felt the need for. This kept the time over which sounds had to be remembered as short as possible, since any of the known references and the unknown could be listened to in any order. Any sound could immediately follow any other, including itself.

Needless to say, I'm completely mystified by those who favor 1IFC over 2IFC, as adding 2IFC support to the ABX Comparator was highly preferred by all of the listeners due to the memory time issue.  In any case the basic listening task is Same/Different.

Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: pelmazo on 2016-07-08 11:08:20
The author's confusion in the area of experimental methodology is further exemplified by the following passage:

"Authors have noted that ABX tests have a high cognitive load [11], which might lead to false negatives (Type II errors). An alternative, 1IFC Same-different tasks, was used in many tests"

Reference [11] in this snippet from Reiss' Paper is of course the paper by Jackson/Capp/Stuart that has been discussed here (https://hydrogenaud.io/index.php/topic,107124.0.html) before. Precisely the part where they mention the alleged "cognitive load" problem seems to be thinly veiled revenge on Meyer/Moran. Some critique can also be found on the AES discussion page (https://secure.aes.org/forum/pubs/conventions/?ID=416). None of this, however, seems to have registered with Reiss.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Arnold B. Krueger on 2016-07-08 11:18:06

Reference [11] in this snippet from Reiss' Paper is of course the paper by Jackson/Capp/Stuart that has been discussed here (https://hydrogenaud.io/index.php/topic,107124.0.html) before. Precisely the part where they mention the alleged "cognitive load" problem seems to be thinly veiled revenge on Meyer/Moran. Some critique can also be found on the AES discussion page (https://secure.aes.org/forum/pubs/conventions/?ID=416). None of this, however, seems to have registered with Reiss.

Correct. Reiss's meta study seems to fail on the grounds that the tests that were used to make up his study are not consistent with each other. IOW it's not a collection of tests of apples, but rather a conflation of tests of just about every fruit and vegetable in the store.

I continue to assert that testing the audibility of various sample rates and word lengths is actually pretty simple at this point in life, but people appear to be afraid to use good procedures because they already know that good procedures don't give them the results that they need or desire.

BTW for an example of summarizing years of subjective testing of audio gear:

Ten years of A/B/X Testing

Experience from many years of double-blind listening tests of audio equipment is summarized. The results are generally consistent with threshold estimates from psychoacoustic literature, that is, listeners often fail to prove they can hear a difference after non-controlled listening suggested that there was one. However, the fantasy of audible differences continues despite the fact of audibility thresholds.

Author: Clark, David L.
Affiliation: DLC Design, Farmington Hills, MI
AES Convention: 91 (October 1991), Paper Number: 3167
Publication Date: October 1, 1991
Subject:Listening Tests
Permalink: http://www.aes.org/e-lib/browse.cfm?elib=5549
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: WernerO on 2016-07-08 11:22:58
The primary purpose of oversampling relates to improving dynamic range.
It is very feasible to have a digital filter that is very effective and also does not use oversampling. IOW it operates at the same clock frequency as the data it processes.

No, Arny, no.

When the aim is digital anti-imaging, aka reconstruction filtering, oversampling is mandatory. A digital filter cannot operate above Fs/2. A digital reconstruction filter's task is exactly to suppress everything above the original Fs/2, so Fs has to be increased before the filter can do this.

Quite how someone can survive in this hobby for decades without getting this eludes me. Do you actually know how sampling works, at all?

Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Arnold B. Krueger on 2016-07-08 11:37:18
The primary purpose of oversampling relates to improving dynamic range.
It is very feasible to have a digital filter that is very effective and also does not use oversampling. IOW it operates at the same clock frequency as the data it processes.

No, Arny, no.

When the aim is digital anti-imaging, aka reconstruction filtering, oversampling is mandatory. A digital filter cannot operate above Fs/2. A digital reconstruction filter's task is exactly to suppress everything above the original Fs/2, so Fs has to be increased before the filter can do this.

Quite how someone can survive in this hobby for decades without getting this eludes me. Do you actually know how sampling works, at all?

You appear to be answering a question that you made up.  I did not mention any specific kind of filter. I was talking about the most common primary purpose for oversampling.

More specifically, I was talking about this:

https://en.wikipedia.org/wiki/Oversampling

Quote
Resolution
In practice, oversampling is implemented in order to achieve cheaper higher-resolution A/D and D/A conversion.[1] For instance, to implement a 24-bit converter, it is sufficient to use a 20-bit converter that can run at 256 times the target sampling rate. Combining 256 consecutive 20-bit samples can increase the signal-to-noise ratio at the voltage level by a factor of 16 (the square root of the number of samples averaged), effectively adding 4 bits to the resolution and producing a single sample with 24-bit resolution.[3]
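The square-root arithmetic in that Wikipedia passage can be sketched numerically. This is a toy simulation of my own (not from the thread), modelling each conversion's error as uncorrelated uniform random noise:

```python
import math
import random

random.seed(0)

def noisy_sample(true_value, noise_amp):
    # One "conversion" with additive uniform noise (a toy quantization model).
    return true_value + random.uniform(-noise_amp, noise_amp)

def rms_error(n_avg, trials=2000, noise_amp=1.0):
    # RMS error of the mean of n_avg noisy conversions of the same value.
    total = 0.0
    for _ in range(trials):
        avg = sum(noisy_sample(0.0, noise_amp) for _ in range(n_avg)) / n_avg
        total += avg * avg
    return math.sqrt(total / trials)

single = rms_error(1)
averaged = rms_error(256)

# Averaging 256 samples cuts the RMS noise by about sqrt(256) = 16,
# which corresponds to log2(16) = 4 extra bits of resolution.
print(single / averaged)      # ~16
print(0.5 * math.log2(256))   # 4.0
```

Note the gain only holds if the noise is uncorrelated between samples; a deterministic error pattern would not average away, which is one reason real delta-sigma converters combine oversampling with noise shaping and dither.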

But since you answered your own question by citing yourself as the superior authority, please provide an independent authoritative source that supports your claim that all digital filters must be oversampled and there is no other purpose for doing so.






Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Jakob1863 on 2016-07-08 12:23:17
<snip>

My question is whether studies which administer two different medications, some only one, some both combined, to cure one illness, can be combined in a meta-study, and what this means for the result. I understand Reiss like this: He says that the medications work, but he doesn't say which one. That's a curious result to me. I wonder what it means. Does it mean that the medications work when administered together? Does it mean that either of the two works alone? Does it mean anything at all?

I think rrod´s answer is spot on.
If CD quality (i.e. 16 bit, 44.1 kHz) is considered to be transparent (audio perception wise), then anything "beyond CD quality" qualifies as "Hi-Res", be it "more bits" or "higher sample rate" or "more bits and higher sample rate". Usually one would not do a meta-analysis on every possible cause at once, but as the underlying null hypothesis is as described, it is justified.
And it is one of the reasons why Reiss strongly recommends further research.

Quote
And another question is on my mind: Reiss tries to judge how much each individual study was subject to errors. Some studies are labelled as neutral, others as more prone to Type I errors, the remainder as prone to Type II errors. I don't see how this influenced his results, however. Can this be factored into the result somehow? Would it be wise to do so, given that this judgment will be somewhat speculative? What if the studies really had such errors, can the impact on the overall result of the meta-study be controlled?

Again i think rrod´s correct in stating that Reiss´s usage of the terms might be slightly different from the normal meaning, at least wrt Type I errors, as some of the concerns would normally be treated in an evaluation of test validity.
Regarding Type II errors: it is usually one of the reasons to do a meta-analysis, because by combining the results (if applicable) the statistical power will be raised. High risk of Type II errors means low power, so by doing a meta-analysis the impact of Type II errors will be lowered overall.

Regarding Type I errors i have to dig deeper into his material, as he wrote about some recalculation and transformation of data. At least he mentioned several methods (and used these) to control the Type I error familywise (this concerns the multiple comparison problem), but that might have been only in use for his own subgroup analysis.
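For readers unfamiliar with familywise control, here is a minimal sketch (my own illustration with made-up p-values, not taken from Reiss's paper) of two standard corrections for the multiple comparison problem, Bonferroni and Holm:

```python
def bonferroni(p_values, alpha=0.05):
    # Reject H0 only for tests whose p-value clears alpha / m.
    m = len(p_values)
    return [p <= alpha / m for p in p_values]

def holm(p_values, alpha=0.05):
    # Holm step-down: compare sorted p-values against alpha / (m - k);
    # less conservative than Bonferroni, same familywise guarantee.
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for k, i in enumerate(order):
        if p_values[i] <= alpha / (m - k):
            reject[i] = True
        else:
            break  # once one comparison fails, all larger p-values fail too
    return reject

# Hypothetical p-values from four subgroup comparisons:
ps = [0.010, 0.013, 0.041, 0.200]
print(bonferroni(ps))  # [True, False, False, False]
print(holm(ps))        # [True, True, False, False]
```

Both keep the familywise Type I error rate at or below alpha; without such a correction, four comparisons at 0.05 each would carry a much higher overall false positive risk.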

Btw, you obviously missed the AES press release from 29.06.2016 ........
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: pelmazo on 2016-07-08 13:53:58
If CD quality (i.e. 16 bit, 44.1 kHz) is considered to be transparent (audio perception wise), then anything "beyond CD quality" qualifies as "Hi-Res", be it "more bits" or "higher sample rate" or "more bits and higher sample rate". Usually one would not do a meta-analysis on every possible cause at once, but as the underlying null hypothesis is as described, it is justified.
If that were true, it could be extended to even more factors associated with redbook CD, for example filter characteristics, jitter levels, error rates, etc.

I'm not a statistics expert, but the notion that this is a valid and justified approach doesn't seem very convincing to me.

Quote
And it is one of the reasons why Reiss strongly recommends further research.
Besides being self-evident, it's also a very cheap get-out-of-jail answer in such a discussion. This recommendation didn't prevent him from publicly drawing conclusions from his research that go markedly beyond what he has shown.

Quote
Again i think rrod´s correct in stating that Reiss´s usage of the terms might be slightly different from the normal meaning, at least wrt Type I errors, as some of the concerns would normally be treated in an evaluation of test validity.
Well, maybe, but does this change anything? Type I errors lead to false positives, and problems with test validity are prone to have the same effect.

Take the Plenge study which I referred to earlier as an example. One might suspect from their data that there was something special with the Cauer filter, even though none of the trials reached their significance level. That's a speculation, of course, as they haven't tried to get to the bottom of this, as far as one can tell today. We can merely try to make sense of their given data. The Cauer filter may have had something in its in-band behavior that allowed slightly easier detection. It could have been in-band ripple, or earlier rolloff. It is known that both can potentially be audible. That would constitute a potential for false positives, i.e. Type I errors in Reiss' parlance.

More generally, we do have some cumulative evidence that filter characteristics, for reconstruction filters, may in some cases introduce audible effects. I'm not talking only about elusive concepts like "time smearing", "pre-ringing" or phase distortion, but mainly about ordinary stuff like in-band ripple, stopband attenuation, slope, etc.

For the purposes of Plenge et al., they may have ignored the potential for Type I errors, since those wouldn't have affected their interpretation of their results. If they couldn't get significance in the possible presence of Type I errors, eliminating them would only have taken them further away from significance. Hence there was little incentive to investigate the reason for the slightly different result of the Cauer filter.

However, if there were such errors, it would matter for the type of analysis Reiss did 35 years later. They would increase the significance levels of his analysis.

Quote
Btw, you obviously missed the AES press release from 29.06.2016 ........
I had seen it. What makes it obvious to you that I must have missed it?
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: ajinfla on 2016-07-08 14:40:50
I had seen it. What makes it obvious to you that I must have missed it?
For the benefit of those who may not have: http://www.aes.org/press/?ID=362 (http://www.aes.org/press/?ID=362)
“Our study finds high-resolution audio has a small but important advantage in its quality of reproduction over standard audio content. "
I asked Reiss about the dichotomy between the paper findings of "discrimination" of "unknown cause" and this specious claim about "important advantage over standard audio" found.
He said he stands by the press release and leaves it to the reader whether "discrimination" of "unknown" as stated in paper, implies "advantage over standard audio" per press release, for "Audiophiles" who want music as "close to real thing".
I think it clearly removes any facade of impartiality and purely academic curiosity.

cheers,

AJ
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: pelmazo on 2016-07-08 15:25:12
Similar press releases have been issued by the AES and by Reiss' university. I think it is fair to assume that Reiss has written both, perhaps with the help of some staff member. It is fairly rare for the AES to announce a scientific paper with a press release. In this case, I assume, the fact that Reiss is the AES vice chair of publications had something to do with it.

It is of course also clear that press releases aren't peer reviewed, whereas the article was. This must also have had some impact on the actual wording used. I think the peer review could have been better in this case, but if Reiss had included in the paper the kind of conclusions he offered in the press release, the reviewers would have objected (I hope).
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Arnold B. Krueger on 2016-07-08 17:43:51

I had seen it. What makes it obvious to you that I must have missed it?

I think he's saying that he's so taken with Reiss's critically flawed article and hyped-up press release that he can't understand how anybody who read it wouldn't want to join him in bowing and scraping at the throne of high resolution as the panacea for what ails audio.

Quote
For the benefit of those who may not have: http://www.aes.org/press/?ID=362 (http://www.aes.org/press/?ID=362)
“Our study finds high-resolution audio has a small but important advantage in its quality of reproduction over standard audio content. "

I must have missed the preference testing. What I saw was very weak evidence that, on occasional sightings of a blue moon, people kinda mighta heard a weak difference that might actually be the result of choosing p = 0.05 as his criterion for statistical significance.
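To make the p = 0.05 point concrete, here is a small illustration of my own (not from the thread): if many independent comparisons are each tested at the 0.05 criterion and no real effect exists, the chance of at least one spurious "hit" grows quickly:

```python
def p_any_false_positive(n_tests, alpha=0.05):
    # Probability that at least one of n_tests true-null comparisons
    # comes out "significant" purely by chance at criterion alpha.
    return 1.0 - (1.0 - alpha) ** n_tests

# 1 test -> 0.05, 10 -> ~0.40, 20 -> ~0.64, 60 -> ~0.95
for n in (1, 10, 20, 60):
    print(n, round(p_any_false_positive(n), 3))
```

This is why a body of many small studies, each read at p = 0.05 without any familywise correction, is expected to contain some "positive" results even when nothing is audible.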

Quote
I asked Reiss about the dichotomy between the paper findings of "discrimination" of "unknown cause" and this specious claim about "important advantage over standard audio" found.

He said he stands by the press release and leaves it to the reader whether "discrimination" of "unknown" as stated in paper, implies "advantage over standard audio" per press release, for "Audiophiles" who want music as "close to real thing".

I think it clearly removes any facade of impartiality and purely academic curiosity.

Agreed. The reality is that there have been at least 4 attempts to go mainstream with some kind of high-resolution audio. All failed in the marketplace, probably with adverse financial and professional consequences.

(1) HDCD as developed and promoted by Prof. Keith O. Johnson and Michael "Pflash" Pflaumer of Pacific Microsonics Inc.

(2) HDCD as promoted by Microsoft, web site discontinued in 2005.

(3) DVD-A

(4) SACD

Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Jakob1863 on 2016-07-09 14:28:06
<snip>
If that would be true, it could be extended to even more factors associated with redbook CD, for example filter characteristics,  jitter levels, error rates, etc.

I think you miss the argument which is related to the concept of internal validity.
If an experiment tries to examine a difference in audibility between various formats, the independent variable has to be the format itself; all other effects are confounders that have to be blocked out or, if that is impossible, randomised.

So, every technically weak point that is related to the format counts, everything else does not.

Quote
I'm not a statistics expert, but the notion that this is a valid and justified approach doesn't seem very convincing to me.

I hope the explanation above helps a bit and it is basically not a question of statistics.
In which way the data should be transformed, what to do in the case of heterogeneous data, which underlying model to use for choosing test statistics, and how to correct for the multiple comparison problem: those are statistical questions.

Quote
Besides being self-evident, it's also a very cheap get-out-of-jail answer in such a discussion.

"cheap get-out-of-jail answer" might be a suitable phrase for forums, but in science? What else should an author do if a meta-analysis or systematic review did not find overwhelming evidence for a hypothesis?
Last time i looked it up, the percentage of Cochrane meta-analyses (systematic reviews) where the authors end with a recommendation for further research was roughly 50%.
Given the unwillingness in the scientific community to replicate experiments and (even more important) to publish the results of replications, it is no surprise.
I´d rather be surprised if just the audio field were an exception.


Quote
This recommendation didn't prevent him from publicly drawing conclusions from his research that go markedly beyond what he has shown.

I haven´t finished my analysis of Reiss´s paper yet, so i can´t at the moment judge if your assertion is correct. At a glance i think "markedly beyond" is exaggerated.

Quote
Well, maybe, but does this change anything? Type I errors lead to false positives, and problems with test validity are prone to have the same effect.

Both may favour getting wrong results or drawing wrong conclusions, but nevertheless it is better not to confuse these things. Internal validity means an experiment/test measures the effect it is intended to measure, and if that part is flawed, statistics can´t correct it.
Otoh if the statistical analysis is flawed or the test was underpowered, it is nothing that technology could help with.

Quote
More generally, we do have some cumulative evidence that filter characteristics, for reconstruction filters, may in some cases introduce audible effects. I'm not talking only about elusive concepts like "time smearing", "pre-ringing" or phase distortion, but mainly about ordinary stuff like in-band ripple, stopband attenuation, slope, etc.

Besides the fact that "in-band ripple" in the frequency domain is related to "pre-ringing" in the time domain: if it is not directly related to the format under test (meaning unavoidable within the limits of that format), then it should be treated as a confounder.

Quote
<snip>
However, if there were such errors, it would matter for the type of analysis Reiss did 35 years later. They would increase the significance levels of his analysis.

It depends, but i agree that experiments like Plenge´s (like every other that uses non-music stimuli) can´t really support a conclusion about preferences while listening to music delivered in high res.
But, as said before, i haven´t finished my analysis yet so can´t say which part of Dr. Reiss´s conclusion is backed up by the data.

Quote
Quote
Btw, you obviously missed the AES press release from 29.06.2016 ........
I had seen it. What makes it obvious to you that I must have missed it?

A sentence from your German blog:
Quote
Die AES hat meines Wissens auch keine Presserklärung darüber herausgegeben. (To my knowledge, the AES has not issued a press release about it either.)

Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: pelmazo on 2016-07-09 16:47:21
I think you miss the argument which is related to the concept of internal validity.
If an experiment tries to examine a difference in audibility between various formats, the independent variable has to be the format itself; all other effects are confounders that have to be blocked out or, if that is impossible, randomised.

So, every technically weak point that is related to the format counts, everything else does not.
Well, this applies to an individual study. For a meta-analysis, I would expect the "independent variable" to be the same for each input study. If it isn't, I no longer know what the result actually says, nor whether combining the individual results makes sense. That may be my fault, and perhaps someone manages to enlighten me here.

On top of that, it seems to me that following your argument would mean that some studies would have to be excluded from the meta-study. For example in the case of Jackson [11], it appears that their choice of filter characteristics and dithering method may well be the main factor in their result. So if you think those factors aren't part of the independent variable, which I would concur with, then there must be doubts regarding the internal validity of this test.

A similar argument can be made for other tests. In some cases there is a distinct possibility that the "other effects" you referred to may in fact not have been blocked out sufficiently well. Such other effects could be intermodulation, or artefacts of the filters, amongst other things. As I pointed out already, such potential problems may not have rendered the original test invalid in the sense that its result needs to be dismissed, because the test's own conclusion may not have been endangered by such an effect. However its use in a meta-study is a different case, where such errors can play a different role.

To make this somewhat theoretical argument a bit clearer, consider this thought experiment with 3 hypothetical studies:
You would probably agree that all three studies had a flaw. In each case the flaw increased the likelihood of false positives, i.e. there was a risk that the test concluded wrongly that high-res was audibly different from standard resolution. However, in none of the tests the effect was strong enough to make it cross the significance line, so it didn't change the test conclusion. All tests concluded that the null hypothesis couldn't be rejected. Hence there was no need and no incentive to investigate whether there had been any flaws in the test that could have led to false positives.

Now let's do a meta-analysis of the three tests combined. Let's suppose that the increased statistical strength obtained by the combined results now causes the significance line to be crossed. The conclusion would have to be that the null hypothesis can be rejected, i.e. one would have reason to believe that the subjects really could hear a difference between hi-res and standard res.

In a sense this result is correct, because in each case there was a factor that caused a just barely audible difference. If each of the tests had been done with more trials, each individual test might well have crossed the significance line by itself.

However, the conclusion would be wrong, because it would have been the result of the flaws in the tests. In each case the subjects' ability to distinguish the stimuli was due to secondary effects that compromised the test's internal validity.

So even though the individual tests reached the right conclusion, because the error wasn't strong enough to tip the balance, the meta-analysis arrives at a wrong result.
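The arithmetic behind this thought experiment can be sketched with exact binomial tail probabilities. The trial counts below are hypothetical numbers of my own choosing, not Reiss's data:

```python
from math import comb

def binom_tail(n, k):
    # Exact one-sided p-value: P(X >= k) for X ~ Binomial(n, 0.5),
    # i.e. the chance of k or more correct answers by pure guessing.
    return sum(comb(n, j) for j in range(k, n + 1)) / 2 ** n

# Three hypothetical studies, each with 25 correct answers out of 40 trials:
p_single = binom_tail(40, 25)   # each study alone misses the 0.05 line

# Naively pooling the raw trials: 75 correct out of 120.
p_pooled = binom_tail(120, 75)  # the pooled result crosses the 0.05 line

print(p_single > 0.05, p_pooled < 0.05)  # True True
```

The pooled p-value is smaller because statistical power grows with the total number of trials; the catch, as argued above, is that the pooled "significance" inherits at full strength any systematic flaw that was present in each study.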

Quote
"cheap get-out-of-jail answer" might be a suitable phrase for forums, but in science? What else should an author do if a meta-analysis or systematic review did not find overwhelming evidence for a hypothesis?
That's why I directed the argument at you. Reiss is of course entitled to recommend further research. I just would have wished he didn't oversell his own results.

Quote
Besides the fact that "in-band ripple" in the frequency domain is related to "pre-ringing" in the time domain: if it is not directly related to the format under test (meaning unavoidable within the limits of that format), then it should be treated as a confounder.
I fully agree. I think the studies used by Reiss should be put under some scrutiny regarding this possibility.

Quote
Quote
What makes it obvious to you that I must have missed it?
A sentence from your german blog:
Quote
Die AES hat meines Wissens auch keine Presserklärung darüber herausgegeben. (To my knowledge, the AES has not issued a press release about it either.)
That was my state of knowledge at the point in time when I wrote it. Shortly afterwards, but well before our exchange here, I became aware of the AES press release and duly amended my blog post with an update, which you appear to have missed. I chose to correct it in the form of an update and leave the original text there, for transparency.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: ajinfla on 2016-07-09 17:42:32
Paper:
In summary, these results imply that, though the effect is perhaps small and difficult to detect, the perceived fidelity of an audio recording and playback chain "is affected" by operating beyond conventional consumer oriented levels. Furthermore, though the causes are still unknown, this perceived effect can be confirmed with a variety of statistical approaches and it can be greatly improved through training.

PR:
said Reiss. “Our study finds high-resolution audio has a small but important advantage in its quality of reproduction over standard audio content."

At a glance i think "markedly beyond" is exaggerated.
Bullshit. Nothing in the papers mined statistics of dubious results, suggests any "advantage". That's pure believer wishful thinking, at best.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Thad E Ginathom on 2016-07-09 19:28:32
As a layman with no knowledge of statistics and little of scientific method, would I be far off in understanding this meta-analysis as amounting to something like...

have nothing new to offer, so for some reasons or none, one might as well gather together and quote other work on what people call High-Resolution Audio. None of it conclusively shows any advantages, except, perhaps, to certain rare beings with unusual hearing ability, but hey, let's go on talking about it.

If the aim was to keep people talking about it, it seems to have been effective.

Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Wombat on 2016-07-09 20:07:44
See, the paper reached its goal and now makes people accept that some people have unusual abilities.
Audiophiles have unusual abilities and have many wordings for what they hear; others are ignorant.
The ability is nowhere explained at a level i understand. It may only be that these people catch some distortion in the audible band only present with high sample rates.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: greynol on 2016-07-09 23:02:31
Quote
Besides being self-evident, it's also a very cheap get-out-of-jail answer in such a discussion.
"cheap get-out-of-jail answer" might be a suitable phrase for forums, but in science?
"Science"?

Even categorizing this propaganda as bad science would be overly charitable.

Quote
At a glance i think "markedly beyond" is exaggerated.
Funny you should use the word exaggerated, and by funny, I really mean stupid.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: krabapple on 2016-07-10 05:18:04
As a layman with no knowledge of statistics and little of scientific method, would I be far off in understanding this meta-analysis as amounting to something like...

have nothing new to offer, so for some reasons or none, one might as well gather together and quote other work on what people call High-Resolution Audio. None of it conclusively shows any advantages, except, perhaps, to certain rare beings with unusual hearing ability, but hey, let's go on talking about it.

If the aim was to keep people talking about it, it seems to have been effective.

Archimago expressed (http://archimago.blogspot.com/2016/07/musings-digital-interpolation-filters.html#more) my feelings almost exactly (bolding mine)

Quote
Seriously folks, if we're trying to decide whether a high-res album sounds different from a CD 16/44 (of the same mastering of course), it should not need a meta-analysis. As a consumer, I can go on HDTracks this morning and see that a 24/192 version of Eric Clapton's recent album I Still Do costs US$27.98. And the CD on Amazon is US$10.90. It looks like both the CD and download are from the same DR11 master. The question for me in considering the purchase is not whether they may sound different, but rather does this difference justify a 250% markup!? In this context, does a 52.3% accuracy rate in a research setting sound like a valuable proposition to grab the high-resolution version?

You know guys, the fact that we're even going through the contortions of complex statistical analysis after >15 years since the release of SACD and DVD-A clearly indicates that those who claim to hear "obvious" differences are plainly wrong. When a meta-analysis is used in science to gather data far and wide to find and declare statistical significance of this kind of tiny magnitude, it just means that the "signal to noise" ratio is poor and that the magnitude of the effect is obviously academic. The author stated just as much: "In summary, these results imply that, though the effect is perhaps small and difficult to detect, the perceived fidelity of an audio recording and playback chain is affected by operating beyond conventional consumer oriented levels." Notice the careful wording... In no way does it imply that these "small" and "difficult to detect" differences are necessarily "better" as audiophiles always desire to promote. I like this wording and think Dr. Reiss did a fantastic job putting this together. By the way, these results are of no surprise as we've been talking about this for years!
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Thad E Ginathom on 2016-07-10 10:46:27
I'll take that as a yes! Thanks!

See, the paper reached its goal and now makes people accept that some people have unusual abilities.
Audiophiles have unusual abilities and have many wordings for what they hear; others are ignorant.
The ability is nowhere explained at a level i understand. It may only be that these people catch some distortion in the audible band only present with high sample rates.

I was not equating rare beings with audiophiles. Audiophiles may be rarer than the makers of expensive kit and the publishers of over-priced music formats would like them to be, but they are not rare enough!
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: pelmazo on 2016-07-10 11:36:16
Archimago expressed (http://archimago.blogspot.com/2016/07/musings-digital-interpolation-filters.html#more) my feelings almost exactly (bolding mine)

Quote
I like this wording and think Dr. Reiss did a fantastic job putting this together. By the way, these results are of no surprise as we've been talking about this for years!
Archimago should perhaps have added that the wording he likes is the one in the paper. I'm quite sure he wouldn't approve of the wording in the press release.

It is of course the press release that will have the greatest impact with the public. A cursory search of the internet reveals how many media outlets have promptly picked this up. It doesn't look as if many of them had bothered to read the actual paper beyond the abstract, let alone put it under scrutiny. The almost universal message is that the paper supports the claims of audiophiles and the respective manufacturers. This applies even to the media that aren't associated with the audiophile sector.

Seen from this angle, the paper merely acts as a pretext for the propaganda that gets disseminated via the media. The paper's actual content isn't particularly important, as long as it appears to point in the "right" direction. Reiss obviously knows how scientific reporting works, and how the audiophile scene works, and uses this knowledge to further the audiophile agenda. This is what I meant when I wrote earlier, that the audiophile scene has learnt how to work the system.

Given what Archimago quite rightly pointed out, it should come as no surprise that the audiophile scene craves desperately for scientific confirmation of its daydreams. The fact that this has been brewing for 20 years with no clear result is an embarrassment of the highest order when you try to build a mass market on it. No wonder there are attempts like the ones we are discussing here.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Wombat on 2016-07-10 15:57:28
Quote from: Archimago
Seriously folks, if we're trying to decide whether a high-res album sounds different from a CD 16/44 (of the same mastering of course), it should not need a meta-analysis. As a consumer, I can go on HDTracks this morning and see that a 24/192 version of Eric Clapton's recent album I Still Do costs US$27.98. And the CD on Amazon is US$10.90. It looks like both the CD and download are from the same DR11 master. The question for me in considering the purchase is not whether they may sound different, but rather does this difference justify a 250% markup!? In this context, does a 52.3% accuracy rate in a research setting sound like a valuable proposition to grab the high-resolution version?
Quote from: Wombat
I can imagine that if the CD was done with best quality in mind, it might even sound better than the hi-bitrate download, because of a watermark used for UMG-related labels and online distribution.
The Clapton release almost certainly has that watermark.

I was not equating rare beings with audiophiles. Audiophiles may be rarer than the makers of expensive kit and the publishers of over-priced music formats would like them to be, but they are not rare enough!
Audiophiles that read the Reiss paper will feel rare enough :)
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: ajinfla on 2016-07-10 17:45:45
Audiophiles that read the Reiss paper will feel rare enough :)
This (https://hydrogenaud.io/index.php/topic,112204.msg924909.html#msg924909) was not a hypothetical example.
From a self-assessed "objective" audiophile, who finds Hi Re$ "based on something real, not snake oil, placebo, self delusion, etc." and "listens" "as carefully as possible".
No kidding. ::)

cheers,

AJ
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Jakob1863 on 2016-07-11 09:16:57
<snip>
Well, this applies to an individual study. For a meta-analysis, I would expect the "independent variable" to be the same for each input study. If it isn't, I don't know anymore what the result actually says, nor whether the combining of the individual results makes sense. That may be my fault, and perhaps someone manages to enlighten me here.

As stated before, in some cases combining results in the way Reiss did is justified. If the CD format is really transparent, it doesn't matter if there are more bits or higher sampling frequency or both, because more than "transparent" is not possible. If there is nevertheless an effect, then it is a useful result. Why should you expect a meta-analysis to deliver an answer to any further question?
As said before, it is quite common that authors of meta-analyses (or systematic reviews) strongly recommend further research.

Quote
<snip>
However, the conclusion would be wrong, because it would have been the result of the flaws in the tests. In each case the subjects' ability to distinguish the stimuli was due to secondary effects that compromised the test's internal validity.

So even though the individual tests reached the right conclusion, because the error wasn't strong enough to tip the balance, the meta-analysis produces a wrong result.

For all these reasons (to address the question you have to (re)read all the papers) I still haven't finished my analysis of Dr. Reiss's meta-analysis, and I am surprised that so many others who obviously haven't done what is needed (judging by the contents of their posts) draw "categorical conclusions".

Quote
That's why I directed the argument at you. Reiss is of course entitled to recommend further research. I just would have wished he didn't oversell his own results.

I am a bit puzzled at this point; if you didn't reanalyze the papers used and didn't redo the statistics to see what effect inclusion or exclusion of various papers might have, how do you know that he "oversells his results"?

Quote
That was my state of knowledge at the point in time when I wrote it.

That is why I wrote that you "missed". You could and should have known better.

Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Jakob1863 on 2016-07-11 10:04:31
Archimago expressed (http://archimago.blogspot.com/2016/07/musings-digital-interpolation-filters.html#more) my feelings almost exactly (bolding mine)

That raises, IMHO, the interesting question of which percentage/markup relation one should consider "justifying". Would 75% qualify, or is 100% needed? ;)

I'd recommend that everyone listen for himself to see if "hi-res" is useful and any markup justified.
Even if other people somewhere, somehow were able to differentiate between "cd-format" and "hi-res" to 100%, I wouldn't buy "hi-res" material without evaluating it myself.

Quote
It is of course the press release that will have the greatest impact with the public. A cursory search of the internet reveals how many media outlets have promptly picked this up. It doesn't look as if many of them had bothered to read the actual paper beyond the abstract, let alone put it under scrutiny. The almost universal message is that the paper supports the claims of audiophiles and the respective manufacturers. This applies even to the media that aren't associated with the audiophile sector.

That's the way it is. And it was the same when Meyer/Moran came up with their publication; it was just the other camp of believers.

Everybody addressing criticism to Dr. Reiss's meta-analysis should reread his own comments on Meyer/Moran and ask himself whether he was nearly as critical back then. And Meyer/Moran was really seriously flawed; as I said back then, after reading it I didn't understand how it could pass the peer review process at the JAES.
As stated before, their hypothesis might nevertheless be true, but the validity of their study was questionable to a degree where no further conclusions are warranted.

Quote
Given what Archimago quite rightly pointed out, it should come as no surprise that the audiophile scene craves desperately for scientific confirmation of their daydreams. The fact that this is brewing for 20 years with no clear result, is an embarassment of the highest order when you try to build a mass market on it. No wonder there are attempts like those whe are discussing here.

I don´t see that the "mass market" is really influenced by anything like this. Audiophiles are just a very small subgroup....
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Arnold B. Krueger on 2016-07-11 12:29:41
<snip>
Well, this applies to an individual study. For a meta-analysis, I would expect the "independent variable" to be the same for each input study. If it isn't, I don't know anymore what the result actually says, nor whether the combining of the individual results makes sense. That may be my fault, and perhaps someone manages to enlighten me here.

As stated before, in some cases combining results in the way Reiss did, is justified. If the CD-format is really transparent,

Repeating blind support for bad science doesn't make bad science right or even less wrong.

There's no way that conflating orthogonal and contradictory parameters as Reiss did is justified.

Furthermore, you apparently have unilaterally decided to change the topic, because the subject paper does not deal with the question of whether or not the CD format is really transparent.

Please let me remind you that its title is "A Meta-Analysis of High Resolution Audio Perceptual Evaluation". Whether or not High Resolution Audio (whatever that is) is transparent was not covered, either.

Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Jakob1863 on 2016-07-11 13:06:46
Please reread my posts, as I did address that point already. What was meant by "combining" is that combining studies related to "more bits" or "higher sampling rate" or "more bits and higher sampling rate" is justified under the constraints already explained.

Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Arnold B. Krueger on 2016-07-11 13:42:27
Please reread my posts, as i did address that point already.

Hence my comment about repeating false claims not making bad science right or even less wrong.

Quote
What was meant by "combining" is, that combining studies related to "more bits" or "higher sampling rate" or "more bits and higher sampling rate" is justified under the constraints already explained.

So some keep repeating, again and again. Looks to me like phrases such as "higher sampling rate" and "more bits" are just catch phrases with no actual physical meaning to many people.

One thing is clear - some people know so little about evaluating the transparency of a medium or format that they can't detect the absence of it in descriptions of tests or even just the title of a paper.

Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Jakob1863 on 2016-07-11 14:24:27
I´d say, a bit less drama and provision of more arguments instead could help. ;)
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Arnold B. Krueger on 2016-07-11 14:35:42
I´d say, a bit less drama and provision of more arguments instead could help. ;)

There's no incentive to do the work required to form additional arguments when the ones that have been provided are dismissed out of hand.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Jakob1863 on 2016-07-11 14:53:01
I totally agree..... ;)
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Arnold B. Krueger on 2016-07-11 16:06:25
I totally agree..... ;)

Yes, your posts repeatedly state their agreement with the Reiss anti-science, pro-placebophile PR campaign, but add no supporting or clarifying arguments to the weak ones that may exist or are obviously missing from the Reiss paper.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: greynol on 2016-07-11 16:10:52
Did you really expect anything more from this intellectually dishonest placebophile apologist?
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Arnold B. Krueger on 2016-07-11 16:40:34
Did you really expect anything more from this intellectually dishonest placebophile apologist?

Good point. One of his more interesting tells is in this statement:

"Even if other people somewhere, somehow were able to differentiate between "cd-format" and "hi-res" to 100%, I wouldn't buy "hi-res" material without evaluating it myself."

The obvious implication is that the author has not evaluated his own ability to differentiate between "cd-format" and "hi-res", because if he had, he would have written that he had properly evaluated the difference himself as justification for some course of action.

For the record, I have done dozens of DBTs and evaluated my own ability to differentiate between "cd-format" and "hi-res". It was null and it has been null for all the persons that I have seen testing it in the same way.
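The binomial arithmetic behind such a null result is easy to check for yourself. Below is a minimal sketch (the function name and the trial counts are illustrative, not taken from any post above) of the one-sided binomial test conventionally applied to an ABX score:

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided p-value for an ABX run: the probability of scoring
    at least `correct` out of `trials` by pure guessing (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 12 of 16 correct is the smallest score reaching the conventional p < 0.05
print(round(abx_p_value(12, 16), 4))  # → 0.0384
```

A "null" result simply means the listener's scores never climb meaningfully above this guessing baseline over repeated runs.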

Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: ajinfla on 2016-07-11 16:44:11
If the CD-format is really transparent..
Which of the selected tests were for that question, Jakob2?
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Wombat on 2016-07-11 16:46:23
I'm not saying it is audible, but it is audible.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: pelmazo on 2016-07-11 17:37:37
As stated before, in some cases combining results in the way Reiss did, is justified. If the CD-format is really transparent, it doesn´t matter if there are more bits or higher sampling frequency or both, because more than "transparent" is not possible. If there is nevertheless an effect then it is an useful result.
And what exactly does that result mean?

If the research question was to find whether the CD format was transparent, then several of the studies used by Reiss should have been excluded, because they used test conditions (filters, dithers, etc.) which are neither inherent nor typical of the CD format, and which may have had audible consequences with the potential for false positives. But even if this hadn't been a problem, a positive result would only have meant that in some circumstances the CD format wasn't completely transparent. It still wouldn't be clear whether those circumstances have any relevance for the listening experience encountered in practice, nor would it be clear whether any of the HiRes formats would be an improvement. In other words, even in the most optimistic case, assuming no ill effects from the base studies, it would be inappropriate to draw any other conclusion than that the CD format isn't transparent in all possible situations.

In this form this conclusion wouldn't be a surprise, either. Meyer/Moran, for example, noted in their study that the noise floor of the CD format becomes noticeable at very high playback volume settings. It wouldn't have taken a Reiss meta-study to come to the conclusion that you can contrive circumstances that allow the limitations of the CD system to be heard. If that had been the question, the Reiss study would have been superfluous. On top of that, even the result given in the paper would have been an overinterpretation, let alone the press release.

Quote
Why should you expect that a meta-analysis has to deliver an answer to any further question?
In the case we have here, the study is useless if the answer is limited to what you posit.

Quote
As said before, it is quite common that authors of meta-analysis (or systematic reviews) do strongly recommend further research.
Well, in this (your) interpretation all of the work is still ahead of him. The meta study didn't even help with identifying where further research is likely to be most profitable.

Quote
I am a bit puzzled at this point; if you didn´t reanalyzed the papers used and didn´t do the statistics to see which effect inclusion or exclusion of various papers might have, how do you know that he "oversells his results" ?
This reveals itself quite readily by just comparing the results stated in the paper with the conclusions offered in the press release. No detailed investigation is needed for something that obvious.

Quote
That is why I wrote that you "missed". You could and should have known better.
I'm not prepared to take a lecture from you in this regard.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: KozmoNaut on 2016-07-11 18:59:15
For the record, I have done dozens of DBTs and evaluated my own ability to differentiate between "cd-format" and "hi-res". It was null and it has been null for all the persons that I have seen testing it in the same way.

Hell, considering that most people have trouble differentiating between reasonable-bitrate VBR LAME MP3 and a CD-quality lossless source, or even a "hi-res" one, I simply don't understand why people can seriously expect to hear a difference between CD quality and "hi-res".

It's completely incongruous.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Wombat on 2016-07-11 19:13:40
The ringing my friend, the ringing! Luckily the cure for ringiphobia (fear of ringing) is soon awaiting FDA approval in the form of HiRes, dsd or MQA.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: KozmoNaut on 2016-07-11 19:26:27
Don't forget the jitter! Because apparently hi-res, DSD and MQA somehow lessen jitter as well.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Arnold B. Krueger on 2016-07-11 20:01:25
For the record, I have done dozens of DBTs and evaluated my own ability to differentiate between "cd-format" and "hi-res". It was null and it has been null for all the persons that I have seen testing it in the same way.

Hell, considering that most people have trouble differentiating between reasonable-bitrate VBR LAME MP3 and a CD-quality lossless source, or even a "hi-res" one, I simply don't understand why people can seriously expect to hear a difference between CD quality and "hi-res".

It's all about science, which includes giving one heck of a try to experiments whose outcome you may strongly expect to be null.

Besides, tests like these can be among the easiest of all DBTs to do, even by yourself. That is, unless you get caught up in making your own hi-rez recordings from scratch, which I did as well.

At the time I didn't trust the commercial so-called hi-rez products, and thus avoided the conundrum that Meyer and Moran found themselves in 5 years later, when they made the mistake of believing the claims of a fraudulent segment of the audio industry.

The storm of undeserved abuse that they've taken from golden ears including Reiss shows us that blaming the victim is not at all beneath them.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: KozmoNaut on 2016-07-11 20:39:29
I think the M&M paper made a very interesting unintentional point regarding the provenance of most so-called hi-res recordings, something that is far too often ignored in all the victim blaming.

So far, the only hi-res proponent I have the slightest bit of admiration for is Dr. Mark Waldrep, simply because he doesn't believe in voodoo, tries to debunk as much industry bullshit as he can, and is completely honest about not being able to hear a difference. His approach to hi-res seems to basically be: "better safe than sorry; let's record as much of the sound as possible, even the stuff we can't hear, and we'll figure out if it actually matters later."
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Arnold B. Krueger on 2016-07-12 00:08:17
I think the M&M paper made a very interesting unintentional point regarding the provenance of most so-called hi-res recordings, something that is far too often ignored in all the victim blaming.

It is obvious that there was, and may still be, a common practice of selling recordings with reduced resolution as high resolution music and files. This seems to be deceptive and fraudulent. To the best of my knowledge there are a goodly number of high resolution advocates and promoters who are familiar with this, but to the best of my knowledge they have shown no interest in holding any of the guilty parties responsible. Note that this practice was highly adverse to their stated goal of promoting high resolution audio.

Quote
So far, the only hi-res proponent I have the slightest bit of admiration for is Dr. Mark Waldrep, simply because he doesn't believe in voodoo, tries to debunk as much industry bullshit as he can, and is completely honest about not being able to hear a difference. His approach to hi-res seems to basically be: "better safe than sorry; let's record as much of the sound as possible, even the stuff we can't hear, and we'll figure out if it actually matters later."

In late 2014 Dr. Waldrep prepared some files for people to use to demonstrate the benefits of high resolution audio to themselves. This post on the AVS forum was part of the promotion of this effort: http://www.avsforum.com/forum/91-audio-theory-setup-chat/1598417-avs-aix-high-resolution-audio-test-take-2-a.html#post25638361 (http://www.avsforum.com/forum/91-audio-theory-setup-chat/1598417-avs-aix-high-resolution-audio-test-take-2-a.html#post25638361)

The post linked admits that the first generation of these files, distributed at Dr. Waldrep's request and with his cooperation, contained an audible "tell" in the form of a level mismatch that had nothing to do with high resolution audio.

The post indicates that there was a second generation file correcting this problem, which I can confirm. What I don't see is any discussion of the second "tell", in the form of a variable channel timing mismatch that corrupted these second generation files.

AFAIK Dr. Waldrep was made aware of this problem in writing, but I know of no action that was taken to correct the situation.

I personally ABXed some of these files and found that the "tell" was indeed audible. Again, problems like these run counter to the desires of any sincere advocate of high resolution audio, by making any positive results obtained by listening to these files possibly (even likely) due to the audible timing error.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: KozmoNaut on 2016-07-12 08:29:28
Well darn, and here I thought he was one of the sole lights in the muck of audiophilia :-(
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: pelmazo on 2016-07-12 08:42:17
That raises, IMHO, the interesting question of which percentage/markup relation one should consider "justifying". Would 75% qualify, or is 100% needed? ;)
Isn't that the same question that Archimago asked, just put in other terms?

Quote
I'd recommend that everyone listen for himself to see if "hi-res" is useful and any markup justified.
Even if other people somewhere, somehow were able to differentiate between "cd-format" and "hi-res" to 100%, I wouldn't buy "hi-res" material without evaluating it myself.
That is an impractical and unrealistic proposal, as reasonable as it may at first look. I very much doubt that people will compare several different versions of the same titles before they buy (except for a few, obviously). If the kind of marketing that we are experiencing is any hint, then people will buy records predominantly based on what they believe, not what they hear.

Quote
Thats the way it is. And it was the same when Meyer/Moran came up with their publication, it was just the other camp of believers.
The similarity you try to suggest doesn't go very far. Perhaps you should do some research first: did M&M (or AES) issue a press release? Did they try to convey a different interpretation of their result than what they had written in their paper?

The media are of course what they are. Their simplifications go either way. The crucial point here is how the author contributes to this. Reiss differs rather a lot from M&M in this regard.

Quote
Everybody addressing criticism to Dr. Reiss's meta-analysis should reread his own comments on Meyer/Moran and ask himself whether he was nearly as critical back then. And Meyer/Moran was really seriously flawed; as I said back then, after reading it I didn't understand how it could pass the peer review process at the JAES.
As stated before, their hypothesis might nevertheless be true, but the validity of their study was questionable to a degree where no further conclusions are warranted.
I think the criticism against M&M is to a large extent unfair. While they certainly haven't produced a study that's beyond reproach, they have shown quite convincingly and conclusively what they set out to show: That the audiophile claims regarding the inadequacy of the CD format were not true. Their point wasn't a technical one about the CD format, but a check on the credibility of audiophile claims. As such, this result still stands and IMHO will continue to stand.

Reiss picks up a criticism from Jackson et al. when speculating about alleged "cognitive load" problems with ABX testing. Neither has shown anything here; it is mere speculation, and quite disingenuous speculation to boot. They actually seem to criticise a very old form of ABX testing, apparently unwilling to note that the problem they speculate about was addressed a long time ago. This sort of blinkered criticism almost inevitably raises suspicions of malicious intent.

Quote
I don´t see that the "mass market" is really influenced by anything like this. Audiophiles are just a very small subgroup....
Can it have escaped you that HiRes is being introduced to the mass market right now? The marketing activities to this end are substantial. Perhaps you should leave your audiophile sandbox and look around.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: KozmoNaut on 2016-07-12 08:52:58
I'd recommend that everyone listen for himself to see if "hi-res" is useful and any markup justified.
Even if other people somewhere, somehow were able to differentiate between "cd-format" and "hi-res" to 100%, I wouldn't buy "hi-res" material without evaluating it myself.
That is an impractical and unrealistic proposal, as reasonable as it may at first look. I very much doubt that people will compare several different versions of the same titles before they buy (except for a few, obviously). If the kind of marketing that we are experiencing is any hint, then people will buy records predominantly based on what they believe, not what they hear.

Nevermind the sad fact that the masters used for the "hi-res" versions are often tweaked compared to the masters used for CDs, giving an audible difference that people will attribute to the format. They could use the exact same master for both versions, but often they don't, either due to incompetence or malicious intent.

The only way to do a proper evaluation is to take a hi-res source and downsample it to CD quality yourself, so you can be sure of the provenance. Not very many people have the skills or inclination to do this.
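The "downsample it yourself" step can be sketched in a few lines. This is a minimal illustration, not a mastering-grade tool: the function name is mine, it assumes NumPy/SciPy, uses SciPy's polyphase resampler for the anti-aliased rate conversion, and applies simple TPDF dither for the 16-bit quantization.

```python
# Sketch: resample 96 kHz float samples to 44.1 kHz, then quantize to
# 16 bits with TPDF dither (illustrative, not a mastering-grade chain).
import numpy as np
from math import gcd
from scipy.signal import resample_poly  # polyphase FIR resampler

def to_cd_quality(x: np.ndarray, src_rate: int = 96000) -> np.ndarray:
    """Return int16 samples at 44.1 kHz from float samples in [-1, 1]."""
    g = gcd(44100, src_rate)                        # 96000 -> up=147, down=320
    y = resample_poly(x, 44100 // g, src_rate // g)
    lsb = 1.0 / 32768
    # TPDF dither: sum of two uniform variables, +/- 1 LSB peak
    dither = (np.random.uniform(-0.5, 0.5, y.shape) +
              np.random.uniform(-0.5, 0.5, y.shape)) * lsb
    return np.clip(np.round((y + dither) * 32767), -32768, 32767).astype(np.int16)

# one second of a 1 kHz tone at 96 kHz, half full scale
tone = 0.5 * np.sin(2 * np.pi * 1000 * np.arange(96000) / 96000)
cd = to_cd_quality(tone)
print(len(cd), cd.dtype)  # 44100 int16
```

For an actual listening comparison you would read the purchased hi-res file instead of the synthetic tone, write both versions out, level-match them, and ABX them blind.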
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Arnold B. Krueger on 2016-07-12 10:11:01
Reiss picks up a criticism from Jackson et al. when speculating about alleged "cognitive load" problems with ABX testing. Neither has shown anything here; it is mere speculation, and quite disingenuous speculation to boot. They actually seem to criticise a very old form of ABX testing, apparently unwilling to note that the problem they speculate about was addressed a long time ago. This sort of blinkered criticism almost inevitably raises suspicions of malicious intent.

All correct, and a situation that raises critical questions about Reiss's scholarship, his familiarity with the topic, how his own biases impacted his study, and his intellectual honesty.

It is a matter of fact that two significantly different audio listening test paradigms share the name ABX. This is covered in an AES public document linked here: http://www.aes.org/forum/?ID=416&c=2927 (http://www.aes.org/forum/?ID=416&c=2927), dated February 2015. One clue to his lack of current scholarship is the presence of a reference to the Jackson paper in his footnotes, but apparent ignorance of the critical comments that were made about it on the AES forum a year or more before he published this paper.

The issue was further discussed by Stefan Heinzmann in Dec 2014 in the same paper comments as were previously linked:

"The criticism of the ABX test procedure that is offered in the introduction is poorly justified. The "cognitive load", as called by the authors, is entirely under the control of the listener in an ABX test, since the listener selects when to switch and what to switch to. There is no requirement to keep all three sounds in memory simultaneously, as criticised by the authors. Consequently, it is unclear what advantage the method chosen by the authors offers over an ABX test. Furthermore, the informal use of the term "cognitive load" seems to suggest tacitly, that a higher "load" is detrimental to the ability to distinguish between different sounds. I'm not aware of any study that confirms that. Indeed, one could just as easily suspect the opposite, namely that the availability of more sounds would increase this ability. Neither of those suggestions can of course be taken for granted. The authors shouldn't appeal to their interpretation of common sense when criticising a test method, and rely on testable evidence instead."

In March 2015 Amir Majidimehr wrote:

"Consequently, it is unclear what advantage the method chosen by the authors offers over an ABX test. "

So we have Reiss presenting what amount to negative rumors and speculation about a test procedure that was widely used in the studies he chose, without balancing them with other well-known comments that would lend much-needed objectivity and accuracy to his paper.

He shows similar bias in his paper here:

"3.3 How Does Duration of Stimuli and Intervals Affect Results?

The International Telecommunication Union recommends that sound samples used for sound quality comparison should not last longer than 15–20 s, and intervals between sound samples should be up to 1.5 s [78], partly because of limitations in short-term memory of test subjects. However, the extensive research into brain response to high resolution content suggests that exposure to high frequency content may evoke a response that is both lagged and persistent for tens of seconds, e.g., [22, 48]. This implies that effective testing of high resolution audio discrimination should use much longer samples and intervals than the ITU recommendation implies.

Unfortunately, statistical analysis of the effect of duration of stimuli and intervals is difficult. Of the 18 studies suitable for meta-analysis, only 12 provide information about sample duration and 6 provide information about interval duration, and many other factors may have affected the outcomes. In addition, many experiments allowed test subjects to listen for as long as they wished, thus making these estimates very rough approximations.

Nevertheless, strong results were reported in Theiss 1997, Kaneta 2013A, Kanetada 2013B and Mizumachi 2015, which all had long intervals between stimuli. In contrast, Muraoka 1981 and Pras 2010 had far weaker results with short duration stimuli. Furthermore, Hamasaki 2004 reported statistically significant stronger results when longer stimuli were used, even though participant and stimuli selection had more stringent criteria for the trials with shorter stimuli. This is highly suggestive that duration of stimuli and intervals may be an important factor.

A subgroup analysis was performed, dividing between those studies with stated long duration stimuli and/or long intervals (30 seconds or more) and those that state only short duration stimuli and/or short intervals. The Hamasaki 2004 experiment was divided into the two subgroups based on stimuli duration of either 85–120 s or approx. 20 s [62, 64].

The subgroup with long duration stimuli reported 57% correct discrimination, whereas the short duration subgroup reported a mean difference of 52%. Though the distinction between these two groups was far less strong than when considering training, the subgroup differences were still significant at a 95% level, p = 0.04. This subgroup test also has a small number of studies (14), and many studies in the long duration subgroup also involved training, so one can only say that it is suggestive that long durations for stimuli and intervals may be preferred for discrimination."

As is pointed out by other paper comments, since ABX 1982 puts the switching control in the hands of the listener, issues related to length and order of presentation are under his control, and he is free to experiment with them to obtain his best possible results. IOW, it's a non-issue that Reiss makes into an apparently big problem.

3.4 Effect of Test Methodology
There is considerable debate regarding preferred methodologies
for high resolution audio perceptual evaluation. Authors
have noted that ABX tests have a high cognitive load
[11], which might lead to false negatives (Type II errors). An
alternative, 1IFC Same-different tasks, was used in many
tests. In these situations, subjects are presented with a pair
of stimuli on each trial, with half the trials containing a pair
that is the same and the other half with a pair that is different.
Subjects must decide whether the pair represents the same
or different stimuli. This test is known to be “particularly
prone to the effects of bias [79].” A test subject may have a
tendency towards one answer, and this tendency may even
be prevalent among subjects. In particular, a subtle difference
may be perceived but still identified as ‘same,” biasing
this approach towards false negatives as well.
We performed subgroup tests to evaluate whether there
are significant differences between those studies where subjects
performed a 1 interval forced choice “same/different”
test, and those where subjects had to choose among two alternatives
(ABX, AXY, or XY “preference” or “quality”).
For same/different tests, the heterogeneity test gave I² = 67% and p = 0.003, whereas I² = 43% and p = 0.08 for ABX and variants, thus suggesting that both subgroups contain diverse sets of studies (note that this test has low power, and so more importance is given to the I² value than the p value, and typically, α is set to 0.1 [77]).

A slightly higher overall effect was found for ABX, 0.05
compared to 0.02, but with confidence intervals overlapping
those of the 1IFC “same/different” subgroup. If methodology
has an effect, it is likely overshadowed by other differences
between studies."
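(For readers unfamiliar with the I² statistic quoted above: it estimates the share of between-study variance due to genuine heterogeneity rather than chance. Below is a minimal sketch of how it falls out of Cochran's Q in an inverse-variance fixed-effect meta-analysis; the effect sizes and variances are made-up illustration values, not numbers from Reiss's paper.)

```python
def i_squared(effects, variances):
    """Higgins' I^2 from Cochran's Q for a fixed-effect meta-analysis."""
    weights = [1.0 / v for v in variances]            # inverse-variance weights
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    # I^2 = (Q - df)/Q, floored at 0 and expressed as a percentage
    return max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# Hypothetical per-study effects (proportion correct minus 0.5) and variances
effects = [0.02, 0.10, 0.01, 0.15, 0.05]
variances = [0.001, 0.002, 0.001, 0.003, 0.002]
print(f"I^2 = {i_squared(effects, variances):.0f}%")
```

An I² of 0% means the studies' results differ no more than sampling error predicts; values around 50% and above are conventionally read as substantial heterogeneity, which is the point being made about both subgroups here.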

First off there is this very serious problem that Reiss has allowed to persist, which is that we don't know which ABX test he is talking about when he mentions "ABX".  It's not clear that he is aware that there are two different tests that are commonly referred to by the same name. This is a critical failing in someone who seeks to summarize a number of different listening tests that used or referred to these two different listening test methodologies.

Secondly, Reiss does not seem to understand that the 1985 ABX test is often used as a 1IFC test, as mentioned by Amir Majidimehr in his paper comments from June 8:

" The first step in improving my results was ignoring Y.  Likewise the next improvement came from exactly the method used in this paper which was playing A, and playing X and immediately voting one way or the other.  Even as a trained listener, eliminating extra choices was critical for me to generate reliable results."

Thirdly, Reiss seems to assume that there was no listener training at all unless it is specifically mentioned. IME this is highly improbable, because at least some training has always been required for a listener to become productive at all.




Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: ajinfla on 2016-07-12 11:19:15
I'd recommend that everyone should "listen" for himself to see if "hi-res" is useful
Of course you would Jakob2
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Wombat on 2016-07-12 12:27:35
I'd recommend that everyone should "listen" for himself to see if "hi-res" is useful
Of course you would Jakob2
That is exactly the way the Fitzceraldos argue these days. As I mentioned before: 'I'm not saying it is audible, but it is audible'
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Arnold B. Krueger on 2016-07-12 12:39:59
I'd recommend that everyone should "listen" for himself to see if "hi-res" is useful
Of course you would Jakob2

If
Quote
"...everyone should "listen" for himself to see if "hi-res" is useful"
meant doing proper listening tests, I would of course agree.  However, not being born yesterday I strongly suspect what is meant is a sighted evaluation, which is worse than useless.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: pelmazo on 2016-07-12 12:43:01
'I'm not saying it is audible, but it is audible'
Or, slightly more sophisticated: 'You may very well say it is audible, but I couldn't possibly comment.'
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Arnold B. Krueger on 2016-07-12 12:54:44
The only way to do a proper evaluation is to take a hi-res source and downsample it to CD quality yourself, so you can be sure of the provenance.

Agreed.

Quote
Not very many people have the skills or inclination to do this.

The skills and tools involve using some well known shareware such as Sox, which should not tax the abilities of a moderately bright high school student.

Having the inclination is a different thing, because one might find  out some uncomfortable truths.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: KozmoNaut on 2016-07-12 13:07:48
Not very many people have the skills or inclination to do this.

The skills and tools involve using some well known shareware such as Sox, which should not tax the abilities of a moderately bright high student.

Having the inclination is a different thing, because one might find  out some uncomfortable truths.

Nevertheless, most people will consider it far too complicated to even try. And I'm not talking about the general population here; I'm talking about most music lovers as well.

Most people just want to "press button, get music".

And quite often this takes the form of a streaming service, so they are even further removed from file formats and compression and so on. They'll be happy with Spotify or Apple Music, or they'll consider themselves "crafty consumers" and subscribe to services like Tidal or WIMP Hifi, which offer lossless streaming, and MQA at some point, because they heard it's "better".
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: bobbaker on 2016-07-12 13:08:33
The skills and tools involve using some well known shareware such as Sox, which should not tax the abilities of a moderately bright high student.
Interestingly, and against intuition, I have found the less bright students need less pot, while the really smart ones use up all the ganja.
;-)
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: ajinfla on 2016-07-12 14:21:31
That is exactly the way the Fitzceraldos argue these days. As I mentioned before: 'I'm not saying it is audible, but it is audible'
I'm a 73 yr old elite hearing athlete, use ML large panel low efficiency planar speakers, which have zero chance of either >16bit dynamic range or "hypersonic" capability...and I can "hear" High as a kite Re$ just fine. I don't drink no Koolaid like those audiophools either!
Whether fancy wires, magic caps, Hi Re$ is even audible and worthwhile or not to you with your own choice of music formats is entirely up to you...not rational, objective, demonstrable science. Just "listen" and decide.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: ajinfla on 2016-07-12 14:39:54
In late 2014 Dr. Waldrep prepared some files for people to use to demonstrate the benefits of high resolution audio to themselves.
He seems like a reasonable guy, so I asked him (http://www.realhd-audio.com/?p=5755) to provide some tracks specifically for demonstrating the limitations of Redbook in a consumer environment, with a system capable of both >16bit dynamic range and "hypersonic" capability.
Stay tuned...
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Wombat on 2016-07-12 14:44:50
elite hearing athlete
Nice! I never saw it as sportsmanlike till now.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Arnold B. Krueger on 2016-07-12 16:05:12
Interestingly, and against intuition, I have found the less bright students need less pot, while the really smart ones use up all the ganja.
;-)

The audiophile pot smokers I know can't be pried away from their vinyl...
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: KozmoNaut on 2016-07-12 16:09:28
The audiophile pot smokers I know can't be pried away from their vinyl...

That's because a double album makes for a nice surface to clean their pot on :-)

And also maybe because "oh man, it just goes around and around and around and around..."
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Porcus on 2016-07-12 16:11:53
Correct. Reiss's meta study seems to fail on the grounds that the tests that were used to make up his study are not consistent with each other. IOW it's not a collection of tests of apples, but rather a conflation of tests of just about every fruit and vegetable in the store.

That is what makes meta-analysis something more than the mere vetting of data ...
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Jakob1863 on 2016-07-12 16:24:52
<snip>
Isn't that the same question that Archimago asked, just put in other terms?

I think not; at the webpage krabapple linked he only expressed that 52.x% is too low to justify the markup. The difference to my question is obvious. He could have instead argued with the ~60% of trained listeners; would that justify it? Not to mention that these numbers are estimates of population parameters. Individuals within the population will most likely do both worse and better.

Quote
That is an unpractical and unrealistic proposal, as reasonable as it may at first look. I very much doubt that people will compare several different versions of the same titles before they buy (except for a few, obviously). If the kind of marketing that we are experiencing is any hint, then people will buy records predominantly based on what they believe, not what they hear.

Nevertheless, it is what I recommend. In the same sense, I recommend first learning about the method of meta-analysis (systematic reviews) and about the statistics, and carefully reading and analyzing what the authors of a meta-analysis have written and done. Especially in a forum section that proudly is labeled "scientific discussion". Obviously most people do something completely different; they believe strongly in something, and everything they read will be evaluated according to this belief structure, simply confirmation bias at work.
Nevertheless I recommend it .......;)

Quote
The similarity you try to suggest doesn't go very far. Perhaps you should do some research first, did M&M (or AES) issue a press release? Did they try to convey a different interpretation of their result than what they had written in their paper?

No, they didn't issue a press release; instead they used a sort of guerilla marketing, promoting their publication in forums. And they "oversold" (copyright pelmazo) their work right in the published article.
"Unlike the previous investigations, our tests were designed
to reveal any and all possible audible differences
between high-resolution and CD audio.... "
(E. Brad Meyer, David R. Moran; J. Audio Eng. Soc., Vol. 55, No. 9, 2007 September, page 776)

Quote
The media are of course what they are. Their simplifications go either way. The crucial point here is how the author contributes to this. Reiss differs rather a lot from M&M in this regard.

I can't agree at this point (just from a semantic point of view, provided the statistical analysis holds true).
The AES press release mentioned in the second paragraph the novelty of the analysis approach (which it is, afaik, in the audio field), and the numbers were reported correctly, as in the university's press release as well. Of course "training improves dramatically" reads dramatic at first, but as the "dramatic" increase _to_ 60% is reported directly in the same sentence ....

Quote
I think the criticism against M&M is to a large extent unfair.
"To a large extent"? No, they also received unfair criticism, as Reiss does (look just at the comments in this thread), which is sad, especially when it is called "criticism based on science", but a lot of the critique was justified.

Quote
While they certainly haven't produced a study that's beyond reproach, they have shown quite convincingly and conclusively what they set out to show: That the audiophile claims regarding the inadequacy of the CD format were not true. Their point wasn't a technical one about the CD format, but a check on the credibility of audiophile claims. As such, this result still stands and IMHO will continue to stand.

What you have stated above is a typical post-hoc hypothesis based on the actual results (reminiscent of the Texas sharpshooter fallacy), and it wasn't what they claimed to do (see the quote above); due to the serious flaws, further conclusions are highly questionable. As stated before, after reading the article I could not believe that it could have passed the review process, and that feeling was reinforced after reading the supplementary information on the BAS website. No accurate measurements, detection of a broken player (not mentioned in the JAES, afair) without knowing the number of trials done in the broken state, no tracking of the music used, different numbers of trials for the listeners, no information about the number of trials done at each location, no follow-up although subgroup analysis showed some "suspicious" results, and so on.

Don't get me wrong, I have great respect for everybody doing this sort of experiment, because it is a lot of work, but otoh, given that there is a plethora of literature covering DOE and sensory tests, I don't understand why such simple errors, which could have been easily avoided, were still made.

Quote
Reiss picks up a criticism from Jackson et al. when speculating about alleged "cognitive load" problems with ABX testing. Neither have shown anything here; it is mere speculation, and quite disingenuous speculation to boot. They actually seem to criticise a very old form of ABX testing, apparently unwilling to note that the problem they speculate about was addressed a long time ago. This sort of blinkered criticism almost inevitably raises suspicions of malicious intent.

Is that what Reiss did, or is it a strongly biased interpretation? ;)
Reiss actually did what an author of such a study routinely does: he cites the literature and tries to find out if any criticism is backed up by the data. Reiss's analysis did not show any significant impact of the test protocol, and so he reported "if methodology has an impact, it is likely overshadowed by other differences between studies"
(Joshua D. Reiss; J. Audio Eng. Soc., Vol. 64, No. 6, 2016 June, page 372)

Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Porcus on 2016-07-12 16:27:57
I think the Big Issue is what ajinfla has touted repeatedly:

Quote
a small but important advantage in its quality

First, you should not use "important" when you mean "significant" (in the statistical sense). That is what you would expect from a dumb machine translating back and forth.
Second, you should not use "advantage" for "difference", as long as you have not ruled out audible adverse artifacts sub-22 kHz. That is what you would expect from a dumb machine interpreting TOS8.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Jakob1863 on 2016-07-12 16:51:19
<snip>
First, you should not use "important" when you mean "significant" (in the statistical sense).

To quote Dr. Reiss from the AES press release:
"“Audio purists and industry should welcome these findings,” said Reiss. “Our study finds high-resolution audio has a small but important advantage in its quality of reproduction over standard audio content. Trained listeners could distinguish between the two formats around sixty percent of the time.”

52.3% would qualify as statistically significant but of very limited practical relevance; 60% is usually considered to be of practical relevance.

Quote
Second, you should not use "advantage" for "difference", as long as you have not ruled out audible adverse artifacts sub-22 kHz. <snip>

That is a valuable concern; analysis will show to what extent.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Porcus on 2016-07-12 17:16:38
[...] around sixty percent of the time.”

52.3% would qualify as statistically significant but of very limited practical relevance; 60% is usually considered to be of practical relevance.

This, I assume, mixes up two concepts. Statistical significance means that we have so many data points that we can be confident that this what-looks-like-sixty is not fifty. (Oh, that was a rough one.)

And if the actual number is indeed 52.3, it does not mean that there is no practical relevance. Not that I suggest that the situation is as follows, it is just for the sake of the illustration:
If we have done so many tests that we have virtually no confidence interval around it, it could be that one in twenty samples had extreme artifacts. (That would mean that out of 100, there are 95 where you guess fifty-fifty, accumulating a score of 47.5, plus the 5 where you are nearly universally right.)  I would call that relevant in practice.
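The arithmetic in this illustration can be checked in a few lines (the 95/5 split is purely the hypothetical scenario above, not a result from the paper):

```python
# Hypothetical: 95 of 100 samples are pure guessing (50% correct on average),
# while 5 of 100 carry extreme artifacts and are (nearly) always identified.
guess_share, artifact_share = 0.95, 0.05
expected_score = guess_share * 0.5 + artifact_share * 1.0
print(f"{expected_score:.1%}")  # 52.5%: a near-chance aggregate can hide trivially audible cases
```

So an aggregate score barely above chance is compatible with a small minority of stimuli being reliably distinguishable, which is the practical-relevance point being argued.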
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: ajinfla on 2016-07-12 17:19:50
Quote
Reiss - these results imply that, though the effect is perhaps small and difficult to detect, the perceived fidelity of an audio recording and playback chain is affected by operating beyond conventional consumer oriented levels. Furthermore, though the causes are still unknown

He could have instead argued with the ~60% of trained listeners
Trained at hearing what?

No, they didn't issue a press release; instead they used a sort of guerilla marketing, promoting their publication in forums.
Step away from the ganja/crack pipe please Jakob2 (http://www.diyaudio.com/forums/members/jakob2.html).
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: ajinfla on 2016-07-12 17:31:51
To quote Dr. Reiss:
Quote
In summary, these results imply that, though the effect is perhaps small and difficult to detect but important advantage, the perceived fidelity of an audio recording and playback chain is affected quality of reproduction over standard audio content by operating beyond conventional consumer oriented levels. Furthermore, though the causes are still unknown, Trained listeners could distinguish between the two formats around sixty percent of the time, this perceived effect advantage can be confirmed with a variety of statistical approaches and it can be greatly improved through training.

FIFY



Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Jakob1863 on 2016-07-12 18:28:11
If the CD-format is really transparent..
Which of the selected tests were for that question Jakob2 ?

It was Dr. Reiss's starting point, so why do you ask for selected tests?
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: ajinfla on 2016-07-12 18:54:02
why do you ask for selected tests?
The question (you can't answer) was rhetorical
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Arnold B. Krueger on 2016-07-12 19:19:31
If the CD-format is really transparent..
Which of the selected tests were for that question Jakob2 ?
It was Dr. Reiss's starting point, so why do you ask for selected tests?

Here's the logical starting  point for Reiss's paper, its abstract:

"There is considerable debate over the benefits of recording and rendering high resolution
audio, i.e., systems and formats that are capable of rendering beyond CD quality audio.
We undertook a systematic review and meta-analysis to assess the ability of test subjects to
perceive a difference between high resolution and standard, 16 bit, 44.1 or 48 kHz audio. All
18 published experiments for which sufficient data could be obtained were included, providing
a meta-analysis involving over 400 participants in over 12,500 trials. Results showed a small
but statistically significant ability of test subjects to discriminate high resolution content,
and this effect increased dramatically when test subjects received extensive training. This
result was verified by a sensitivity analysis exploring different choices for the chosen studies
and different analysis approaches. Potential biases in studies, effect of test methodology,
experimental design, and choice of stimuli were also investigated. The overall conclusion
is that the perceived fidelity of an audio recording and playback chain can be affected by
operating beyond conventional levels."

I see no mention of tests of any media, CD or not, for transparency.  Perhaps, if such a thing exists, you could quote it from his paper?
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Jakob1863 on 2016-07-12 20:08:53
[...] around sixty percent of the time.”

52.3% would qualify as statistically significant but of very limited practical relevance; 60% is usually considered to be of practical relevance.

This, I assume, mixes up two concepts. Statistical significance means that we have so many data points that we can be confident that this what-looks-like-sixty is not fifty. (Oh, that was a rough one.)

No. Statistically significant means in our case that the probability of getting the observed result by chance is lower than our predefined criterion (i.e., the level of significance).
Having a big sample size means that smaller differences become statistically significant; the usual wisdom says that every experiment gives a significant result provided the sample size is big enough.
That's why we should differentiate between statistical significance and practical relevance.

Everything else remaining the same, a bigger sample size means the confidence interval narrows as the variance of the estimate gets lower.
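This sample-size effect is easy to make concrete with a normal-approximation test of a proportion against chance (the trial counts below are round illustration numbers; ~12,500 is the total trial count quoted from the paper's abstract):

```python
import math

def z_score(p_hat, n, p0=0.5):
    """One-sample z statistic for an observed proportion p_hat vs. chance p0."""
    return (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)

# The same 52.3% correct rate, evaluated at two sample sizes:
print(f"n=100:   z = {z_score(0.523, 100):.2f}")    # ~0.46, nowhere near significance
print(f"n=12500: z = {z_score(0.523, 12500):.2f}")  # ~5.14, overwhelmingly significant
```

The identical effect size is invisible at 100 trials and highly significant at 12,500, which is exactly why statistical significance alone says nothing about practical relevance.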

Quote
And if the actual number is indeed 52.3, it does not mean that there is no practical relevance.

That´s why i wrote of "limited practical relevance" ....

Quote
<snip> I would call that relevant in practice.
Leaving aside for the moment what I suppose is a misunderstanding of confidence intervals and the level of significance, the meaning of "practical relevance" is another one.
It means that a difference is relevant in practical terms of usage in everyday life.

60% compared to 50% is usually considered to be of practical relevance; hence (I assume) Dr. Reiss used the word "important" and not only "statistically significant".
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Jakob1863 on 2016-07-12 20:15:12
To quote Dr. Reiss:
"Yet many believe that this standard quality audio is sufficient
to capture all perceivable content from live sound.
This question of perception of high resolution audio has
generated heated debate for many years. Although there
have been many studies and formal arguments presented in
relation to this, there has yet to be a rigorous analysis of the literature. "

(Joshua D. Reiss; J. Audio Eng. Soc., Vol. 64, No. 6, 2016 June, page 364)

and
"Dr Reiss explained:  “One motivation for this research was that people in the audio community endlessly discuss whether the use of high resolution formats and equipment really make a difference. Conventional wisdom states that CD quality should be sufficient to capture everything we hear, ...."

(QMUL press release; http://www.qmul.ac.uk/media/news/items/se/178407.html)

Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Arnold B. Krueger on 2016-07-12 21:03:40
To quote Dr. Reiss:
"Yet many believe that this standard quality audio is sufficient
to capture all perceivable content from live sound.
This question of perception of high resolution audio has
generated heated debate for many years. Although there
have been many studies and formal arguments presented in
relation to this, there has yet to be a rigorous analysis of the literature. "

(Joshua D. Reiss; J. Audio Eng. Soc., Vol. 64, No. 6, 2016 June, page 364)

and
"Dr Reiss explained:  “One motivation for this research was that people in the audio community endlessly discuss whether the use of high resolution formats and equipment really make a difference. Conventional wisdom states that CD quality should be sufficient to capture everything we hear, ...."

(QMUL press release; http://www.qmul.ac.uk/media/news/items/se/178407.html)


All perceivable content from live sound is not the same thing as sonic transparency. You can hear all perceivable content, but if it is colored in any way, the timing is off, or there are additional signals that were not in the original sound, then transparent reproduction remains elusive.

Reiss says:  "This question of perception of high resolution audio has generated heated debate for many years."

The audio may be perceived, but still be different from the original source.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Arnold B. Krueger on 2016-07-12 21:08:30
["Dr Reiss explained:  “One motivation for this research was that people in the audio community endlessly discuss whether the use of high resolution formats and equipment really make a difference. Conventional wisdom states that CD quality should be sufficient to capture everything we hear, ...."

The question he says he seeks to answer is whether or not there is a difference between the alleged high resolution recording and the CD. He's saying he is interested in any difference, not just differences in the direction of greater accuracy.

This is a mistake that the audiophiles he seeks to please make all the time. They don't use reliable absolute references, and are pleased with any difference, under the presumption that if it is different it has to be better.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: ajinfla on 2016-07-12 21:56:34
To quote Dr. Reiss:
"Yet many believe that this standard quality audio is sufficient to capture all perceivable content from live sound.
This question of perception of high resolution audio has generated heated debate for many years. Although there
have been many studies and formal arguments presented in relation to this, there has yet to be a rigorous analysis of the literature. "
and
"Dr Reiss explained:  “One motivation for this research was that people in the audio community endlessly discuss whether the use of high resolution formats and equipment really make a difference. Conventional wisdom states that CD quality should be sufficient to capture everything we hear, ...."
Careful your hands don't fall off waving them that hard Jakob2

Jakob2, what tests were for Redbook transparency and where was any "advantage" found, trained or not, hearing what exactly?
What is the "advantage" of Hi-Re$ ?
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: pelmazo on 2016-07-12 22:45:24
I think not; at the webpage krabapple linked he only expressed that 52.x% is too low to justify the markup. The difference to my question is obvious.
Archimago's question is: Is it worth it. You ask the same, and merely play with the actual numbers. If that is the obvious difference, fine.

Quote
Nevertheless, it is what I recommend.
Fine, too. You're free to recommend what you want.

Quote
No, they didn't issue a press release; instead they used a sort of guerilla marketing, promoting their publication in forums.
That's a grave accusation to make without any evidence.

Quote
And they "oversold" (copyright pelmazo) their work right in the published article.
"Unlike the previous investigations, our tests were designed
to reveal any and all possible audible differences
between high-resolution and CD audio.... "
(E. Brad Meyer, David R. Moran; J. Audio Eng. Soc., Vol. 55, No. 9, 2007 September, page 776)
That looks more swanky when ripped out of context than in the paper. It is not difficult to figure out what they wanted to say if you read it with a slightly cooler mind.

Quote
I can't agree at this point (just from a semantic point of view, provided the statistical analysis holds true).
The AES press release mentioned in the second paragraph the novelty of the analysis approach (which it is, afaik, in the audio field), and the numbers were reported correctly, as in the university's press release as well. Of course "training improves dramatically" reads dramatic at first, but as the "dramatic" increase _to_ 60% is reported directly in the same sentence ....
I don't understand how this is an answer to what I wrote, and I don't understand what you want to say (other than not agreeing with me).

Quote
No, they also received unfair criticism, as Reiss does (look just at the comments in this thread), which is sad, especially when it is called "criticism based on science", but a lot of the critique was justified.
A lot of criticism relies on selective reading, goalpost shifting, strawman attacks and the like. But that's worth a different topic and shouldn't be repeated here.

Quote
What you have stated above is a typical post-hoc hypothesis based on the actual results (reminiscent of the Texas sharpshooter fallacy), and it wasn't what they claimed to do (see the quote above); due to the serious flaws, further conclusions are highly questionable.
Their motivation and aim is quite clear when one reads their whole text instead of cherry picking convenient snippets. They start with the claims of superiority of SACD and DVD-A over the CD and set out to check them with controlled blind tests. Their aim is to cover the whole range of potential effects arising from the technical differences between the media, rather than focusing on a particular aspect like the wordlength. I don't see a problem with this, except perhaps with the exact wording in a few places.

Their method is largely appropriate for what they want to achieve. Specifically, testing a medium through a direct path vs. the same medium through a restricted path is the most logical and straightforward way to do this, because it removes all factors that could be attributed to different source material. Moreover, it is simple enough to be used by ordinary people who want to know for themselves whether they hear a difference between CD format and higher formats.

And it is worth a reminder that the proper way of fixing the flaws in a test is to run another test with the flaws fixed. In the ten years since the M&M tests were run, no audiophile interest group seems to have countered with anything even remotely appropriate. Isn't that telling something, too? Perhaps how cheap the criticism is in comparison to running a credible and convincing test? Who would be in a better position to do this than those who pretend to know what to listen for, what to listen with, how to test and how to evaluate it? Wouldn't that be much better than such a meta analysis?

Quote
As stated before, after reading the article I could not believe that it could have passed the review process, and that feeling was reinforced after reading the supplementary information on the BAS website. No accurate measurements, detection of a broken player (not mentioned in the JAES, afair) without knowing the number of trials done in the broken state, no tracking of the music used, different numbers of trials for the listeners, no information about the number of trials done at each location, no follow-up although subgroup analysis showed some "suspicious" results, and so on.
Before foaming at the mouth over this, it might be worthwhile to check how much potential these alleged flaws had to corrupt the result. There's hardly a study that couldn't be criticised in a similar way if put under a similar amount of scrutiny. You know this better than anyone else: I have known for years your talent and resolve for arguing the oxygen out of the air when you don't like the conclusion.

But I agree that the AES review process leaves something to be desired. It shows in Reiss' paper, too. Apart from the problems with the content and its interpretation, what caught my eye quite quickly was the inconsistent way of referencing literature. Some of it is in the traditional JAES style using reference numbers; some uses a name-and-year style more typical of textbooks. Surely this would have registered in a review if I noticed it within minutes?

Quote
Don't get me wrong: I have great respect for everybody doing this sort of experiment, because it is a lot of work. But OTOH, given that there is a plethora of literature covering DOE and sensory tests, I don't understand why such simple errors, which could have been easily avoided, were still made.
It is especially difficult and laborious if you have to brace and defend your work against even the pettiest and defeatist objections that might appear.

Quote
Reiss actually did what an author of such a study routinely does: he cites the literature and tries to find out if any criticism is backed up by the data. Reiss's analysis did not show any significant impact of the test protocol, and so he reported: "if methodology has an impact, it is likely overshadowed by other differences between studies"
(Joshua D. Reiss; J. Audio Eng. Soc., Vol. 64, No. 6, 2016 June, page 372)
He certainly appears to have done that very selectively. He references and uses quite a number of papers without taking notice of the sometimes very substantial critique and debate they have attracted. Take, for example, the papers by Kunchur.

For some reason, the M&M study is the only exception, and here he not only shows that he is aware of the debate and where it took place, he also picks exclusively the negative points. If he had done what you say in the usual impartial way, I wouldn't have a bone to pick. But alas, that's not how it turned out. It will be hard for him to shake off suspicions of bias this way.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Arnold B. Krueger on 2016-07-13 13:37:51
Is that what Reiss did, or is it a strongly biased interpretation? ;)

It was as biased pro high rez as Reiss presumably thought he could get away with. The major changes between the actual paper and his press release that have been pointed out, made to magnify the pro-high-rez evidence, show that quite clearly.

Quote
Reiss actually did what an author of such a study routinely does: he cites the literature and tries to find out if any criticism is backed up by the data.

As I showed in another post, that is most definitely what Reiss didn't do.

Many of the pro-high rez test results he used have been thoroughly criticized, and in many cases quite effectively. You'd never know it from Reiss's paper.

In contrast, Reiss used rumor and speculation to criticize studies that didn't support high rez as thoroughly as he presumably wanted.

Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Jakob1863 on 2016-07-14 09:45:21
<snip>
All perceivable content from live sound is not the same thing as sonic transparency. You can hear all perceivable content, but if it is colored in any way, if the timing is off, or if there are additional signals that were not in the original sound, transparent reproduction remains elusive.

Normally I'd say agree to disagree, especially given the context, but maybe I'm missing something.

Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Jakob1863 on 2016-07-14 10:24:07
<snip>
Nevermind the sad fact that the masters used for the "hi-res" versions are often tweaked compared to the masters used for CDs, giving an audible difference that people will attribute to the format. They could use the exact same master for both versions, but often they don't, either due to incompetence or malicious intent.

The only way to do a proper evaluation is to take a hi-res source and downsample it to CD quality yourself, so you can be sure of the provenance. Not very many people have the skills or inclination to do this.

But if one compares and the "hi-res" version is better, he might think the markup is justified.
It doesn't help that the same quality could have been offered on a CD if that doesn't actually happen. The customer can only choose from what is offered.

If the "malicious intent" is indeed true that should be critized but please based on facts sampled during a serious investigation.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: ajinfla on 2016-07-14 11:15:05
But if one compares and the "hi-res" version is better, he might think the markup is justified.
It doesn't help that the same quality could have been offered on a CD if that doesn't actually happen. The customer can only choose from what is offered.

If the "malicious intent" is indeed true that should be critized but please based on facts sampled during a serious investigation.
Which of Reiss's selected papers had anything to do with end-user content, and what does the paper have to do with end-user content?
Jakob2, each of your evasions of such questions provides the answers.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: pelmazo on 2016-07-14 13:19:06
But if one compares and the "hi-res" version is better, he might think the markup is justified.
It doesn't help that the same quality could have been offered on a CD if that doesn't actually happen. The customer can only choose from what is offered.
If the HiRes version is indeed produced to higher standards, the markup may be justified. The same business model was already used many years ago with the CD, where you could sometimes buy improved versions for a markup (i.e. for a different mastering). This also shows that this possibility doesn't depend on a different format, improved versions can be provided independently from the format used.

I don't think such practices show malicious intent or incompetence. As long as they communicate the facts correctly, they enrich the consumer's choice. I don't see how anybody can object to this.

However, HiRes proponents seem to want to get to a point where consumers believe that the better quality depends on the HiRes format, in other words they work to establish the misconception that there is a direct relationship between perceived quality and the format. Once established, this misconception can be exploited to sell material in a HiRes format with a markup, even when it doesn't offer any quality advantage. At this point the customer is being duped into paying more for the same.

While this may be dismissed as speculation, there are increasingly convincing indications that this is actually happening. Effectively, the HiRes proponents are preparing the ground for this swindle, whether they are aware of it or not, whether they support it or not.

I believe that if we don't fight this attempt at deception, the entire pro audio profession will be affected by a credibility backlash, no matter whether guilty or not.

Quote
If the "malicious intent" is indeed true that should be critized but please based on facts sampled during a serious investigation.
Intent can only rarely be proven. We have to work on the basis of what people do and what positions they support.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: ajinfla on 2016-07-14 13:33:26
No, they didn't issue a press release; instead they used a sort of guerrilla marketing, promoting their publication in forums.

If the "malicious intent" is indeed true that should be critized but please based on facts sampled during a serious investigation.
You do know crack killed Applejack....right???
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Jakob1863 on 2016-07-14 14:09:11
<snip>
Archimago's question is: Is it worth it? You ask the same, and merely play with the actual numbers. If that is the obvious difference, fine.
Archimago first asks if the difference is worth the markup; and secondly he asks whether a 52.3% accuracy rate in a research setting sounds like a valuable proposition for grabbing "hi-res".
This I addressed (as said before, the consumer can only compare what is offered and buy or not buy), asking which "accuracy rate" would justify ......

Quote
That looks more swanky when ripped out of context than in the paper. It is not difficult to figure out what they wanted to say if you read it with a slightly cooler mind.

The same holds true for the press releases for Reiss's meta-analysis.

Quote
A lot of criticism relies on selective reading, goalpost shifting, strawman attacks and the like. But that's worth a different topic and shouldn't be repeated here.

If the same people comment on similar issues in very different ways, it should be mentioned here.

Quote
Their motivation and aim is quite clear when one reads their whole text instead of cherry-picking convenient snippets.
"convenient snippets" in the case of "overselling". I don´t care so much about this topic if the same degree of "good will in interpretation" is applied in each case.

Quote
They start with the claims of superiority of SACD and DVD-A over the CD .......

Last time you said they addressed "audiophile claims", but in fact they didn't really specify what their target was. At least Meyer/Moran separated recording engineers claiming superiority from audiophiles claiming "whatever is still to be specified" ....

Quote
......and set out to check them with controlled blind tests.

And, as usual, if no research hypothesis is clearly specified, the test regime (and level of control) isn't as good as it should be.

Quote
Their aim is to cover the whole range of potential effects arising from the technical differences between the media, rather than focusing on a particular aspect like the wordlength. I don't see a problem with this, except perhaps with the exact wording in a few places.

That you don't see a problem is a bit surprising, because you expressed strong concerns (even invalidity) with respect to Reiss's meta-analysis because he did not focus on a "particular aspect like the wordlength" ........

Notwithstanding that they (Meyer/Moran) combined wordlength and sampling frequency effects, they did not check if any "enhancement" was delivered at all.

Quote
Their method is largely appropriate for what they want to achieve.

It depends on the hypothesis; if it was the vague sort of "audiophile claims" that you mentioned the last time, it must have been a claim like "if a disc is labelled as hi-res it will, under all circumstances, by all listeners, at all times, be perceived as better than a version downsampled to CD quality".

Because they did not check for "hi-res-ness", they did not really check the quality of reproduction, they did not provide positive controls, they did not really track which music was used in the trials, and so on.
And they mostly used a 10-trials-per-listener approach, which is surprising, because they should have known at least since Leventhal's articles about statistical power that a small number of trials carries a large risk of Type II error.
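To put a rough number on the power concern raised above, here is a purely illustrative sketch (my own, not from the thread or the papers): it assumes a one-sided binomial test at the 5% level and a hypothetical listener who genuinely hears the difference 70% of the time, both numbers chosen only for illustration.

```python
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n = 10  # trials per listener, as in the protocol under discussion
# smallest score that is significant at the 5% level under pure guessing
crit = next(k for k in range(n + 1) if binom_sf(k, n, 0.5) < 0.05)
print(crit)                     # 9 correct out of 10 are needed
power = binom_sf(crit, n, 0.7)  # a listener with a true 70% hit rate
print(round(power, 3))          # ~0.149, i.e. Type II error ~85%
```

Under these assumptions, even a listener who averages 7 correct out of 10 would clear the 9-of-10 criterion only about 15% of the time, which is exactly the point Leventhal made about small trial counts.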

Quote
Specifically, testing a medium through a direct path vs. the same medium through a restricted path is the most logical and straightforward way to do this, because it removes all factors that could be attributed to different source material. Moreover, it is simple enough to be used by ordinary people who want to know for themselves whether they hear a difference between CD format and higher formats.

I did not criticize their "path choice" but the fact that they did not take thorough measurements before starting the experiment and did not check routinely during it.

Quote
And it is worth a reminder that the proper way of fixing the flaws in a test....
Sorry, but first of all, flaws must be mentioned, because bad science is bad science. It harms the reputation of science if these methodological flaws are belittled.
What you proposed would be a real "cheap get-out-of-jail approach" ;)

Quote
....is to run another test with the flaws fixed. In the ten years since the M&M tests were run, no audiophile interest group seems to have countered with anything even remotely appropriate. Isn't that telling, too? Perhaps it shows how cheap criticism is in comparison to running a credible and convincing test? Who would be in a better position to do this than those who pretend to know what to listen for, what to listen with, how to test, and how to evaluate it? Wouldn't that be much better than such a meta-analysis?

Mhm, further research is recommended? Methinks Dr. Reiss did not just recommend further research, but could, based on his meta-analysis, give some advice on how to achieve better quality in experiments. :)

Quote
Before foaming at the mouth over this, it might be worthwhile to check how much potential these alleged flaws had to corrupt the result. There's hardly a study that couldn't be criticised in a similar way if put under a similar amount of scrutiny. You know this better than anyone else: I have known for years your talent and resolve for arguing the oxygen out of the air when you don't like the conclusion.

What you forgot to mention is that I criticized experiments with the same scrutiny even though (according to your assumptions) I should have liked the results. And you should have mentioned that I routinely recommended blind home listening experiments for any listener who wants to learn about his own perception, and that I gave a lot of advice on improving the quality of listening experiments.

And you should have noticed that I (nearly always) emphasized, beside all the criticism, that Meyer/Moran's hypothesis ("hi-res" does not offer a perceivable difference or advantage compared to CD quality) might be true.

Quote
But I agree that the AES review process leaves something to be desired......
Something that you forgot to mention when you liked the published findings (see for example Meyer/Moran, or did I miss it?) ...

Quote
It shows in Reiss' paper, too.

I'm sorry, but the flaws in Meyer/Moran's experiment (all the more if the additional information is considered) are evident just from reading, provided the reviewer has any experience in DOE. No complicated analysis was needed to realize that.
In the case of Reiss's analysis, a reviewer must IMHO do a lot of work to find something.

Quote
Apart from the problems with the content and its interpretation, what caught my eye quite quickly was the inconsistent way of referencing literature. Some of it is in the traditional JAES style using reference numbers, some is using a name and year style as is more typical of textbooks. Surely this would have registered in a review, if I notice it within minutes?

Do we really treat questions of style as equally important as methodological flaws?

Quote
It is especially difficult and laborious if you have to brace and defend your work against even the pettiest and defeatist objections that might appear.

You mean something like "he shouldn't have used 'important' but 'significant' instead", or "his press release overemphasized this or that", or that he (maybe) didn't notice that others had criticized an older variant of "ABX"?
Isn't it telling that in this thread we mainly discuss semantics instead of real errors in the analysis?
One poster even mentioned Leventhal's publication in the JAES because he felt that Dr. Reiss might have given the impression of being the first to talk about the importance of Type II errors!?

Quote
He certainly appears to have done that very selectively. He references and uses quite a number of papers without taking notice of the sometimes very substantial critique and debate they have attracted. For example take the papers by Kunchur.

Sorry, pelmazo, you mentioned a specific issue and I addressed that specific issue. Please don't mix this specific case up with others.

Quote
For some reason, the M&M study is the only exception, and here he shows not only that he is aware of the debate and where it took place, he also picks exclusively the negative points.

Shouldn't you complain instead that he was unfair to Kunchur? Without further discussion, he excluded Kunchur's test results from his meta-analysis. :)

He explained why Meyer/Moran got detailed remarks and why their result couldn't be used generally but only in parts of the analysis. Nothing wrong with that, and he even found encouraging words (at least in my opinion) in stating: "However, their experiment was intended to be close to a typical listening experience on a home entertainment system, and one could argue that these same issues may be present in such conditions."

Quote
If he had done what you say in the usual impartial way, I wouldn't have a bone to pick. But alas, that's not how it turned out. It will be hard for him to shake off suspicions of bias this way.

Maybe you just want him to be biased.
He could have expressed much stronger critique, but didn't, and you should be able to argue precisely where his reasoning for not including their results is wrong.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: ajinfla on 2016-07-14 14:48:39
based on his meta-analysis, give some advice on how to achieve better quality in experiments. :)
It's been over 20 years. When can we expect this "better quality in experiments" from the believers/peddlers of Hi-Re$?? Who is burdened with such proof?
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: ajinfla on 2016-07-14 14:58:29
Sorry, pelmazo, you mentioned a specific issue and i addressed that specific issue. Please don´t mix up this specific case with others.
Jakob2, since you umm, "understand Kunchur (http://www.diyaudio.com/forums/lounge/280626-worlds-best-dacs-109.html#post4550224)", please explain how his test demonstrated lack of transparency of Redbook music and thus need for Hi-Re$.
As always, your evasion provides the answers.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: rrod on 2016-07-14 15:46:27
If the HiRes version is indeed produced to higher standards, the markup may be justified. The same business model was already used many years ago with the CD, where you could sometimes buy improved versions for a markup (i.e. for a different mastering). This also shows that this possibility doesn't depend on a different format, improved versions can be provided independently from the format used.

The issue I have is that studios were previously willing to do things like record in DSD or at 20+ bits and then sell me a nicely decimated and noise-shaped 16-bit rendition on actual physical media, with a printed booklet, for $10-15. Now they want to forgo the media and the printing and charge me $20-30 for a master they haven't deliberately screwed up for whatever reason. Stereo "hi-res" products should arguably cost *less* than CDs did, but when people buy into "bigger numbers sound better", the opposite ends up being true.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: ajinfla on 2016-07-14 16:21:17
The issue I have is that studios were previously willing to do things like record in DSD or at 20+ bits and then sell me a nicely decimated and noise-shaped 16-bit rendition on actual physical media, with a printed booklet, for $10-15. Now they want to forgo the media and the printing and charge me $20-30 for a master they haven't deliberately screwed up for whatever reason. Stereo "hi-res" products should arguably cost *less* than CDs did, but when people buy into "bigger numbers sound better", the opposite ends up being true.
Ahh but you see, with the DSD or 20+ bits, you get the "artists intent".
With 16/44 you get a "smeared" mess.
Hence more $$
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: pelmazo on 2016-07-14 16:57:29
Archimago first asks if the difference is worth the markup; and secondly he asks whether a 52.3% accuracy rate in a research setting sounds like a valuable proposition for grabbing "hi-res".
This I addressed (as said before, the consumer can only compare what is offered and buy or not buy), asking which "accuracy rate" would justify ......
If the consumer based his purchase decision only on his own comparison between the different offerings, there would be no need for him to read any studies or meta-studies. We both know this, and we can assume Archimago knows it, too. I have noticed your recommendation, which I have already commented on.

Archimago's question is therefore relevant for those (the majority, IMHO) who are influenced by what they perceive as the established "wisdom of the educated". The question is then whether a study that has such a narrow outcome should play any significant role in a buying decision, and I understood your question as pretty much the same, only looking at it from the other direction: namely, whether a study with a more clear-cut result should play a role, and from what point on.

Either way, the question's purpose is obviously not to provoke a universal answer, but to make the reader consider where his/her confidence level would be.
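For what it's worth, the narrowness can be illustrated numerically. A minimal sketch (my own illustration, using the normal approximation to the binomial; the 52.3% figure comes from the discussion, while the sample sizes are picked arbitrarily) shows how the same accuracy rate flips from insignificant to highly significant purely with the number of trials:

```python
from math import erfc, sqrt

def p_one_sided(k, n):
    """Normal approximation to P(X >= k) under pure guessing (p = 0.5)."""
    z = (2 * k - n) / sqrt(n)  # standardized excess over chance
    return 0.5 * erfc(z / sqrt(2))

rate = 0.523  # overall proportion correct quoted in the thread
for n in (100, 1000, 10000):
    k = round(rate * n)
    print(f"n={n:5d}  k={k:5d}  p ~ {p_one_sided(k, n):.2g}")
```

Under these assumptions, 52.3% is nowhere near significant at 100 trials, borderline at 1,000, and overwhelming at 10,000. Significance here measures the amount of data, not the size of the effect a buyer would hear.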

But this is really just the type of hair-splitting argument which you are so fond of.

Quote
The same holds true for the press releases for Reiss's meta-analysis.
No, even when reading the entire press release, the message remains the same: this is the study the industry and audiophiles were waiting for, and it confirms their position. The reception of the press release I have seen across the internet picks up this message almost invariably. That's not their fault or distortion; that's exactly the gist of Reiss's message. It is not the result of his study, however.

It certainly doesn't look like an accident. Reiss isn't naïve; he knows what he's doing, I'm sure. One has to assume that the message as it was being picked up was pretty much the message he wanted to send.

Quote
I don't care so much about this topic as long as the same degree of "good will in interpretation" is applied in each case.
If, as is often the case, one faces the choice whether to assume malice or incompetence as the reason for an act, the saying goes that one should choose incompetence. I don't think, however, that this is always the interpretation that represents the "better will". ;)

Quote
Last time you said they addressed "audiophile claims", but in fact they didn't really specify what their target was. At least Meyer/Moran separated recording engineers claiming superiority from audiophiles claiming "whatever is still to be specified" ....
I don't think that this distinction matters much in the context we're in here.

Quote
And, as usual, if no research hypothesis is clearly specified, the test regime (and level of control) isn't as good as it should be.

Quote
That you don't see a problem is a bit surprising, because you expressed strong concerns (even invalidity) with respect to Reiss's meta-analysis because he did not focus on a "particular aspect like the wordlength" ........
The problem appears at the very moment when significance is found. Then, you would wish to know which particular aspect is responsible for the perceived difference. It is also of particular relevance in a meta analysis, because of their inherent sensitivity to "comparing apples with oranges".

Perhaps it surprises you (which I'm not sure of; your surprise may be a purely rhetorical device), but I don't find reason for surprise here. It is actually quite simple: if a study that tests all aspects (wordlength, sample rate) at once doesn't find any significance, none of the individual aspects has been shown to have significance. If significance is found, however, you don't know very much, because you still need to identify the reason. Had the M&M test turned out differently, they would have had the problem, but as we all know, that's not how it turned out. Reiss has the problem, and because his study is a meta-study, he has it already when choosing the base studies.

Quote
Notwithstanding that they (Meyer/Moran) combined wordlength and sampling frequency effects, they did not check if any "enhancement" was delivered at all.
If no difference is perceived, the question is moot whether there was an enhancement.

Quote
It depends on the hypothesis; if it was the vague sort of "audiophile claims" that you mentioned the last time, it must have been a claim like "if a disc is labelled as hi-res it will, under all circumstances, by all listeners, at all times, be perceived as better than a version downsampled to CD quality".
The discs used seem to have been examples of discs which audiophiles claimed to be audibly better than CD. The claims haven't been picked out of thin air by M&M.

And do I really have to rebuke your blatant exaggeration?

Quote
Because they did not check for "hi-res-ness", they did not really check the quality of reproduction, they did not provide positive controls, they did not really track which music was used in the trials, and so on.
And they mostly used a 10-trials-per-listener approach, which is surprising, because they should have known at least since Leventhal's articles about statistical power that a small number of trials carries a large risk of Type II error.
It is still very unlikely that, had there really been clearly audible differences between the original and the CD-downsampled version, they would have slipped through.

Besides, how do you check for "hi-res-ness" if people (claimants) have varying notions of what this means? Which definition should you pick? How would you test? M&M did the sensible thing: they avoided the question by taking material that was presented to them as being hi-res. The fact that some of it wasn't, according to some people's definition, shouldn't have prevented finding audibility at least with some of the material.

The accusation is unfair in the sense that M&M are being made responsible for something they are not responsible for, namely the vague definition of what constitutes high res.

If you think that it should be defined more stringently, and the material tested for compliance before being used in the test, then you ought to devise a corresponding test. It would be welcomed quite broadly, I trust.

Quote
I did not criticize their "path choice" but the fact that they did not take thorough measurements before starting the experiment and did not check routinely during it.
If the equipment "fault" you criticise should be the small linearity problem of one player that was discernible with one disk, I remind you to consider how and how much that could have compromised the result. I would not even go as far as calling this a fault, owing to its small scale. Using this as a pretext for dismissing the study is completely out of proportion, IMHO. I much rather have the impression that M&M dutifully rectified the problem once they became aware of it. I trust they would have questioned the results of their own study if they had come to conclude that the fault was of sufficient magnitude to affect the result.

Quote
Sorry, but first of all flaws must be mentioned
Which M&M did in the case of this player problem, at least as part of the supplementary information published on their website. Had they considered the problem relevant, I believe they would have described it in the paper itself.

Quote
, because bad science is bad science. It harms the reputation of science if these methodological flaws are belittled.
What you proposed would be a real "cheap get-out-of-jail approach" ;)
I could feign surprise here that you don't object to Reiss' usage of a number of papers without even mentioning their flaws, but knowing you, I won't.

Quote
Mhm, further research is recommended? Methinks Dr. Reiss did not just recommend further research, but could, based on his meta-analysis, give some advice on how to achieve better quality in experiments. :)
I wouldn't have a problem with that. :)

Quote
What you forgot to mention is that I criticized experiments with the same scrutiny even though (according to your assumptions) I should have liked the results. And you should have mentioned that I routinely recommended blind home listening experiments for any listener who wants to learn about his own perception, and that I gave a lot of advice on improving the quality of listening experiments.
This description is just as selective as you accuse mine of being. ;)

Quote
And you should have noticed that I (nearly always) emphasized, beside all the criticism, that Meyer/Moran's hypothesis ("hi-res" does not offer a perceivable difference or advantage compared to CD quality) might be true.
You typically made this look as if you were saying that some material may not offer such a perceivable difference; in other words, you have no guarantee that every hi-res file is superior to CD quality. That's not a very courageous assertion. In such cases, when things are asserted that should be self-evident, my mind can't help suspecting that there is another, subliminal message involved. ;)

Quote
Something that you forgot to mention when you liked the published findings (see for example Meyer/Moran, or did I miss it?) ...
I mentioned this only because you brought up the topic. I don't say that Reiss' paper is tainted by poor reviewing. It is not the reviewers who are responsible for Reiss' faults, and neither are they responsible for the faults in other papers. So why should I have mentioned it in the context of M&M, if the connection is coincidental rather than causal?

The problem is that such sloppy reviewing reduces the benefit of having a review in the first place. The designation of a paper as being "peer reviewed" may not mean much anymore.

Quote
I'm sorry, but the flaws in Meyer/Moran's experiment (all the more if the additional information is considered) are evident just from reading, provided the reviewer has any experience in DOE. No complicated analysis was needed to realize that.
In the case of Reiss's analysis, a reviewer must IMHO do a lot of work to find something.
It depends on how familiar you are with the references Reiss used. If you don't know any of the papers, and have to go through them to see how their content matches up with what Reiss makes of it, then indeed you have a lot of work on your hands.

Quote
Do we really treat questions of style as equally important as methodological flaws?
I certainly don't. I'm just wondering why this wasn't caught in review. In no way do I suggest that this amounts to a methodological flaw, or something of equal importance. Again: I can distinguish between the quality of a paper and the quality of the review.

Quote
You mean something like "he shouldn't have used 'important' but 'significant' instead", or "his press release overemphasized this or that", or that he (maybe) didn't notice that others had criticized an older variant of "ABX"?
Isn't it telling that in this thread we mainly discuss semantics instead of real errors in the analysis?
We are discussing both, but it is hard to avoid going into discussions about semantics when you are involved. Don't accuse anyone else of something that you bring with you. ;)

Quote
One poster even mentioned Leventhal's publication in the JAES because he felt that Dr. Reiss might have given the impression of being the first to talk about the importance of Type II errors!?
It wasn't me, I trust.

Quote
Sorry, pelmazo, you mentioned a specific issue and I addressed that specific issue. Please don't mix this specific case up with others.
I don't mix it up, I put it into perspective.

Quote
Shouldn't you complain instead that he was unfair to Kunchur? Without further discussion he excluded his test results from his meta-analysis. :)
He excluded them as two of 11 studies that were testing auditory perception resolution. Table 1 shows that quite clearly. There is a bit of discussion about this, and Reiss notes that they may suggest the underlying causes of discrimination, if there should be any. I am OK with this choice and its justification.

I am more critical of his usage of Kunchur's works as support for the suggestion that humans have a monaural temporal timing resolution of 5 µs. He uses language that keeps him neutral regarding these claims, but his presentation ignores all criticism that has been voiced. I don't think that's OK. Uncritical mentioning of dubious references increases their perceived credibility, without adding any argument or evidence in their favor.

Quote
He explained why Meyer/Moran got detailed remarks and why their result couldn't be used generally but only in parts of the analysis. Nothing wrong with that, and he even finds encouraging words (at least in my opinion) in stating:
"However, their experiment was intended to be close to a typical listening experience on a home entertainment system, and one could argue that these same issues may be present in such conditions."
I can't help suspecting that some of the given reasons for exclusion were used just because they happened to be available. In other studies, such information (for example the placement of the players) wasn't given, which of course doesn't mean that there couldn't have been a problem. The argument that the SACD obscures frequencies above 20 kHz is particularly peculiar, since he otherwise included studies that tested wordlength effects and had no extended frequency range, either.

It is true, however, that M&M didn't make sure that their material actually contained extended frequencies and/or extended dynamic range. The list of material is given on their supplementary website, but going through it and analyzing them would presumably have been excessively laborious. I don't see this as a valid criticism of their test, but it does justify excluding the test from the meta-analysis.

Quote
Maybe you just want him to be biased.
He could have expressed much stronger critique, but didn't, and you should be able to argue precisely where his reasoning for not including their results is wrong.
I don't criticise his decision to exclude M&M's test from the meta-analysis. I criticise his rather one-sided assessment of it.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: pelmazo on 2016-07-14 17:15:31
The issue I have is that studios were previously willing to do things like record in DSD or 20+ bits and then sell me a nicely decimated and noise-shaped 16-bit rendition on actual physical media, with a printed booklet, for $10-15. Now they want to forgo the media and the printing and charge me $20-30 to get a master they haven't deliberately screwed up for whatever reason. Stereo "hi-res" products should arguably cost *less* than CDs did, but when people buy into "bigger numbers sound better", the opposite ends up true.
You make the mistake of extrapolating selling prices from manufacturing cost. We're far from this model in many business areas, and you could argue that in the record business, it never was so.

Alternatively, you could say that the higher price of hires records is justified by their higher marketing cost. Having famous artists spout nonsense about how important hires is, in a youtube clip, doesn't come for free, for example.

The real bugger is when you get the same screwed-up version in either format, except for the price difference. The more widely hires penetrates the market, the more we will see this kind of scam, I fear. By going mass market, I think the hires industry is defeating itself in the end. The quality level will be as low as before, the reputation will be ruined at least as thoroughly as before, and the prices won't be kept up, either.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Wombat on 2016-07-14 17:35:29
M&M were even way ahead of their time! They disproved the en vogue DSD upsampling delusion. If the SACD layer was done from low-res sources, you should at least clearly hear the blackened blacks from the DSD conversion on the SACD layer ;)
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: rrod on 2016-07-14 18:38:40
You make the mistake of extrapolating selling prices from manufacturing cost. We're far from this model in many business areas, and you could argue that in the record business, it never was so.

Alternatively, you could say that the higher price of hires records is justified by their higher marketing cost. Having famous artists spout nonsense about how important hires is, in a youtube clip, doesn't come for free, for example.

The real bugger is when you get the same screwed-up version in either format, except for the price difference. The more widely hires penetrates the market, the more we will see this kind of scam, I fear. By going mass market, I think the hires industry is defeating itself in the end. The quality level will be as low as before, the reputation will be ruined at least as thoroughly as before, and the prices won't be kept up, either.

Yeah but "money for Rick Rubin to say we're getting better fidelity" isn't where I want my extra 10 simoleons to go. I would like it to go towards surround/object mixes and ending the loudness war, but the hi-res community is just bound and determined to demand new, horrible-sounding stereo mixes of albums that already have them, but with moar bitz :(

I do hope your last paragraph is right and that this all comes crashing down sooner rather than later. I know the average person I know doesn't care a whit about mixing and mastering issues, but certainly does care when extra cash comes out of their wallet. We shall see.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: krabapple on 2016-07-14 20:02:23
It depends on the hypothesis; if it was the vague sort of "audiophile claims" that you mentioned the last time, it must have been a claim like "if a disc is labelled as hi-res, it will under all circumstances, by all listeners, at all times, be perceived as better than a version downsampled to CD quality".

Because they did not check for "hi-res-ness", they did not really check the quality of reproduction, they did not provide positive controls, they did not really track which music was used in the trials, and so on.
And they used mostly a 10-trials-per-listener approach, which is surprising because they should have known, at least since Leventhal's articles about power, that a small number of trials is accompanied by a large risk of Type II error.


If the many, many, strongly reported claims of vast and obvious inherent audio superiority for hi rez over a span of decades -- the very lifeblood of its marketing -- were accurate reflections of reality, detecting a difference for hi-rez would not be a 'threshold' phenomenon. M&M's experiments (not to mention all the others used in Reiss's MA) would have revealed a difference with extremely robust statistical support, far stronger than what Reiss abstracted from even the best-case MA interpretation.
But they never have.  Ever.  Do you seriously think this is due to Type II error?

If not, you agree with Archimago's point, and why are you wittering on about this?


Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: krabapple on 2016-07-14 20:08:38
based on his meta-analysis give some advice to achieve better quality in experiments. :)
It's been over 20yrs. When can we expect these "better quality in experiments" from believers/peddlers of Hi-Re$ ?? Who is burdened with such proof?

Obviously, 'better quality experiments' are needed to reveal the marked improvement hi rez provides as a consumer delivery format.  It's really real.  The problem is that dozens of researchers over the years have simply not figured out the way to properly replicate what any audiophile and Stereophile reviewer hears in their untreated rooms listening to their hi rez recordings of various production provenance, over loudspeakers with vastly different performance.

As soon as we have those experiments, everything Sony and Philips and Neil Young claimed back when about SACD and DVDA will finally be true. Promise.



Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: greynol on 2016-07-15 06:30:43
It's been over 20yrs. When can we expect these "better quality in experiments" from believers/peddlers of Hi-Re$ ?? Who is burdened with such proof?
Apparently it's still on those of us who are skeptical, because we're
just the other camp of believers.
Remember, all failed attempts at finding unicorns are just type-II errors, because the wise and unbiased folks running the show over at the high-res cheerleading squad, the AES, know that unicorns do in fact exist.  Don't believe them?  See Hear for yourselves!
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: Thad E Ginathom on 2016-07-15 09:05:07
All these pages and none of us yet thought of stopping at the title. The reference to higher-sample-rate audio as "high-resolution."

This term, stolen from video by marketing men, is, simply, a non-technical marketing lie, isn't it?

But the success of those marketing men is such that even those who do not believe in it; even those who have the scientific education to know that it is wrong, yea, even unto the hallowed threads of hydrogenaudio, are using that term.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: drewfx on 2016-07-15 17:22:00
All these pages and none of us yet thought of stopping at the title.

I pretty much stop at the title: the idea that one would even need to do a meta-analysis is an admission that, at best, it's a borderline question applicable only to borderline cases.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: greynol on 2016-07-15 17:28:04
That's too vague.

Borderline cases meant hearing the effects of ultrasonic content in the audible band, without any interest (let alone effort) in narrowing down the cause.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: drewfx on 2016-07-15 17:48:00
Yes. By "at best" I mean that even if we ignore any obvious questions and give them every benefit of every possible doubt.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: greynol on 2016-07-15 17:55:30
And in doing so you've lost sight of the reality of the situation, which is exactly what industry advocates are preying upon.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: drewfx on 2016-07-15 18:22:48
No. I'm not conceding anything. I'm pointing out that their supposed proof, even if we were to accept it without reservation, doesn't match their claims of importance anyway.

By relying on a meta-analysis they are the ones conceding that, over a period spanning decades, they have not produced a single compelling study backing their claims that stands on its own.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: greynol on 2016-07-15 22:33:31
I'm concerned that it is the narrative that is being conceded; but yeah, this "meta-analysis" is a complete and utter joke, though I'm pretty sure I already made that abundantly clear.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: krabapple on 2016-07-16 00:34:55
I'm concerned that it is the narrative that is being conceded; but yeah, this "meta-analysis" is a complete and utter joke, though I'm pretty sure I already made that abundantly clear.


OK, sorry, I don't see how it's more of a 'joke' than any other meta-analysis (if you think MAs are a joke inherently, well, that's another debate).  It's actually well done; the only real issue would be if one were to hysterically over-interpret its rather meager (and unexplained) findings of difference, which the hi rez cheerleading squad are sure to do (and sadly Reiss himself already seems to have done, though mildly, with his press release statement).

You, me, and everyone we know here recognize that nothing in this paper supports the audiophile rhetorical party line, i.e., OMG veils lifted, creamier bass, it's like they're in the room with me now, even my wife could hear it, Redbook is 'low rez', etc.   Good luck transmitting that news to the public though.



Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: greynol on 2016-07-16 00:59:26
Well done?  Try over-done.  This whole thing reeks: generate buzz in order to promote the sales of uselessly bloated media (at a premium price) and equipment to play it on.  Yeah, it's a joke; a complete perversion.

Perhaps I do take issue with meta-analyses in general, but look at the collection of studies and tell me they weren't chosen specifically to strengthen the results of the BS "typical" filters report.  128kbit mp3 vs. hi-res?!? C'mon!
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Thad E Ginathom on 2016-07-16 07:24:56
I'm down-grading my "layman's view" to "complete waste of time, just to keep this stuff being talked about, which supports commercial interests."

Speaking of which, not that I am suggesting that this is one of those red-wine-is-good-for-you jobs, but was this work funded? Do people get paid for doing this stuff?
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: greynol on 2016-07-16 07:34:06
The author of this study undoubtedly gets paid to do lectures and has a vested interest in the health and well-being of the hi-fi industry (to which the AES is all but beholden) which apparently has chosen hi-re$ as its prime marketing strategy.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Jakob1863 on 2016-07-16 18:03:16
<snip>
If the many, many, strongly reported claims of vast and obvious inherent audio superiority for hi rez over a span of decades -- the very lifeblood of its marketing -- were accurate reflections of reality, detecting difference for hi-rez  would not be a 'threshold' phenomenon.

I agree to some degree (although pelmazo surely would call that point a blatant exaggeration :) ), but it would definitely be better to keep the various topics separated. Marketing hype is one topic and a perceivable difference (or even an advantage) is another one.

Beside "night and day differences", everything else is not easy to detect within a controlled listening experiment, provided we are talking about multidimensional perception. That is the reason why it is mandatory to incorporate positive controls (for other reasons negative controls as well), and any experimenter should think about listener accommodation and even training.


Quote
.....M&M's experiments (not to mention all the others used in Reiss's MA) would have revealed a difference with extremely robust statistical support, far stronger than what Reiss abstracted from even the best-case MA interpretation. 
But they never have.  Ever.  Do you seriously think this is due to Type II error?

"All the others" is an exaggeration, because there aren't that many, especially not sound attempts. Wrt Meyer/Moran I don't think it is due to Type II errors (in the statistical sense), but roughly 25 years after Leventhal's articles it is imho telling that we can't notice any improvement on this point. Furthermore, it was not ensured by the experimental guidelines that every participant really did 10 trials.
In their experiments not only the format was an independent variable; the locations, listeners and music tracks were variables too. Although the sample size seems big at first, it isn't once you consider the additional variables. M&M listed 19 different discs that were used during the tests (how many possible tracks in total? how many tracks were used?). In fact the article and the additional information could not answer that. AFAIR, in a forum one of the authors said one disc was used during roughly half of the trials. As apparently nobody else noted this (imho quite important) information, that might be correct or just a memory error.
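The power argument above can be made concrete with a quick back-of-the-envelope calculation (my own sketch; the numbers are illustrative, not taken from Reiss's paper or M&M's): with only 10 forced-choice trials, even a listener who genuinely identifies the hi-res version 60% of the time will almost never reach statistical significance.

```python
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n = 10  # trials per listener, as in the tests discussed above

# Smallest number of correct answers that beats chance at the 5% level
# (one-sided): P(X >= 8 | p=0.5) ~ 0.055 but P(X >= 9 | p=0.5) ~ 0.011,
# so a listener needs 9 of 10 correct.
k_crit = next(k for k in range(n + 1) if binom_tail(n, k, 0.5) < 0.05)

# Power for a listener whose true hit rate is 60%: the chance they
# actually score 9 or better is tiny, so the Type II error risk is huge.
power = binom_tail(n, k_crit, 0.6)
print(k_crit)           # 9
print(round(power, 3))  # 0.046 -> roughly a 95% chance of a false negative
```

In other words, a null result from a 10-trial test says very little about small real effects, which is exactly Leventhal's point about statistical power.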


Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Jakob1863 on 2016-07-16 18:17:47
<snip>
OK, sorry, I don't see how it's more of a 'joke' than any other meta-analysis (if you think MAs are a joke inherently, well that's another debate).  It's actually well done, the only real issue would be if one were to hysterically over-interpret its rather meager (and unexplained)  findings of difference, which the hi rez cheerleading squad are sure to do ......

That is a reasonable assessment and I don't really understand the nearly "hysteric" critique. If we think that "the press" (and forums as well) routinely over-interpret, which clearly also happened when M&M presented their results, why this wittering about it?

Quote
....(and sadly Reiss himself already seems to have done, though mildly, with his press release statement)

It was so mild that "sadly" is already an exaggeration...

Quote
You, me, and everyone we know here recognize that nothing in this paper supports the audiophile rhetorical party line, i.e., OMG veils lifted, creamier bass, its like they're in the room with me now, even my wife could hear it, Redbook is 'low rez', etc.   Good luck transmitting that news to the public though.

Please help me wrt this forum. This section is labelled as "scientific discussion" and the topic was Dr. Reiss's meta-analysis. The topic wasn't any "audiophile rhetorical party line", nor wild speculations about financial interests or mass market influences. What is going on??

P.S. OK, given the recent renaming of the thread, I understand that at least one of the moderators must have super powers and therefore KNOWS the TRUTH. Please forgive the sarcasm, but "scam" in the new title?

Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: pelmazo on 2016-07-17 08:45:18
I agree to some degree (although pelmazo surely would call that point a blatant exaggeration :) ) , but it would definitively better to keep the various topics seperated. Marketing hype is one topic and a perceivable difference (or even advance) is another one.
I don't call krabapple's sentence an exaggeration. I think he's spot on. The exaggeration is wholly on the side of the audiophile marketing.

I'm sure most participants here are able to distinguish between marketing and science at least as well as Reiss himself, but the case at hand shows how they are linked. So why shouldn't this be a topic for the discussion here in this thread?

Quote
Beside "night and day differences" everything else is not easy to detect within a controlled listening experiment, provided we are talking about multidimensional perception. That is the reason why it is mandatory to incorporate positive controls (for other reasons negative controls as well), and any experimenter should think about listener accomodation and even training.
Please explain "multidimensional perception".

Are you saying that something that is easily perceivable under normal conditions is all of a sudden difficult to detect in a controlled listening experiment? If so, I want to know why, including hard evidence that it is so, and I want it explained without waffling, please.

Quote
"all the others" is an exaggeration, because there aren´t that many, especially not sound attempts. Wrt Meyer/Moran i don´t think it is due to Type II errors (in the statistical sense), but roughly 25 years after Leventhals articles it is imho telling that we can´t notice any improvement in this point. Furthermore if it was not given by experimental guidelines that every participants really did 10 trials.
Given how warmly Leventhal's contribution was received by audiophile apologists, I'd say it is telling how little the audiophile side has made of it. If they really believed that Leventhal's objection was crucial, one would expect them to design an experiment accordingly and present the much-desired result. Instead, we have you and others complaining that nobody listens to Leventhal. One could be excused for suspecting that you don't really trust that this would make a difference, and prefer it not to be tested in earnest, because as long as it remains so, you can use Leventhal to criticise others from the comfort of your armchair.

Quote
In their experiments not only the format was an independent variable but the locations, listeners and music tracks were variables too. Although the sample size seems to be big at first, if you consider the additional variables it isn´t. M&M listed 19 different discs that were used during the tests (how many possible tracks in total? how many tracks were used?) In fact the article and additional information could not answer that. Afair in a forum one of the authors said one disc was used during roughly the half of the trials. As apparently nobody noted this (imho quite important information) that might be correct or just a memory error.
Wait a minute! We are discussing a meta study here that attempts to combine several studies whose experiments were even more diverse, and the variables were even more varied. Yet it is M&M that you are criticising. Do you not realize how blinkered this comes across?

That is a reasonable assessment and I don't really understand the nearly "hysteric" critique. If we think that "the press" (and forums as well) routinely over-interpret, which clearly also happened when M&M presented their results, why this wittering about it?
I have got some more criticism than krabapple, as already presented, so I believe he's a bit on the clement side. Reiss's tendency to reference and present even wacko scientists without any mention of the criticism they have attracted is something I find both telling and unacceptable.

I'm glad, however, that you find the critique only "nearly hysteric". That already sets it apart from the way the M&M study was and is being received in the audiophile press. That's not only hysteric, it is sometimes bordering on character assassination.

Quote
Please help me wrt this forum. This section is labelled as "scientific discussion" and the topic was Dr. Reiss´s meta-analysis. The topic wasn´t any "audiophiles rhetorical party line" nor wild speculations about financial interests or mass market influences. What is going on ??
It is not unscientific to look at the context, is it? ;)

Quote
P.S. Ok given the recent restatement of the thread title, i understand that at least one of the moderators must have super powers and therefore KNOWS the TRUTH. Please forgive the sarcasm, but "scam" in the new title?
I wouldn't have renamed it, but greynol (assuming it was him) certainly has superpowers, and he knows the truth much better than many others, if only in lowercase. ;)
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: ajinfla on 2016-07-17 12:44:36
No, they (M&M) didn't issue a press release; instead they used a sort of guerrilla marketing in forums to promote their publication.

i don´t really understand the nearly "hysteric" critique.

Please help me wrt this forum. This section is labelled as "scientific discussion" and the topic was Dr. Reiss´s meta-analysis. The topic wasn´t any "audiophiles rhetorical party line" nor wild speculations about financial interests or mass market influences. What is going on ??
That's a good question. Are you just a believer, or do you have some skin in the Hi-Re$ game?
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: greynol on 2016-07-17 22:00:42
greynol [...] certainly has superpowers
Any statistical book reports attempting to imply otherwise will only be fraught with type-II errors.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: krabapple on 2016-07-17 22:38:03
Beside "night and day differences" everything else is not easy to detect within a controlled listening experiment, provided we are talking about multidimensional perception.

And yet rather easy for the eager consumer or Stereophile reviewer to detect, eh?  After all, we have tons of testimony to that effect.


Quote
That is the reason why it is mandatory to incorporate positive controls (for other reasons negative controls as well), and any experimenter should think about listener accomodation and even training.


You're really not getting the point.


Quote
Quote
.....M&M's experiments (not to mention all the others used in Reiss's MA) would have revealed a difference with extremely robust statistical support, far stronger than what Reiss abstracted from even the best-case MA interpretation. 
But they never have.  Ever.  Do you seriously think this is due to Type II error?

"all the others" is an exaggeration, because there aren´t that many, especially not sound attempts.

Be more pedantic, why don't you?  OK then: the experiments that used ostensibly 'hi rez' sources and compared them to standard-rate audio.

Quote
Wrt Meyer/Moran i don´t think it is due to Type II errors (in the statistical sense), but roughly 25 years after Leventhals articles it is imho telling that we can´t notice any improvement in this point.

It's especially notable that the hi rez cheerleading side haven't provided any such experiments, given that they seized on Type II errors as their savior back in the day and will do so again per Reiss.  But then again, the most vocal of them were also opposed to blind testing, period, for the longest time.  I'm sure they'll be OK with it now though, since all (be sure to check me on that) of the work he cites used blind protocols of some type.

(But then *again*, IIRC Reiss did mention the 'cognitive load' line with a straight face -- another effect that seems to operate only when doing comparisons under experimental conditions, rather than when carefully auditioning a new SACD or DVDA or HDtracks download for review -- so you never know.)

Quote
In their experiments not only the format was an independent variable but the locations, listeners and music tracks were variables too. Although the sample size seems to be big at first, if you consider the additional variables it isn´t. M&M listed 19 different discs that were used during the tests (how many possible tracks in total? how many tracks were used?) In fact the article and additional information could not answer that. Afair in a forum one of the authors said one disc was used during roughly the half of the trials. As apparently nobody noted this (imho quite important information) that might be correct or just a memory error.

Who cares?  It certainly didn't matter to the people whose reports M&M were addressing... the people who were ALREADY claiming to hear easily notable differences when auditioning even *analog-sourced* SACDs/DVDAs and their Redbook counterparts... and ALREADY attributing it to 'hi rez' rather than, e.g., different mastering, different playback levels, sighted bias, etc.

M&M took these people at their word.  And *after* that, these people picked up the goalposts and marched downfield, proclaiming that only pure hi rez would suffice for proof, thanks very much.




Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: ajinfla on 2016-07-18 00:06:31
Quote
Furthermore, though the causes are still unknown - Reiss

That is the reason why it is mandatory to incorporate positive controls

What positive controls?? Be specific.
That or put on your Hi-Res glasses and reread what Reiss wrote.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Arnold B. Krueger on 2016-07-18 12:24:32

<snip>
First, you should not use "important" when you mean "significant" (in the statistical sense).

To quote Dr. Reiss from the AES press release:
“Audio purists and industry should welcome these findings,” said Reiss. “Our study finds high-resolution audio has a small but important advantage in its quality of reproduction over standard audio content. Trained listeners could distinguish between the two formats around sixty percent of the time.”

The press release was not peer reviewed. Reiss has said that he takes sole responsibility for it.  Therefore its credibility is very limited; it seems to be one man's opinion.  This pleases me because, as an AES member, I take some responsibility for AES peer-reviewed papers, but not for press releases.

Quote
52.3% would qualify as statistically significant but of very limited practical relevance; 60% is usually considered to be of practical relevance.

The meaning of relevance is circumstantial.  For example, a new medication that saves lives 60% of the time it is administered would be considered by medical science to have some value. A driverless car that avoids causing a serious accident only 60% of the time it is driven around town would probably be considered worse than useless.

I have previously documented 4 failures of working (in a signals analysis sense) high resolution audio technologies to be commercially viable.

I think that we need to weigh our experimental results in terms of their practical impact.

We have practical examples of media performance upgrades that seem to have been generally successful that as far as I know have not been subjected to the kind of precise scrutiny that high res audio has received:  Vinyl and consumer analog tape -> CD audio  and  S-VHS audio/video -> DVD A/V AKA  MPEG-2 video + Dolby Digital audio.

It seems to me that if people want the kind of commercial success that CD Audio and DVD audio/video have obtained in the marketplace, a comparable audible quality improvement over CD audio and DVD audio needs to be provided.
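As a side note on the "52.3% would qualify as statistically significant" remark quoted above: significance at such a small margin is purely a function of the pooled number of trials. A rough normal-approximation sketch (my own figures, not numbers from Reiss's paper):

```python
def trials_needed(p_obs, z=1.96):
    """Approximate number of binomial trials needed for an observed
    proportion p_obs to differ from chance (0.5) at the 5% level
    (z ~ 1.96), using the normal approximation: the standard error
    under the null hypothesis is 0.5 / sqrt(n)."""
    return (z * 0.5 / (p_obs - 0.5)) ** 2

print(round(trials_needed(0.523)))  # 1816 -> 52.3% needs ~1800 pooled trials
print(round(trials_needed(0.60)))   # 96   -> 60% needs fewer than 100 trials
```

This is why pooling many individually inconclusive studies can yield a "significant" result even when the per-listener effect remains negligible.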


Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Arnold B. Krueger on 2016-07-18 12:51:21
You, me, and everyone we know here recognize that nothing in this paper supports the audiophile rhetorical party line, i.e., OMG veils lifted, creamier bass, its like they're in the room with me now, even my wife could hear it, Redbook is 'low rez', etc.  Good luck transmitting that news to the public though.

The paper was peer reviewed and says approximately what you say - "...Nothing in this paper supports the audiophile rhetorical party line, i.e., OMG veils lifted, creamier bass, its like they're in the room with me now, even my wife could hear it, "

However we can't separate it from Reiss's AES press release:

http://www.aes.org/press/?ID=362 (http://www.aes.org/press/?ID=362)

"Research Finds Audible Differences with High-Resolution Audio"

“Audio purists and industry should welcome these findings,” said Reiss. “Our study finds high-resolution audio has a small but important advantage in its quality of reproduction over standard audio content.”

Quote
Please help me wrt this forum. This section is labelled as "scientific discussion" and the topic was Dr. Reiss´s meta-analysis. The topic wasn´t any "audiophiles rhetorical party line" nor wild speculations about financial interests or mass market influences. What is going on ??

Please help with your reading ability.

How does "Audio purists and industry should welcome these findings," said Reiss

fail to support "...audiophile rhetorical party line ... wild speculations about ... mass market influences"?
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Arnold B. Krueger on 2016-07-18 12:57:28

Besides "night and day differences", everything else is not easy to detect within a controlled listening experiment, provided we are talking about multidimensional perception. That is the reason why it is mandatory to incorporate positive controls (and, for other reasons, negative controls as well), and any experimenter should think about listener accommodation and even training.

What is your authority for making the claim that "Besides "night and day differences", everything else is not easy to detect within a controlled listening experiment, provided we are talking about multidimensional perception"?

I've done many experiments related to multidimensional perception in controlled listening experiments, and the difference is often easy to accurately identify. Based on your former comments, I don't think you have any hands-on experience in this area, nor can you cite any relevant authoritative publications to support your claims. Like Reiss' press release, your comments appear to be just one (inexperienced and uneducated) man's opinion.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Arnold B. Krueger on 2016-07-18 13:37:50

To quote Dr. Reiss from the AES press release:
"Audio purists and industry should welcome these findings," said Reiss. "Our study finds high-resolution audio has a small but important advantage in its quality of reproduction over standard audio content. Trained listeners could distinguish between the two formats around sixty percent of the time."

52.3% could qualify as statistically significant given enough trials, yet be of very limited practical relevance; 60% is usually considered to be of practical relevance.
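As a side note on the statistics: whether a given hit rate is statistically significant depends on how many trials back it up, not on the percentage alone. A minimal sketch in Python (the trial counts below are illustrative, not taken from any of the cited studies):

```python
from math import comb

def p_value_at_least(correct: int, trials: int) -> float:
    """Exact one-sided binomial p-value against 50% guessing:
    the probability of scoring at least `correct` out of `trials`
    by chance alone.  Uses integer arithmetic throughout, so large
    trial counts do not underflow."""
    hits_or_better = sum(comb(trials, k) for k in range(correct, trials + 1))
    return hits_or_better / 2 ** trials

# The same ~52-55% hit rate is meaningless or significant
# depending on the number of trials behind it:
print(p_value_at_least(11, 20))      # ~0.41: indistinguishable from guessing
print(p_value_at_least(1046, 2000))  # below 0.05: significantly above chance
```

This is also why a meta-analysis can reach significance that no single small study could: pooling trials shrinks the p-value for the same underlying hit rate.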

If one understands how experiments and statistics work, one would not say that a test with 60% correct responses is the same as  "distinguish(-ing) between the two formats around sixty percent of the time.”

The first thing to remember is that in an experiment with 2 alternatives, 50% correct responses indicates 100% random guessing.

In contrast, an experiment with 100% correct responses indicates 0% random guessing.

The thing to take away from these two facts is that there is a sliding scale: as the percentage of correct responses ranges from 50% to 100%, the fraction of trials on which listeners genuinely distinguish the two alternatives ranges from 0% to 100%.

Listeners who get correct responses only 60% of the time are mostly guessing randomly. Correcting for guessing, only 20% of their trials (one in five) reflect genuine discrimination; on the remaining 80% they are flipping a coin, and half of those guesses come out correct by chance.

If you try to apply these results to the real world, they might be very depressing. For example, if you buy five high-resolution recordings, then based on these statistics only about one of them will sound different, or they will all sound different only a fifth of the time, or something in between. I don't foresee anybody selling a lot of recordings based on performance like that.
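The arithmetic above is the standard correction for guessing in a two-alternative test: if a listener genuinely discriminates a fraction d of trials and flips a coin on the rest, the observed hit rate is p = d + (1 - d)/2, so d = 2p - 1. A minimal sketch (illustrative numbers, not data from the paper):

```python
def true_discrimination_rate(hit_rate: float, chance: float = 0.5) -> float:
    """Correction for guessing: solve hit_rate = d + (1 - d) * chance
    for d, the fraction of trials genuinely discriminated."""
    return (hit_rate - chance) / (1 - chance)

# 60% correct in a 2-alternative test -> only 20% of trials
# genuinely discriminated; the listener guesses on the other 80%.
print(true_discrimination_rate(0.60))  # ~0.2
# Sanity checks: 50% correct is pure guessing, 100% correct is no guessing.
print(true_discrimination_rate(0.50), true_discrimination_rate(1.00))  # 0.0 1.0
```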

Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Jakob1863 on 2016-07-20 13:06:41
<snip>
But this is really just the type of hair-splitting argument which you are so fond of.

Seems that I'm not alone .....  :)

Quote
No, even when reading the entire press release the message remains the same: this is the study the industry and audiophiles were waiting for, and it confirms their position. The reception of the press release I have seen across the internet picks up this message almost invariably. That's not their fault or distortion; that's exactly the gist of Reiss's message. It is not the result of his study, however.

Especially when reading the entire press release, the message is quite different. Besides the central point that the two formats can be differentiated, further research is recommended (and needed).

Quote
If, as is often the case, one faces the choice whether to assume malice or incompetence as the reason for an act, the saying goes that one should choose incompetence. I don't think, however, that this is always the interpretation that represents the "better will". ;)

Well, wrt Meyer&Moran you obviously did not only choose between "incompetence and malice" but followed another route. It would be only fair to address the same "good will" to Reiss's work.

Quote
I don't think that this distinction matters much in the context we're in here.

It shows that they not only wanted to address some (unspecified) exaggerated audiophile claims, but any asserted benefit in reproduction as well.

Quote
The discs used seem to have been examples of discs which audiophiles claimed to be audibly better than CD. The claims haven't been picked out of thin air by M&M.

And do I really have to rebuke your blatant exaggeration?
I haven't found any information on whether any of those "audiophiles claiming outrageous things" were participating in the tests.
Meyer&Moran confirmed the audiophile claims wrt the sound quality of the "hi-res material" (obviously they and most of the supporters of their experiment had no problems with this categorical statement although no controlled test was used), and the methodological flaws prevent further conclusions.

It wasn't really a "blatant exaggeration" (pointed emphasis maybe); that's why I listed the factors that led to this wording.

Quote
The problem appears at the very moment when significance is found.

Which in other words means that the validity of this research approach depends on the result? Worth reconsidering ....

Quote
Then, you would wish to know which particular aspect is responsible for the perceived difference. It is also of particular relevance in a meta analysis, because of their inherent sensitivity to "comparing apples with oranges".

One should normally wish to know this also in the case of a retained null hypothesis, especially if an experiment (like Meyer&Moran apparently did) compares "apples with oranges" too and lacks the advantage of an extended data pool.

Quote
Perhaps it surprises you (which I'm not sure of, your surprise may be a purely rethorical device), but I don't find reason for surprise here.

Please try to avoid using Schopenhauer's list...
My surprise was based on the fact as stated, and your argument causes additional amazement.

Quote
It is actually quite simple: If a study that tests all aspects (wordlength, samplerate) at once doesn't find any significance, none of the individual aspects have been shown to have significance.

This conclusion is not warranted; it could be if all effects were balanced and the sample size were sufficient, but most probably neither condition held.

Quote
It is still very unlikely that - had there really been clearly audible differences between original and CD downsampled version - they would have slipped through.

Which way did you calculate "unlikely" based on the information available?

Quote
Besides, how do you check for "hi res ness" if people (claimants) have varying notions of what this means? Which definition should you pick? How would you test? M&M did the sensible thing: They avoided the question by taking material that was being presented to them as being hi res. The fact that some of it wasn't, according to some people's definition, shouldn't have prevented finding audibility at least with some of the material.

As nobody noted the specific material used in the trials, how should one know? The number of trials per listener was obviously too small, so missing a perceivable difference was quite likely.

Quote
The accusation is in the sense unfair, that M&M are being made responsible for something they are not responsible for, namely the vague definition of what constitutes high res.

The vague definition is not their fault; letting the vagueness influence the results .... is .....

Quote
If the equipment "fault" you criticise should be the small linearity problem of one player that was discernible with one disk, I remind you to consider how and how much that could have compromised the result. I would not even go as far as calling this a fault, owing to its small scale.

Please keep in mind that your consideration is based on purely anecdotal description. That's why I wrote "no thorough measurement".
I don't know what other defects they might not have noticed, besides the problem with low-level linearity.

Quote
Using this as a pretext for dismissing the study is completely out of proportion, IMHO. I much rather have the impression that M&M dutifully rectified the problem once they became aware of it. I trust they would have questioned the results of their own study if they had come to conclude that the fault was of sufficient magnitude to affect the result.

Which M&M did in the case of this player problem, at least as part of the supplementary information published on their website. Had they considered the problem relevant, I believe they would have described it in the paper itself.

Not providing thorough measurements before and in between is the flaw. And associated with this is the problem that nobody noted the number of trials done with this equipment.
We should not be expected to trust their judgment about what to mention; everything must be mentioned. The methodological requirements were based on the insight that experimenter bias could be a possible confounder. At least Brad Meyer was strongly biased against any perceivable difference between CD and "hi res".

Quote
I could feign surprise here that you don't object to Reiss' usage of a number of papers without even mentioning their flaws, but knowing you, I won't.

Please refrain from using Schopenhauer's list.
I was talking about quite obvious flaws in the case of Meyer/Moran.
If you have anything comparable wrt the papers Reiss used in his meta-analysis, please be specific.

Up to now I agree that the paragraph mentioning Krumbholz's article was misleading/wrong, because it does not support Kunchur's conclusion.

Quote
This description is just as selective as you accuse mine to be. ;)

At least slightly less so, as I added some points to yours. :)

Quote
You typically made this look as if you were saying that some material may not offer such a perceivable difference, in other words you have no guarantee that every hi res file is superior to CD quality.

That is plain wrong, as direct quotes from various posts on this topic could certify.
Please refrain from these eristic tactics.

Quote
I mentioned this only because you brought up the topic.

In fact you brought up this topic in your blog.

Quote
I don't say that Reiss' paper is tainted by poor reviewing. It is not the reviewers who are responsible for Reiss' faults, and neither are they responsible for the faults in other papers. So why should I have mentioned it in the context of M&M, if the connection is coincidental rather than causal?

First, you haven't so far presented any flaw in Reiss's meta-analysis.
Second, you mentioned that the AES (Journal) has problems with the quality of the review process, and you did it wrt Reiss's article. Maybe you meant it not exclusively; I could assume that the paper from Jackson et al. was also affected.

Quote
The problem is that such sloppy reviewing reduces the benefit of having a review in the first place. The designation of a paper as being "peer reviewed" may not mean much anymore.

Of course. But the critic should present some real flaws and should not only target publications with results the critic does not like.
Quote
I don't mix it up, I put it into perspective.

Sorry, but no.
You accused Reiss on a specific point, I showed that you weren't right, and instead of admitting it, you raised another point.

Quote
I am more critical of his usage of Kunchur's works as support for the suggestion that humans have a monaural temporal timing resolution of 5 µs. He uses language that keeps him neutral regarding these claims, but his presentation ignores all criticism that has been voiced. I don't think that's OK. Uncritical mentioning of dubious references increases their perceived credibility, without adding any argument or evidence in their favor.

Kunchur got a lot of unfair criticism.
AFAIR I criticized his articles because the test results might have been based on spectral cues (although below the usual limit) and because his conclusions about the need for higher sampling rates were not warranted.
As Reiss was "neutral" in his language (he used "suggested" instead of "showed"), that makes clear that research is still going on.



Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: ajinfla on 2016-07-20 16:02:17
Besides the central point that the two formats can be differentiated, further research is recommended (and needed).
Right, the 4x price of Hi-Re$ has always "differentiated" it from "standard" audio. Zero research is needed there.
If you are referring to Reiss's "unknown reasons", then yes you believers/peddlers owe us some more research. It's been 20+ years, where is it?
How is paying 4x price for "unknown reasons" an "advantage" as Reiss speciously claims? What advantage???

That's why I wrote "no thorough measurement". I don't know what other defects they might not have noticed
Not providing thorough measurements before and in between is the flaw.
Agreed, so let's see the measurements for every test Reiss picked, particularly the BS test, using direct radiator beryllium dome tweeters driven to very high levels. Jakob2, all associated measurements now please.
 
Up to now i agree that the paragraph mentioning Krumbholz´s article was misleading/wrong, because it does not support Kunchur´s conclusion.
Like you support his conclusions (http://www.diyaudio.com/forums/lounge/280626-worlds-best-dacs-109.html#post4550224) . Well that puts you at odds with Krumbholz too.


No, they (M&M) didn't issue a press release; instead they used a sort of guerilla marketing in forums, promoting their publication.

Please refrain from using Schopenhauer's list.

Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: pelmazo on 2016-07-20 18:30:03
Seems that I'm not alone .....  :)
It seems you are getting Schopenhauer-obsessed. The more you accuse me, the more you are guilty yourself. ;)

Quote
Especially when reading the entire press release, the message is quite different. Besides the central point that the two formats can be differentiated, further research is recommended (and needed).
You clearly have your own way of reading, which differs from most everyone else. I can't remember anyone referring to the press release having put the request for more research in front as much as you do. I'm afraid, I won't follow your spin here.

I am quite convinced that Reiss knows enough about the marketplace, and the audiophile scene, to guess at the kind of reaction he would get to the press release, and I believe that his choice of prose was a conscious and deliberate attempt at evoking this kind of response. I simply can't bring myself to believe that he conveyed this message inadvertently. He simply presses too many audiophile buttons.

And regarding the request for more research, please give me a break. Firstly, after 20 years of failure to come up with convincing research, this can't remove the great embarrassment of the hires lobby. And secondly, a request for more research is a zero content set phrase that can be added to any research paper. It can be read as: "Please give me another research grant". There's no point in reading anything particular into it.

Quote
It would be just fair to adress the same "good will" to Reiss´s work.
I assume good will at first when looking at a matter; it was no different here. However, I reserve the right to make my judgments after having read the material, and I don't see why I shouldn't let you know the result.

From the paper alone, I would still grant Reiss good will, even though there are a number of hints showing his bias. The press release, however, makes it quite clear, and leaves little doubt.

Quote
I haven't found any information on whether any of those "audiophiles claiming outrageous things" were participating in the tests.
Meyer&Moran confirmed the audiophile claims wrt the sound quality of the "hi-res material" (obviously they and most of the supporters of their experiment had no problems with this categorical statement although no controlled test was used), and the methodological flaws prevent further conclusions.
I have no problem with their statement, either. They refer to the material as hires because everyone else at that time did. They didn't see it as their job to change the terminology. They wouldn't have escaped this sort of criticism anyway.

It doesn't matter if the audiophiles who claimed audibility were all participating or not. One of those who spurred the motivation for the test was Stuart, and he didn't participate nor would I have expected him to participate. He certainly knows that he can't win anything by participating.

This criticism you aim at M&M is both petty and beside the point. The reason why this test is drawing so much vitriol has nothing to do with its flaws, but with its success: It splinters the audiophile apologists into several different camps according to the reasons they proffer for the failure of the test to confirm their claims. Those who criticise that the material wasn't really hires implicitly accept that most of the material that is being sold as hires actually isn't. In other words they admit that the hires movement is riddled with fraud. And those who accept the material as legitimate hires have to scrape together all sorts of phony reasons why the test was unable to reveal something that should have been obvious. Either way is an embarrassment.

Quote
Which in other words means that the validity of this research approach depends on the result? Worth reconsidering ....
Well, yes, it most certainly does. That's not at all surprising or nefarious.

Take an election for an analogy. If after an election, you find procedural faults, which is quite common, you wouldn't call the election invalid unless the procedural faults had the potential to alter the outcome of the election. The court that is called to assess the complaints routinely tries to work out how much an election result could have been distorted by the faults. If the magnitude of the distortion wouldn't have allowed a changed result, i.e. a different winner or a different number of seats for each party, then the fault is typically considered benign and the election result is upheld.

Coming back to our case, it means that if you have a flaw that may lead to false positives, and the result of the study was that the null hypothesis couldn't be rejected, then the flaw can be regarded as benign and the result is upheld. If on the other hand the result was that the null hypothesis was rejected, such a flaw may have the potential to change the result and can't be ignored. In this sense, the impact of a flaw on the validity of the research does indeed depend on the result.

It is quite obvious, really.

Quote
One should normally wish to know also in the case of a retained null hypothesis, especially if an experiments (like Meyer&Moran apparently did) compares "apples with oranges" too and lacks the advance of an extended data pool.
Sorry I can't follow you here. You're saying that you should wish to know which of the factors were responsible for non-audibility? Well, all of them I would have thought.

Well, you know, it isn't all that difficult. If there wasn't an audible difference, and the test concludes that the null hypothesis couldn't be rejected, everything is in agreement and there really isn't much point in all that huffing and puffing.

Quote
This conclusion is not warranted; it could be if all effects were balanced and if the sample size were sufficient, but both points were most probably not given.
I don't understand. Are you saying that each factor alone could have been significant but not both together? Sorry, but this hair-splitting sophistry would best be replaced by a "proper" listening test that avoids the alleged flaws.

Quote
As nobody noted the specific material used in the trials, how should one know? The number of trials per listener was obviously too small, so missing a perceivable difference was quite likely.
How did you calculate "quite likely"? ;)
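This is a question of statistical power, and it can be estimated rather than asserted. A minimal sketch in Python of the probability that a short binomial test misses a genuine but imperfect discriminator (the 70% true hit rate and the trial counts are illustrative assumptions, not figures from Meyer & Moran):

```python
from math import comb

def power(true_hit_rate: float, trials: int, alpha: float = 0.05) -> float:
    """Power of a one-sided binomial test against 50% guessing:
    the probability that a listener with the given true hit rate
    scores well enough to reach significance at level `alpha`."""
    # Smallest number of correct answers a pure guesser reaches
    # with probability <= alpha.
    threshold = next(c for c in range(trials + 1)
                     if sum(comb(trials, k) for k in range(c, trials + 1))
                     / 2 ** trials <= alpha)
    # Probability that the genuine discriminator reaches that threshold.
    return sum(comb(trials, k)
               * true_hit_rate ** k * (1 - true_hit_rate) ** (trials - k)
               for k in range(threshold, trials + 1))

# A listener who truly hears the difference on 70% of trials:
print(power(0.7, 10))   # ~0.15: a 10-trial test misses them ~85% of the time
print(power(0.7, 100))  # ~0.99: a 100-trial test almost never does
```

So "quite likely" is calculable once one commits to an assumed true hit rate and a trial count; without those inputs, the claim is just an assertion.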

Quote
The vague definition is not their fault, letting the vagueness influence the results.... is.....
I don't believe the supposition that it did influence the result. Some people seem to take that as a given, but as usual without giving any evidence. Again: The right way of settling this is with an improved test. I know which way I would bet.

Quote
Please keep in mind that your consideration is based on pure anecdotical description. That´s why i wrote "no thorough measurment" .
I don´t know what other defects they might have not noticed, beside the problem with low level linearity.
I have never ever seen a research paper that presented all such measurements and other information. You are trying to set the bar so high that nobody can reach it anymore. Had this been the criterion for Reiss when including or rejecting a paper, the set of papers would have been empty.

Your intellectual dishonesty is quite apparent here.

Quote
If you have anything comparable wrt papers Reiss used in his meta-analysis, please be specific.
The choice of papers represents a much more varied set of methodological choices than the variations in M&M. That is more than obvious. If this is a flaw with M&M, it is a disaster with Reiss. Isn't that sufficient already?

Quote
Up to now i agree that the paragraph mentioning Krumbholz´s article was misleading/wrong, because it does not support Kunchur´s conclusion.
Ah, good! I'm quite surprised you admit this, given your past record. Now, if I only could get you to admit that Kunchur's conclusion isn't even supported by his own research.

Quote
First, you haven´t so far presented any flaw in Reiss´s meta-analysis.
My main criticism was with the discrepancy between the paper and the press release, this is true. I did and do have some criticisms of the paper itself, too, which you may not have noticed. I have some more which I haven't yet posted. Nevertheless, I am close to Archimago in my view of the paper, who agrees with the conclusion, provided it is read in the right way.

I may phrase it in a more pointed way than Archimago, though, when I state that the result should be regarded as another mosaic stone in the overall picture that shows convincingly that there's no point to HRA, no need and no benefit.

Quote
Second, you mentioned that the AES (Journal) has problems with the quality of the review process and you did it wrt to Reiss´s article. Maybe you meant it not exclusively; i could assume that the paper from Jackson et al. was also affected.
Jackson et al. isn't a journal paper; it is a convention paper that has only been peer reviewed as a precis. I consider this to be quite a significant difference. Unfortunately, the difference tends to get missed by the public. The effect is even more damage to the concept of peer review. IMHO Jackson et al. serves as a good example of the vulnerability of this system to abuse.

So, yes, Jackson somehow was affected, too, keeping sight of the differences. That's why I have the impression that the AES review process has developed a problem, which I wouldn't have diagnosed from a single event.

Quote
Kunchur got a lot of unfair criticism.
Undoubtedly. Like everybody else. That doesn't change the fact that there was a lot of factual and justified criticism. And, I also have to say that the way Kunchur dealt with it (or rather, didn't) did nothing to improve his standing.

Quote
AFAIR I criticized his articles because the test results might have been based on spectral cues (although below the usual limit) and because his conclusions about the need for higher sampling rates were not warranted.
I don't know whether or when you criticised it. I remember JJ raising this point very early on. I even hold that Kunchur's second experiment shows the opposite of what he concludes, so he seems to have disproven his point without realizing it.

Quote
As Reiss was "neutral" in his language use, (he used "suggested" instead of "showed" ) that makes clear that research is still going on.
The jury is out on whether this is neutral language. This sort of language often gets used to suggest a biased view while avoiding taking sides openly. It depends on the context whether it really is neutral in spirit as much as in prose.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: 2Bdecided on 2016-07-21 00:31:59
I've done many experiments related to multidimensional perception in controlled listening experiments, and the difference is often easy to accurately identify. Based on your former comments, I don't think you have any hands-on experience in this area, nor can you cite any relevant authoritative publications to support your claims.
Those of us who do such tests find an interesting thing: there are differences so subtle that we're not sure we can hear them at all in normal sighted listening, but we do a controlled test, and statistically prove that we can hear them.

 "Besides "night and day differences", everything else is easily missed or imagined outside a controlled listening experiment, provided we are talking about multidimensional perception" ;)

Arny makes a good point. You should try it properly. One instance of having a "night and day" difference melt away, and one of having a "subtle" difference statistically proven, is enough to open most people's minds to the interesting nature of human perception.

Not everyone really wants to learn something though.

Cheers,
David.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Arnold B. Krueger on 2016-07-21 13:28:53
Besides the central point that the two formats can be differentiated, further research is recommended (and needed).

It turns out that all but one of the papers in the author's final selection are fully and freely accessible to me due to AES membership.

This is a summary of the means by which the various papers provided audio to their listeners:

Plenge 1980 [59] Spectrally shaped 500 Hz impulses
Muraoka 1981 [35] Open Reel Tape of synthesized music
Oohashi 1991 [43] Unpublished Gamelan music of Bali B&K 7006 Analog recorder
Yoshikawa 1995 [67] Unpublished Popular Music Mitsubishi x 86HS Prodigi Digital recorder
Theiss 1997 [23] Unpublished recording made on Nagra D digital recorder (24/96), Pioneer D9601 (16/96). Material: 24 kHz bandlimited impulses, white noise, untitled recording of Brahms Piano Concerto
Nishiguchi 2003 [64, 68] Paper Inaccessible
Hamasaki 2004 [62, 64] Unpublished recordings via proprietary band-splitting filters
Nishiguchi 2005 [58] Unpublished Recordings  made on  Magneto-Optical recorder
Repp 2006 [71] Unpublished Recordings  Made on computer running MOTU Digital Performer
Meyer 2007 [63]  DVD-A/SACD versus CD Recorder  ADA loop
Woszczyk 2007 [69] Unpublished, live digital and analog sources, student musicians
Pras 2010 [66] Unpublished, live digital and analog  sources, undisclosed mechanical synthesizer
King 2012 [72] Unpublished, live digital sources, Yamaha Disklavier player, undisclosed musical content
KanetadaA 2013 [24] Undisclosed sources, tech tests of ultrasonic content
KanetadaB 2013 [24] Undisclosed sources, tech tests of ultrasonic content
Jackson 2014 [11, 65] Ref 11 used downloaded 24/192 files with proprietary processing; ref 65 not yet published
Mizumachi 2015 [70] T-TOC DATA COLLECTION VOL.2 (DATA DISC,192 kHz/24bits, 96 kHz/24bits, WAVE files)
Jackson 2016 [65] Not yet published

It turns out that only the Meyer and Moran paper  used SACD and/or DVD-A recordings as sources for their tests.  It is impossible that this study was significantly based on the two formats, since only one of the over a dozen papers the author put into his summary used them.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Jakob1863 on 2016-07-21 18:31:12
<snip>

That's a good question. Are you just a believer, or do you have some skin in the Hi-Re$ game?

In fact, neither.... nor.

(Was it again a rhetorical question? Maybe you could use the mark "rq" or "no rq" to make things a bit easier)
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Jakob1863 on 2016-07-21 18:48:25
I'm down-grading my "layman's view" to: complete waste of time, just to keep this stuff being talked about, which supports commercial interests.

Isn't that just what you want to believe? (for whatever reason)

Quote
Speaking of which, not that I am suggesting that this is one of those red-wine-is-good-for-you jobs, but was this work funded? Do people get paid for doing this stuff?

That could quite easily lead to a thought-terminating cliché. I don't know if Reiss's meta-analysis was specially funded, but universities are encouraged to cooperate with industry (if not hard-pressed to), and that is of course a two-edged sword.
It may give reason for concern but should not be used to dismiss results that one doesn't like.

AFAIR most of the technology (not only) in the audio field was/is invented due to strong financial interest. Did somebody say "CD", "MP3", "two channel stereo"? :)
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Jakob1863 on 2016-07-21 19:24:24
<snip>
And yet rather easy for the eager consumer or Stereophile reviewer to detect, eh? 

I don't know, but it might be...

Quote
Quote
That is the reason why it is mandatory to incorporate positive controls (for other reasons negative controls as well), and any experimenter should think about listener accomodation and even training.


You're really not getting the point .

I am afraid it is precisely one of the important points.
If you are not able to check their (meaning your eager consumers' or Stereophile reviewers') detection abilities within their normal listening routine, you have to consider that any test regime will have an impact on the listeners. It is just a question of internal validity.

Quote
Be more pedantic why don't you?  OK then, the experiments that used ostensibly 'hi rez' sources and compared them to standard rate audio.

The ITU-R BS.1116-x emphasizes, for good reasons, listener training and the use of positive controls. It takes more than "ostensibly" or even genuinely "hi-res material" to ensure a sound experiment.

Quote
<snip>It's especially notable that the hi rez cheerleading side haven't provided any such experiments, given that they seized on Type II errors as their savior back in the day and will do so again per Reiss.

They should have done more, but does that help correct the flaws of Meyer/Moran?

Quote
But then again, the most vocal of them were also opposed to blind testing, period, for the longest time. I'm sure they'll be OK with it now though, since all (be sure to check me on that) of the work he cites used blind protocols of some type.

Yeah, that might come as a surprise, but it is surely not Reiss's fault. And I'm sure you have already noticed similar behaviour within the "non-believer camp" ......

Quote
(But then *again*, IIRC Reiss did mention the 'cognitive load' line with a straight face.....

To be fair, he just reported one concern from one of the references, did some research, and reported that no such effect was shown by the data. Exactly the correct procedure for handling something like that.

Quote
-- an other effect that seems to operate only when doing comparison under experimental conditions, rather than when carefully auditioning a new SACD or DVDA or HDtracks download for review --  so you never know. )

Please reread ITU-R BS.1116-x to see what it has to say about accommodation and training. And that is just a short summary (as the authors frankly note in the recommendation); experimenters are encouraged to consult experts and the plethora of literature on DOE, sensory testing, and cognitive psychology as well.

Quote
Who cares?

Everybody interested in good scientific practice should care....

Quote
M&M took these people at their word.  And *after* that, these people picked up the goalpost and marched downfield, proclaiming that only pure hi rez would suffice for proof, thanks very much

Does questionable critique really invalidate the justified critique?

Every experimenter is responsible for the implementation of the scientific requirements. It simply doesn't help to argue that others did something wrong....
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: ajinfla on 2016-07-21 21:04:47
That's a good question. Are you just a believer, or do you have some skin in the Hi-Re$ game?

In fact, neither.... nor. Was it again a rhetorical question?
The believer part yes, undoubtedly. So no financial interests (or admission of) in Hi-Re$ as well?
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: ajinfla on 2016-07-21 21:16:59
If you are not able to check their (meaning your eager consumers or the Stereophile reviewer) detection abilities within their normal listening routine, you have to consider that any test regime will have an impact on the listeners.
Yes, Delusion Blocked Testing will certainly impact the believer. Tremendously and often very humorously. Who is questioning this?

The ITU-R BS.1116-x emphasizes for good reasons listener training and the use of positive controls.
All these studies are being triggered by believer "observations" about Hi-Re$. What controls are being used there?
Jakob2, I've asked multiple times, what positive control(s) must be used? Be specific (or do the dodge/evade/dance routine...which provides the answer).

And I´m sure you have already noticed similar behaviour within the "non believer camp" ......
No, I haven't seen rational folks saying blind tests are absolutely worthless...except when they appear to support a belief. No similarity whatsoever.

Please, reread the ITU-R BS.1116-x again to see what they had to say about accomodation and training.
Please reread Reiss's conclusions about "unknown reasons", then tell us exactly what ITU-R BS.1116 positive controls/training is to be used.

Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: pelmazo on 2016-07-22 09:16:31
Jakob2, I've asked multiple times, what positive control(s) must be used? Be specific (or do the dodge/evade/dance routine...which provides the answer).
I believe he's acting deliberately thick here. He would never ever think of coming up with a test design of his own and putting it into practice. He wouldn't be able to reach the bar he's setting for others, and he would expose himself to the kind of criticism he directs at others.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: ajinfla on 2016-07-22 11:23:33
I believe he's acting deliberately thick here.
It's no act. He's a believer, in denial.

He would never ever think of coming up with a test design of his own and put it into practice. He wouldn't be able to reach the bar he's putting up for others, and he would expose himself to the kind of criticism he directs at others.
He's regurgitating the same old believer BS arguments, including about positive controls. Lack of critical thinking skills is always their downfall. Now the cards have been called. What "positive controls", when even Reiss admits the "something" detected is for "unknown reasons". Love to know how one trains for that.

cheers

AJ
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Arnold B. Krueger on 2016-07-22 12:19:55

Quote
(But then *again*, IIRC Reiss did mention the 'cognitive load' line with a straight face.....

To be fair- he just reported one concern from one of the references, did some research and reported that no such effect was shown by the data. Just exactly the correct procedure to handle something like that.

Classic case of reading what one wants to read, not what was written:

"There is considerable debate regarding preferred methodologies
for high resolution audio perceptual evaluation. Authors
have noted that ABX tests have a high cognitive load
[11], which might lead to false negatives (Type II errors). An
alternative, 1IFC Same-different tasks, was used in many
tests. In these situations, subjects are presented with a pair
of stimuli on each trial, with half the trials containing a pair
that is the same and the other half with a pair that is different.
Subjects must decide whether the pair represents the same
or different stimuli. This test is known to be “particularly
prone to the effects of bias [79].” A test subject may have a
tendency towards one answer, and this tendency may even
be prevalent among subjects. In particular, a subtle difference
may be perceived but still identified as “same,” biasing
this approach towards false negatives as well.

We performed subgroup tests to evaluate whether there
are significant differences between those studies where subjects
performed a 1 interval forced choice “same/different”
test, and those where subjects had to choose among two alternatives
(ABX, AXY, or XY “preference” or “quality”).

For same/different tests, heterogeneity test gave I2 = 67%
and p = 0.003, whereas I2 = 43% and p = 0.08 for ABX
and variants, thus suggesting that both subgroups contain
diverse sets of studies (note that this test has low power,
and so more importance is given to the I2 value than the p
value, and typically, α is set to 0.1 [77]).

A slightly higher overall effect was found for ABX, 0.05
compared to 0.02, but with confidence intervals overlapping
those of the 1IFC “same/different” subgroup. If methodology
has an effect, it is likely overshadowed by other differences
between studies.
"

His data showed "...a slightly higher overall effect was found for ABX".

He then admitted what was already known to many readers by other means: that there were other, far more important differences among the studies. That is a clear fault of his meta-study, which lumps together studies so different that their results should never have been combined as he did.

He failed to report the well-known confusion over which of the two extant, very different ABX tests was used (ABX1950 versus ABX1982), and that comparing ABX tests to 1IFC tests makes no sense because ABX1982 tests can be, and often are, performed as 1IFC tests.
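
For readers not steeped in meta-analysis jargon: the I² figures quoted above measure how much of the variation between studies is due to genuine heterogeneity rather than chance. A minimal sketch of the standard computation (Cochran's Q and Higgins' I² under inverse-variance weighting; the function names and sample numbers here are mine, not Reiss's):

```python
def cochran_q(effects, variances):
    """Cochran's Q: weighted sum of squared deviations of per-study
    effect sizes from the fixed-effect pooled estimate."""
    weights = [1.0 / v for v in variances]  # inverse-variance weights
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    return sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))

def i_squared(effects, variances):
    """Higgins' I^2: percentage of total variation across studies
    attributable to heterogeneity rather than sampling error."""
    q = cochran_q(effects, variances)
    df = len(effects) - 1
    return max(0.0, 100.0 * (q - df) / q) if q > 0 else 0.0

# Two hypothetical subgroup studies with very different effect sizes:
print(i_squared([0.0, 0.5], [0.01, 0.01]))  # -> 92.0 (highly heterogeneous)
```

Identical effects give I² = 0; values in the 43-67% range, as in the quoted passage, indicate a moderately to substantially diverse set of studies.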
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: pelmazo on 2016-07-22 13:13:42
He failed to report the well-known confusion over which of the two extant, very different ABX tests was used (ABX1950 versus ABX1982), and that comparing ABX tests to 1IFC tests makes no sense because ABX1982 tests can be, and often are, performed as 1IFC tests.
This is one of the strong hints of Reiss's own bias, IMHO. The way he "reports" is quite selective and certainly not neutral, so I can't agree with Jakob1863 that he used the "correct procedure":

To be fair- he just reported one concern from one of the references, did some research and reported that no such effect was shown by the data. Just exactly the correct procedure to handle something like that.
Given that this matter was discussed on the AES web page for the paper, it is quite inexplicable how he could have missed this, particularly since he gives the web link in his paper. He also seems to have missed that his reference was speculation with no evidence given. His text gives it a factuality that simply isn't there:
Quote from: Reiss
Authors have noted that ABX tests have a high cognitive load [11], which might lead to false negatives (Type II errors).
This sentence indicates to someone who doesn't bother to read the referenced paper, and the ensuing discussion on the web page, that the high cognitive load of an ABX test can be regarded as a fact, and that the false negatives are a possibility.

In reality, when reading the referenced paper and its discussion, it looks rather like the cognitive load argument refers to a very old form of ABX that is irrelevant for either Reiss' study or the referenced paper, and that it is a speculation for which no evidence is given. Moreover, one finds that the referenced paper [11] uses this speculation to cast doubt on the M&M study, where the old form of ABX clearly wasn't used.

This doesn't constitute a responsible way of using a reference. It rather continues the unfair way in which Jackson et al. have criticised M&M.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Thad E Ginathom on 2016-07-22 21:25:42
I'm downgrading my "layman's view" to: a complete waste of time, just to keep this stuff being talked about, which supports commercial interests.

Isn't that just what you want to believe? (for whatever reason)
Yes... But nothing said here inclines me to think any other way.

Quote
Speaking of which, not that I am suggesting that this is one of those red-wine-is-good-for-you jobs, but was this work funded? Do people get paid for doing this stuff?

That could quite easily lead to a thought-terminating cliché. I don't know whether Reiss's meta-analysis was specially funded, but universities are encouraged to cooperate with industry (if not hard-pressed to), and that is of course a two-edged sword.
It may give reason for concern, but it should not be used to dismiss results that one doesn't like.

AFAIR, most of the technology in the audio field (and not only there) was/is invented due to strong financial interest. Did somebody say "CD", "MP3", "two-channel stereo"? :)

Ahhh... real products; real developments; real research. Not to be confused with a rehashed repeat job glorified by the name "meta-analysis."

Good grief, what next? High Resolution analysis?
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Jakob1863 on 2016-07-25 11:49:28
<snip>
If one understands how experiments and statistics work, one would not say that a test with 60% correct responses is the same as  "distinguish(-ing) between the two formats around sixty percent of the time.”

The first thing to remember is that in an experiment with 2 alternatives, 50% correct responses indicates 100% random guessing.

In contrast, an experiment with 100% correct responses indicates 0% random guessing.

"distinguishing" was indeed an unfortunate (or even misleading) wording.
But to add a contrast: as we are quite often testing a two-sided alternative hypothesis, an experiment with 0% correct responses indicates 0% random guessing too.

Quote
The thing to take away from examination of these two facts is that there is a non-linear sliding scale of actual percentage of correct identifications ranging from 0% to 100% while distinguishing between the two alternatives from 50%  to 100% of the time.

The added remark leads to an extension of this model.

Quote
Listeners who get correct responses only 60% of the time are mostly guessing randomly. Only 1 in 6 responses is anything but random guessing. They are actually giving correct responses only 16% of the time. The rest of the time they are guessing randomly.

What Reiss reported was an estimate of the underlying population parameter with medium (AFAIR) variance, which means that various individuals from this population might do much better or worse.
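
As background for the "mostly guessing" arithmetic being argued over here: the usual way to translate a percent-correct score into a genuine-detection rate is the high-threshold correction for guessing (a simplified model added for illustration; it is not taken from Reiss's paper):

```python
def true_discrimination_rate(p_correct, n_alternatives=2):
    """Correction for guessing (high-threshold model): fraction of trials
    on which the listener genuinely detected a difference, assuming all
    remaining trials were random guesses among the alternatives."""
    chance = 1.0 / n_alternatives
    return (p_correct - chance) / (1.0 - chance)

# 60% correct in a 2-alternative task:
print(round(true_discrimination_rate(0.60), 3))  # -> 0.2
```

Under this model a 60% correct score corresponds to genuine detection on about 20% of trials, with the rest being coin flips; at 50% correct the rate is 0, and at 100% correct it is 1.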

Quote
If you try to apply these results to the real world, the results might be very depressing. For example If you buy 6 high resolution recordings, based on these statistics, only one of the recordings will sound different or they will all sound different only 1/6 of the time or something like that. I don't foresee anybody selling a lot of recordings based on performance like that.

That is imo unfortunate wording too. :)

But as said before (I think), I don't understand the "hysterical" critique (related to mass-market influences).
Given all the experience with marketing and double-blind studies, apart from maybe a small number of people, consumers will not buy more "hi-res" material due to the results of a meta-analysis.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Arnold B. Krueger on 2016-07-25 12:09:15
<snip>
If one understands how experiments and statistics work, one would not say that a test with 60% correct responses is the same as  "distinguish(-ing) between the two formats around sixty percent of the time.”

The first thing to remember is that in an experiment with 2 alternatives, 50% correct responses indicates 100% random guessing.

In contrast, an experiment with 100% correct responses indicates 0% random guessing.

an experiment with 0% correct responses indicates 0% random guessing too.

Straw man argument, since any 2AFC experiment with 0% correct responses isn't an experiment at all; it is a botched mess. Time to diagnose and correct the experiment. Thinking that the results of botched messes must have some higher meaning than human error is just wishful thinking.

Of course, since we're often dealing with placebophiles, and wishful thinking is their spiritual guide: GIGO.



Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Jakob1863 on 2016-07-25 12:14:33
<snip>
Those of us who do such tests find an interesting thing: there are differences so subtle that we're not sure we can hear them at all in normal sighted listening, but we do a controlled test, and statistically prove that we can hear them.

Which is correct but in no way contradicts what I have expressed. ;)

Quote
Arny makes a good point. You should try it properly. One instance of having a "night and day" difference melt away, and one of having a "subtle" difference statistically proven, is enough to open most people's minds to the interesting nature of human perception.

To be honest, Arny makes his usual remarks.
In fact I've already told him (over at diyaudio) that I started with controlled listening tests back in the early (or mid) 80s after reading some articles by Dan Shanefield. His arguments were convincing, and so it went.
Being the Arny that he is, he commented that I was "late to the party", as his highness began a couple of years earlier. :)
On that point he was absolutely right, although is it relevant? :)

Quote
Not everyone really wants to learn something though.

That is true . ;)

P.S. pelmazo and ajinfla know about my information wrt controlled listening too....

Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Jakob1863 on 2016-07-25 12:30:08
<snip>
Straw man argument, since any 2AFC experiment with 0% correct responses isn't an experiment at all, it is a botched mess.

Which constitutes a straw man argument itself: 2AFC in "preference mode" is a two-sided test, and as there is no correct or wrong answer (by definition), the null hypothesis remains at p = 0.5, but H1 is p ≠ 0.5.

Quote
Time to diagnose and correct the experiment. 

That would be best, but analyses of listening tests show that a lot is missing.

Quote
Thinking that the results of botched messes must  have some higher meaning than human error is just wishful thinking. 
Of course since we're often dealing with placebophiles, and wishful thinking  is their spiritual guide,  GIGO.

Or it is quite often a group of "sciencefools" that praise seriously flawed experiments like Meyer/Moran (for example).

Just two sides of the same coin.

Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Jakob1863 on 2016-07-25 12:37:53
In fact, neither.... nor. Was it again a rhetorical question?
The believer part yes, undoubtedly. So no financial interests (or admission of) in Hi-Re$ as well?

So, having missed the rhetorical nature, I answered it; are you able to change your belief?

And still no to any financial interest (or admission of one) in "hi-res". :)
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Jakob1863 on 2016-07-25 12:53:53
<snip>
Yes, Delusion Blocked Testing will certainly impact the believer. Tremendously and often very humorously. Who is questioning this?

Which is funny, but you purposefully missed the point. Using eristics will not help in finding the truth, although it helps a lot with self-immunization.

Quote
Jakob2, I've asked multiple times, what positive control(s) must be used? Be specific (or do the dodge/evade/dance routine...which provides the answer).

First of all, I'd like to cite JJ on this: "Do you have to use controls? Only if you want to know if your test is good." And JJ only reiterates something that is part of the scientific requirements.
It is solely the responsibility of any experimenter to choose and use appropriate controls. It will not help to shout "but Jakob1863 did not...."

A difference which is known to be audible constitutes a positive control. The appropriateness of a control depends on the hypothesis under research.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Arnold B. Krueger on 2016-07-25 13:44:41
<snip>
Those of us who do such tests find an interesting thing: there are differences so subtle that we're not sure we can hear them at all in normal sighted listening, but we do a controlled test, and statistically prove that we can hear them.

Which is correct but in no way contradicts what I have expressed. ;)

If you say so...

Quote from: 2Bdecided
Arny makes a good point. You should try it properly. One instance of having a "night and day" difference melt away, and one of having a "subtle" difference statistically proven, is enough to open most people's minds to the interesting nature of human perception.

Quote from: Jakob1863
To be honest, Arny makes his usual remarks.

My usual remarks relate to the science of listening tests, which has been augmented in some areas, especially those related to quantifying degree of impairment, but which remains much the same when it comes to "can hear"/"can't hear" testing. Since most people are interested in the latter, of course the comments basically remain pretty much the same. There have been important changes that have been reflected in my comments, such as listening tests by means of file comparison (e.g. foobar2000), but apparently that is too sophisticated for some to comprehend and value.

Quote
In fact I've already told him (over at diyaudio) that I started with controlled listening tests back in the early (or mid) 80s after reading some articles by Dan Shanefield. His arguments were convincing, and so it went.

Being the Arny that he is, he commented that I was "late to the party", as his highness began a couple of years earlier. :)

It's interesting to see the truth bent to fit a personal agenda. Fact is, the SMWTMS group was routinely doing DBTs in 1977, almost a decade before the "mid 80s".

That's also 3 or more years earlier than Dan Shanefield's "Ego Cruncher..." article about frequency-response-matched DBTs in the March 1980 High Fidelity. An interesting factoid: Dan Shanefield wrote a similar article that was published in Stereo Review magazine around the same time, under a pseudonym. These were both foreshadowed by Dan's DBT-related BAS Speaker articles in 1974 and 1976. Shanefield helped us develop the commercial ABX Comparator; his earlier comments were some of the stimulus and guidance for our work.

Jakob1863  reveals that he's got a personal axe to grind, facts be damned.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Jakob1863 on 2016-07-25 14:36:50
<snip>
My usual remarks relate to the science of listening tests,.......

Please reread 2Bdecided's post and his citation; I hope you then get a better idea of what I was referring to...

Quote
Quote
In fact I've already told him (over at diyaudio) that I started with controlled listening tests back in the early (or mid) 80s after reading some articles by Dan Shanefield. His arguments were convincing, and so it went.

Being the Arny that he is, he commented that I was "late to the party", as his highness began a couple of years earlier. :)

It's interesting to see the truth bent to fit a personal agenda. Fact is, the SMWTMS group was routinely doing DBTs in 1977, almost a decade before the "mid 80s".

Which facts were bent? Please be specific.
The facts I wrote about were:
- I told you over at diyaudio that I/we started with controlled listening tests back in ~1980-85 after reading some articles by Dan Shanefield

- you told me that I was late to the party, as Shanefield had written articles even earlier and you did your first ABX in 1977

Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: ajinfla on 2016-07-25 16:22:27
And still no to any financial interest (or admission of) in "hi res" . :)
Well that leaves you as a denialist, of both believer and peddler variety.

Using eristics will not help in finding the truth, although it helps a lot in self- immunization
The "truth" as you believe it, as a believer. Fact is, the Hi-Re$ scam was long ago exposed by M&M, which is the source of your angst.
They upended your belief and certainly didn't help the peddling of the scam either. The garbage like Kunchur and the BS filters that you believe in may help you self-immunize, but scientifically, it is still garbage.

A difference which is known to be audible constitutes a positive control.
Right, so for detection of Hi-Re$, which Reiss says is "important", what would that be? He had no clue; he said, unambiguously, "unknown reasons". You don't know either; equally clueless. But par for the believer course. Or the peddler's.
Now put on your dance shoes, Fred Astaire, and evade the question once more.
We enjoy it. ;)
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: pelmazo on 2016-07-25 18:06:01
Given all the experience with marketing and double-blind studies, apart from maybe a small number of people, consumers will not buy more "hi-res" material due to the results of a meta-analysis.
Perhaps you're right there. Rather than spending some money on a decent blind study that improves on M&M's test design, the main "players" in the commercialization of hires (I'm not even singling out Meridian, there are bigger ones like Sony or Harman) spend much more money on marketing of the dumbest kind. Clearly, they know their customer.

I wonder whether they even funded the one-man meta study we're talking about here. That sort of study is one of the cheapest you can get, still I fear it might have been funded mostly by the taxpayer.

I think the "executives" of the large audio firms aren't dumb. If they thought that a new and improved study would work for them, it would have long been done. They rely on marketing instead, which is probably the best available option for them: They're having elderly artists advocate hires, finally a technology where they can get their true message across, something they have been denied throughout their entire career, by reckless industry bosses with their fat cigars, and soulless engineers in their white lab coats, neither of which know what it means to listen. So heartwarming. So liberating. So righteous. So true.

Such bullshit.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: krabapple on 2016-07-25 19:00:14
Given all the experience with marketing and double-blind studies, apart from maybe a small number of people, consumers will not buy more "hi-res" material due to the results of a meta-analysis.
Perhaps you're right there. Rather than spending some money on a decent blind study that improves on M&M's test design, the main "players" in the commercialization of hires (I'm not even singling out Meridian, there are bigger ones like Sony or Harman) spend much more money on marketing of the dumbest kind. Clearly, they know their customer.

Harman/JBL folks have at least published good *research* that benefits home audio, the marketing arms notwithstanding.

Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: pelmazo on 2016-07-25 19:07:46
Harman/JBL  folks have at least published good *research* that benefits home audio.  The marketing arms notwithstanding.
Sure. They would be in a perfect position to carry out such a study. They have the means, the gear and the people to do a good job on this. If they saw a good chance that this would bolster their market position, they would certainly already have done it.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: ajinfla on 2016-07-25 19:26:21
Maybe Jakob2 et al need Donald Rumsfeld to explain how to make "Unknown unknowns" into "Known unknowns"??
Perhaps commission a 20-year meta-study to figure it all out, so we can finally learn which specific positive control is to be used.
Or just keep dancing.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: pelmazo on 2016-07-25 19:44:22
Maybe Jakob2 et al need Donald Rumsfeld to explain how to make "Unknown unknowns" into "Known unknowns"??
Perhaps commission a 20-year meta-study to figure it all out, so we can finally learn which specific positive control is to be used.
Or just keep dancing.
We don't need Rumsfeld. Since Reiss, we know the unknowns. The unknown reasons. Well, sorry, I mean we know that the reasons are unknown. Previously we thought we didn't need any reasons, because we didn't know there was anything in need of a reason. That was the unknown unknowns state. Now post-Reiss we're in the known unknowns state. :)
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: krabapple on 2016-07-25 20:45:19
Maybe Jakob2 et al need Donald Rumsfeld to explain how to make "Unknown unknowns" into "Known unknowns"??
Perhaps commission a 20-year meta-study to figure it all out, so we can finally learn which specific positive control is to be used.
Or just keep dancing.

I would think that training would involve starting by comparing a 'strong' version of the impairment with no impairment, then gradually reducing the level of the impairment until no difference could be heard.

Going forward from that to actual tests, the positive control then becomes the impairment at a low level that was still reliably heard during training.

For hi-rez vs. Redbook... first one has to decide whether bit depth or sample rate is what's going to be tested.
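
The procedure krabapple describes is essentially a descending staircase. A toy sketch of the idea, with a deterministic stand-in for the listener (all names, levels, and the 14-of-16 criterion here are illustrative assumptions, not a published protocol):

```python
def descending_staircase(hears_difference, start_level,
                         step=0.7, trials=16, criterion=14):
    """Shrink the impairment level until the listener no longer scores
    >= criterion out of `trials`; return the last level that was still
    reliably heard -- a candidate positive control for the real test."""
    level, last_reliable = start_level, None
    while level > 1e-6:
        correct = sum(hears_difference(level) for _ in range(trials))
        if correct >= criterion:
            last_reliable = level
            level *= step  # make the impairment subtler
        else:
            break
    return last_reliable

# Stand-in listener: detects any impairment above a hidden threshold of
# 0.05 and misses everything below it (a real listener is noisier).
def listener(level):
    return level > 0.05

print(descending_staircase(listener, start_level=1.0))  # ~0.0576 (= 0.7**8)
```

A real session would replace the stand-in with actual blind trials, and would typically also reverse direction a few times to bracket the threshold rather than stopping at the first failure.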

 
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Arnold B. Krueger on 2016-07-25 21:07:23
<snip>
My usual remarks relate to the science of listening tests,.......

Please reread 2bedecided´s post and his citation again, i hope you then get a better idea what i was referring to...

I get the feeling that you are just giving me the run-around. For example, you don't give a link to the post you seem to be referring to, so if I make a comment you can always call me stupid for responding to the wrong post.

The post that seems to fit best, because it appears to be 2Bdecided's most recent post to you, is https://hydrogenaud.io/index.php/topic,112204.msg925543.html#msg925543 (https://hydrogenaud.io/index.php/topic,112204.msg925543.html#msg925543)

However, it is a post that is highly favorable to ABX, and can even be interpreted as touting the open-mindedness of ABX testers such as myself. For example:

Quote from: 2BDecided
Arny makes a good point. You should try it properly. One instance of having a "night and day" difference melt away, and one of having a "subtle" difference statistically proven, is enough to open most people's minds to the interesting nature of human perception

About your alleged responses to me at http://www.diyaudio.com/: I searched for your account there, and it keeps coming back unknown. This sheds an unfavorable light on your credibility.

In the process of googling your account name, I found some of your posts at http://www.hifi-forum.de which seem to show that you are banned there as well.

Why are you heaping favor on ABX testing and me personally and attacking your own credibility?


Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Arnold B. Krueger on 2016-07-25 21:15:27
I would think that training would involve starting by comparing a 'strong' version of the impairment with no impairment, then gradually reducing the level of the impairment until no difference could be heard.

That's the approach I used at my PCABX web site.

Here's some of the text related to listener training from that site:
Quote from: PCABX web site
How To Train Yourself To Be A Sensitive, Reliable Listener

The purpose of this page is to provide listening comparisons of various kinds and increasing difficulty for the purpose of listener training. A key component of the training program is the PCABX Comparator which you can download by left-clicking here.

If you left-click here, you will find an "AES 20" form that may help you quantify sound quality.

Please start at the upper left hand corner of the table at the bottom of this page and work down the column. Once you have finished a column, move to the top of the next column to the right.

Each "Training Session" relates to a kind of audible difference. Each "Training Session" is a column in the table.

Each "Training Session" is a column of tests ranging in difficulty, from "Very easy" to "Might be impossible", to hear reliably. Each test is composed of a pair of "Reference" and "Test" samples. You need to download, compare, and reliably identify each pair using the PCABX Comparator (which you can download by clicking here).

For best results start at the top of each column of tests, and download the first pair of "Reference" and "Test" samples. Then listen and learn to reliably detect the difference between the samples using the PCABX Comparator. Your goal should be to obtain 1% or less "Probability You Were Guessing", as calculated by the PCABX Comparator. More specifically, you should try for 14 correct answers out of 16 trials. Once you have achieved 1% or less "Probability You Were Guessing", move down the column to the next pair of files.
If you run into extreme difficulty with samples rated "Difficult" or harder, please feel free to move to the next "Training Session", which starts at the top of the next column to the right.

If you have difficulty completing any samples rated "Difficult" or easier, please consider upgrading your playback system including loudspeakers, sound card, amplifier and listening environment. Please see  the sidebar titled  "What Makes A Good Sound System For PCABX?".

Remember that as you approach your personal limit of audibility, you probably won't hear a distinct difference. As you approach this point, shut your eyes, control the PCABX Comparator with the "A", "B", and "X" keys on the keyboard, and imagine that you are hearing a difference. You will probably be successful for at least one more level of difficulty.

Menus of various kinds of artifacts, applied to the same sound samples in degrees ranging from obvious and easy to difficult or impossible, were provided.
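For reference, the "Probability You Were Guessing" figure used by the page above is just a one-sided binomial tail, and a short script (my own sketch, not part of the quoted PCABX page) confirms that 14 correct out of 16 is the smallest score that gets under the 1% threshold:

```python
from math import comb

def p_guessing(correct, trials):
    """One-sided binomial tail: the chance of scoring at least
    `correct` out of `trials` by pure guessing (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(p_guessing(14, 16))  # about 0.0021, i.e. 0.21% -- passes the 1% criterion
print(p_guessing(13, 16))  # about 0.0106, i.e. 1.06% -- just misses it
```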


Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: greynol on 2016-07-25 21:20:08
Did you make sure to include various degrees of artifacts from the "unknown" category to serve as positive controls like what was done in the BS "typical" filter study?
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Arnold B. Krueger on 2016-07-25 23:03:22
Did you make sure to include various degrees of artifacts from the "unknown" category to serve as positive controls like what was done in the BS "typical" filter study?

Here's the Training section from the Jackson, Capp, and Stuart paper:

Quote from: Jackson, Capp, and Stuart
The audibility of typical digital audio filters in a high-fidelity playback system

Preliminary data and feedback suggested that some
time was required for listeners to become familiar
with the task and with the kind of listening required.
To this end, each listener was trained on the task in
several ways before the formal testing began.
In the first phase of training, listeners were able to
listen to the whole piece of music (about 200 seconds) a number of times. They were encouraged
to pay attention to technical aspects such as musical texture and playing technique, and also on more
qualitative aspects of listening such as the size and
location of the auditory image. Listeners could listen
to the piece as many times as they liked; in practice,
none listened more than twice.
The second phase of training was intended to familiarise listeners with the filtering used and with
using the GUI. Two intervals were presented, as for
the main test, but the first interval always contained
the unfiltered extract and the second always contained the filtered extract; listeners were informed
of this, with the intention that labelling the extracts
as having been processed differently might aid the
identification of differences. Listeners were able to
listen to as many labelled pairs of extracts as they
liked before progressing to the test. The filter used
here was an FIR filter with a frequency transition
band spanning 8-10 Hz. This filter was chosen as it
would have been straightforward for most listeners
to identify differences introduced by its application.

The third phase of training occurred before each
block of the test, where listeners had the chance
to hear the processing for that condition using the
paradigm for training phase 2, where the extracts
were known. This allowed listeners to become accustomed to each condition before it was tested. Listeners were not limited in the number of training
extracts they could hear for each test condition, but
the maximum that any listener chose to hear was
nine.

I don't see various degrees of artifacts from the "unknown" category that serve as positive controls (training).
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: greynol on 2016-07-25 23:07:13
Some would prefer these inconveniences go unnoticed.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Arnold B. Krueger on 2016-07-26 08:25:55
Some would prefer these inconveniences go unnoticed.

Words are cheap.

Good Science can be far more inconvenient to actually  do.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: ajinfla on 2016-07-26 11:24:54
Good Science can be far more inconvenient to actually  do.
Is that why it's taken Jakob2 et al 20+ years to come up with "unknown reasons" and "hey, I have no clue what unknown positive controls to use, but use them anyway for ITU training", so the "improvement" wrought by Hi-Re$ is realized?
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: rrod on 2016-07-26 22:34:27
Side question: is it common for JAES papers with such far-reaching implications to get 2 comments of discussion on aes.org?
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Arnold B. Krueger on 2016-07-27 00:15:58
Side question: is it common for JAES papers with such far-reaching implications to get 2 comments of discussion on aes.org?

I suspect that certain mills are grinding slowly but very finely.  Of course we have evidence that some JAES authors don't bother reading and heeding the comments, anyway.
Title: Re: A Meta-Analysis of High Resolution Audio Perceptual Evaluation
Post by: pelmazo on 2016-07-27 08:36:42
rrod, earlier this month you wrote that you would "work on something" regarding the statistical implications of several independent mechanisms whose interactions are unknown:
I'll work on something.
Are you still on it? Has something come out of this yet?
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: rrod on 2016-07-27 17:36:08
rrod, earlier this month you wrote that you would "work on something" regarding the statistical implications of several independent mechanisms whose interactions are unknown:
I'll work on something.
Are you still on it? Has something come out of this yet?

Not yet. Part of the problem is that all the "bit" related studies are also all training-based. I'd need me some paper access at this point, but since I'm currently all-day care for a 3y/o, teasing out potential predictors from 80 papers will be especially slow grinding. This is why I wondered where some AES action on this might be, but I'll trust in Arny's guess that someone out there is crunching.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: ajinfla on 2016-07-27 21:58:40
I wonder if the Hi-Re$ peddlers will require "unknown causes" positive controls ITU training for all customers, before they sell Hi-Re$ files/origami and $20k Hi-Re$ bling players to them?
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Joe Bloggs on 2016-08-07 19:04:59
So... 18 papers were taken into this meta-analysis. Can someone point me to the part where each of these papers' methodologies and/or sponsor links were presumably taken apart--given the high confidence that everyone here has that the meta-analysis is a meta-analysis of a pile of crap?

Not doubting the conclusion here, just wondering how it was arrived at :D  Especially since I'd basically brushed aside all the >50% result experiments as poorly conducted experiments sponsored by industry interests, and am now being asked to substantiate this claim :P

The only discussion I've been able to locate so far is the discussion on the 2014 Jackson paper: https://secure.aes.org/forum/pubs/conventions/?ID=416
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Arnold B. Krueger on 2016-08-07 19:20:44
So... 18 papers were taken into this meta-analysis.  Can someone point me to the part where each of these papers' methodologies and/or sponsor links were presumably taken apart.

I don't think such a thing exists. I did an analysis of just one aspect of the papers - how the source signals were created. This required that I download all of the papers cited in the article and analyze each and every one. This was eased by the fact that all but one of the papers were AES papers, and at the time I did the analysis they were freely accessible to AES members such as myself on the AES web site. I eventually obtained the non-AES paper and analyzed it as well. The results of that analysis are summarized in an earlier post.

Quote
The only discussion I've been able to locate so far is the discussion on the 2014 Jackson paper: https://secure.aes.org/forum/pubs/conventions/?ID=416

You will notice that I contributed some comments there. I feel like the only good that came from the work I did was that I learned what I learned. There don't seem to be any Mea Culpas despite the egregious faults that are clearly there. I don't see where anybody on the other side took any of the criticisms seriously, especially given that some of them are repeated in the more recent paper that this thread has been discussing.

There seems to be a great divide within the AES among those who publish papers like these, and many who take the honors given to them as a sign that the AES has become something that is an embarrassment to them.



Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Joe Bloggs on 2016-08-07 20:01:53
I don't think such a thing exists. I did an analysis of just one aspect of the papers - how the source signals were created. This required that I download all of the papers cited in the article and analyze each and every one. This was eased by the fact that all but one of the papers were AES papers, and at the time I did the analysis they were freely accessible to AES members such as myself on the AES web site. I eventually obtained the non-AES paper and analyzed it as well. The results of that analysis are summarized in an earlier post.

Great--can you point me to it?  I've scanned several pages of this thread and failed to locate it  :-[
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Arnold B. Krueger on 2016-08-07 21:26:53
I don't think such a thing exists. I did an analysis of just one aspect of the papers - how the source signals were created. This required that I download all of the papers cited in the article and analyze each and every one. This was eased by the fact that all but one of the papers were AES papers, and at the time I did the analysis they were freely accessible to AES members such as myself on the AES web site. I eventually obtained the non-AES paper and analyzed it as well. The results of that analysis are summarized in an earlier post.

Great--can you point me to it?  I've scanned several pages of this thread and failed to locate it  :-[

https://hydrogenaud.io/index.php/topic,112204.msg925557.html#msg925557 (https://hydrogenaud.io/index.php/topic,112204.msg925557.html#msg925557)

Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: pelmazo on 2016-08-08 16:29:13
Can someone point me to the part where each of these papers' methodologies and/or sponsor links were presumably taken apart--given the high confidence that everyone here has that the meta-analysis is a meta-analysis of a pile of crap?
I didn't read the discussion this way, and didn't mean to say this personally, either. Some of the papers are highly suspect, others are reasonable. However, even a meta-analysis of OK papers can be crap, if the papers are too diverse to be analyzed together.

Quote
Especially since I'd basically brushed aside all the >50% result experiments as poorly conducted experiments sponsored by industry interests, and am now being asked to substantiate this claim :P
Was it here (http://www.head-fi.org/t/815376/rising-cost-of-audiophile-equipment-and-importance-of-bias-blind-testing/720)?
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: greynol on 2016-08-08 17:05:12
“Our study finds high-resolution audio has a small but important advantage in its quality of reproduction over standard audio content."

I see that Reiss's dishonesty is being avoided by placebophile apologists over there as well.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: krabapple on 2016-08-08 17:36:50
So... 18 papers were taken into this meta-analysis. Can someone point me to the part where each of these papers' methodologies and/or sponsor links were presumably taken apart--given the high confidence that everyone here has that the meta-analysis is a meta-analysis of a pile of crap?

Not doubting the conclusion here, just wondering how it was arrived at :D  Especially since I'd basically brushed aside all the >50% result experiments as poorly conducted experiments sponsored by industry interests, and am now being asked to substantiate this claim :P

The only discussion I've been able to locate so far is the discussion on the 2014 Jackson paper: https://secure.aes.org/forum/pubs/conventions/?ID=416


Reiss says he contacted authors for each paper and obtained what raw data he could, including metadata that were not used in the original papers.

So there is really no way anyone can replicate his analysis without getting that data.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: greynol on 2016-08-08 17:55:33
The more important question would be whether anyone could replicate the actual hodge podge of tests.

Too much is being made of this "well done" undergraduate-level industry insider AES book report which does little more than continue to show that the push into hi-re$ ultimately rests on deceptive marketing ploys.

Quote from: Dr. Reiss's press release
Our study finds high-resolution audio has a small but important advantage in its quality of reproduction over standard audio content.

...and now some irony for our viewers:
[Meyer and Moran] didn't issue a press release; instead they used sort of guerilla marketing in forums promoting their publication.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: xnor on 2016-08-11 21:24:15
Must watch:

(http://img.youtube.com/vi/42QuXLucH3Q/0.jpg)
--> Is Most Published Research Wrong? (https://www.youtube.com/watch?v=42QuXLucH3Q)
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: greynol on 2016-08-11 22:25:17
Why must I know that the toothbrush I bought is "gluten free!"??? (http://www.ncbi.nlm.nih.gov/pubmed/23648697)
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: xnor on 2016-08-12 15:27:38
I'm sorry to bother you, omniscient highness. I will send a hundred PMs to all the other people next time instead.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: greynol on 2016-08-12 15:41:51
You must not have seen any similarity between your post, my last post and the general topic.  I suppose mine may also only be regional.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: ajinfla on 2016-08-12 22:52:03
Ok, looks like AES is finally done with the website update, can now access papers selected by Reiss.
Started with the most recent non-BS-Under-Review paper, by Mizumachi in 2015: Subjective Evaluation of In-vehicle High Resolution Audio.
Focusing on "trained" listeners (trained to hear what, exactly?): 24/192 vs downsampled 16/48, 120 seconds of music, 60 seconds of silence, then the 120 seconds of music again. Order unclear (randomized???). In-car system with woofer crossed to tweeter at 1 kHz, supertweeters "added". At what frequency? Zero data there, nor any form of distortion testing indicated. ::)
Looks like a whopping 76% "correct".
Same test, but now 24/192 vs 16/48 LAME converted to 320 MP3.
58% "correct".
WTF??
The trained listeners had a harder time with 320 MP3 vs 'Hi Rez" ?
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Arnold B. Krueger on 2016-08-13 08:42:16
Why must I know that the toothbrush I bought is "gluten free!"??? (http://www.ncbi.nlm.nih.gov/pubmed/23648697)

There may be a similarity between gluten and the CD Audio format. Both are being indicted by unscientific and anti-scientific self-appointed authorities as being generally harmful. Yet, as a practical matter, both seem to be capable of  working  well, and both give most people a lot of enjoyment.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Porcus on 2016-08-13 13:52:13
I am not so sure that the number of CD-rez-intolerant earpairs who need the extra mbits/s to hear the light of day, is comparable to the number of gluten-intolerant consumers. But if you put an infinite number of monkey scientists on a deserted island with a population to test each, then sure as hell there will be nonzero correlation in at least one of the studies.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: greynol on 2016-08-16 01:57:52
Subjective Evaluation of In-vehicle High Resolution Audio.
I'm waiting for the paper showing a statistical analysis that supports the theory that in-vehicle hi-re$ audio affects a person's ability to drive safely.


...and the press release baldly claiming that hi-re$ audio provides small but important advantage in lessening your chances of getting into an automobile accident.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Chibisteven on 2016-08-16 12:13:18
Remotely swing the balance control wildly and that will distract most drivers more than something they can't even hear in the first place.

For added fun: put out frequencies that attract all the bats in town to attack the car. That might just cause an accident, assuming you find some kind of ultrasonic bat call that pisses them all off, a way to reproduce it loud enough, and speakers that handle those frequencies correctly.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: ajinfla on 2016-08-16 14:46:50
...and the press release baldly claiming that hi-re$ audio provides small but important advantage in lessening your chances of getting into an automobile accident.
Something like this?
"Though the causes are still unknown and the effect is perhaps small and difficult to detect, the perceived fidelity of an audio recording and automobile playback chain is affected by taking a hypersonic bath while driving, leading to lower rates of accidents. Further, it has been found that training for these unknown causes has been shown to improve driving."

Any idea how to add a supertweeter to my car stereo like they had to, or why that would bring it closer to 320 mp3 SQ??

cheers,

AJ
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Arnold B. Krueger on 2016-08-28 15:53:14
Did you make sure to include various degrees of artifacts from the "unknown" category to serve as positive controls like what was done in the BS "typical" filter study?

As I've been studying their training strategy, additional concerns about its relevance have arisen:

Just to review, the actual transition bands used during the "...Typical Audio Filters..." paper were:

"The frequencies of the transition bands were 23500-
24000 Hz and 21591-{22050 Hz, corresponding to the
standard sample rates of 48 kHz and 44.1 kHz re-
spectively.4 Fig. 2 shows the amplitude and energy
of the impulse response for the 48-kHz filter.

Doing the arithmetic, the experiment used transition bands that were 500 Hz wide, when actual typical audio filters use transition bands that are more like 2-3 kHz. The consequence of a narrower transition band is the generation of excessive and unnatural artifacts, including variations in the far more audible bandpass (LF) region, and ringing at or near the Nyquist frequency (22 kHz).

However, things were even more asymmetrical with their listener training signals:

"
Listeners were able to
listen to as many labelled pairs of extracts as they
liked before progressing to the test. The filter used
here was an FIR filter with a frequency transition
band spanning 8-10 Hz. This filter was chosen as it
would have been straightforward for most listeners
to identify differences introduced by its application.
"

My early expectation was that the primary artifact due to this very, very narrow transition band would be ringing, which relates to the experiment at hand. While I am continuing to study this problem (which includes teaching myself how to use Octave), my preliminary tests show that digital filters get really squirrely with transition bands this excessively narrow. The primary artifacts turn out to be surprisingly broad peaks and dips on the order of 2-5 dB in the bandpass region, which is to say the normal audio band, going down to 1 kHz and below.

I agree that these differences would be "...straightforward for most listeners to identify...", but these are not at frequencies anywhere near the high-frequency extensions that are provided by the usual so-called high resolution formats. They are in a midband frequency region where even mediocre DACs are flat +/- 0.1 dB or better. They are not artifacts of the kind we usually relate to digital filters.

While the article says:

"The second phase of training was intended to fa-
miliarise listeners with the fi ltering used and with
using the GUI."

Thus, the training that the listeners received was not relevant to the artifacts that were being studied.  There was no training that related to the filtering that the article purported to study.
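One crude way to quantify the transition-band argument above is the textbook Kaiser-window estimate of required FIR length, which ties transition-band width directly to filter length and hence impulse-response duration. The sketch below is mine, not anything from the paper; the 60 dB stopband attenuation and 48 kHz rate are assumed figures for illustration:

```python
import math

def kaiser_fir_taps(atten_db, transition_hz, fs_hz):
    """Kaiser-window estimate of the FIR length (number of taps) needed
    to achieve a given stopband attenuation over a given transition width:
    N ~ (A - 8) / (2.285 * delta_omega)."""
    delta_omega = 2 * math.pi * transition_hz / fs_hz  # normalized transition width
    return math.ceil((atten_db - 8) / (2.285 * delta_omega))

fs = 48000  # assumed sample rate for the 48 kHz condition
for label, width_hz in [("typical DAC filter (~3 kHz)", 3000),
                        ("paper's test filter (500 Hz)", 500),
                        ("paper's training filter (2 Hz)", 2)]:
    print(label, "->", kaiser_fir_taps(60, width_hz, fs), "taps")
```

On this estimate, the 500 Hz test filter needs roughly six times the length of a typical 3 kHz filter, and a 2 Hz transition band (the 8-10 Hz training filter) needs tens of thousands of taps, i.e. an impulse response over a second long at 48 kHz, which is at least broadly consistent with the "squirrely" behavior described above.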
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: soundping on 2016-08-29 03:00:32
Sounds like AES is just curating cash through membership fees.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: bobbaker on 2016-08-29 12:35:38
Doing the arithmetic, the experiment used transition bands that were 500 Hz wide, when actual typical audio filters use transition bands that are more like 2-3 kHz.
One of the most common criticisms of the "...Typical Audio Filters..." paper, including one of yours, is that the filters are not really typical. This is a powerful criticism if one can validate it. But you also just throw out the word "typical" in the quote above. Can you state how you know this? Where does one find info on typical filters that are really used in the production of CDs?

I'm not so much interested in speculation about what they might use, but a true validation of the word "typical".

Thanks.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Arnold B. Krueger on 2016-08-29 13:19:00
Doing the arithmetic, the experiment used transition bands that were 500 Hz wide, when actual typical audio filters use transition bands that are more like 2-3 kHz.
One of the most common criticisms of the "...Typical Audio Filters..." paper, including one of yours, is that the filters are not really typical. This is a powerful criticism if one can validate it. But you also just throw out the word "typical" in the quote above. Can you state how you know this? Where does one find info on typical filters that are really used in the production of CDs?

I'm not so much interested in speculation about what they might use, but a true validation of the word "typical"

The transition band of a typical DAC can be found in its spec sheet, either directly or by strong implication, and can be measured using a number of different techniques that are commonly used to do frequency response measurements.

I checked a number of Realtek PC audio chip spec sheets and found none, so I've had to measure some chips that I had on hand.

The test method is simple: play a wideband frequency-response test signal, such as an impulse, a swept sine (chirp), a multitone, or a swish; record it with a known-good ADC operating at a higher sample rate so that the ADC's filters don't interfere; and analyze.
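As a sketch of the first step only (generating the test signal; the recording and analysis stages are left out), a linear swept sine takes a few lines. The function name and parameters are my own illustration, not from any particular measurement package:

```python
import math

def linear_chirp(f0_hz, f1_hz, duration_s, fs_hz):
    """Linear swept sine from f0_hz to f1_hz; returns float samples in [-1, 1]."""
    n = int(duration_s * fs_hz)
    samples = []
    for i in range(n):
        t = i / fs_hz
        # instantaneous phase of a linear sweep: 2*pi*(f0*t + (f1 - f0)*t^2 / (2*T))
        phase = 2 * math.pi * (f0_hz * t + (f1_hz - f0_hz) * t * t / (2 * duration_s))
        samples.append(math.sin(phase))
    return samples

# e.g. a 20 Hz - 40 kHz sweep at a 96 kHz rate, to stay clear of the DUT's filters
sweep = linear_chirp(20, 40000, 1.0, 96000)
```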

Most other ADC and DAC chip spec sheets give a number for the transition band, or the companion specs for passband and stopband. The transition band is always between the two.

For example: http://www.akm.com/akm/en/file/datasheet/AK4430ET.pdf (http://www.akm.com/akm/en/file/datasheet/AK4430ET.pdf) is a $0.50 chip with modest specs, and has the stopband and passband specs on page 6: passband ending at 20-22 kHz and stopband starting at 24 kHz, for a transition band of 2-4 kHz.

This same info is presented for many competitive devices made by many other sources. Google is your friend.

At the other end of the price spectrum we have http://www.akm.com/akm/en/file/datasheet/AK4414EQ.pdf (http://www.akm.com/akm/en/file/datasheet/AK4414EQ.pdf), a high-performance part running about $12.00, with very similar specs.

The passband, transition band, and stopband are implemented by digital filters that are well understood from the standpoint of implementation, and are just collections of well-known, inexpensive digital logic components (gates, etc.).

The design parameters seem like common sense to me - run the passband up to 20 kHz to do a good job of covering the audible range, and get the stopband fully effective just above 22 kHz to reduce spurious responses. These same parts run at higher rates, and some competitive parts have other optional passband and stopband characteristics, to meet perceived needs.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: bobbaker on 2016-08-29 14:10:51
The relevance of the “typical filters” paper in this thread relates to the ADC process, not the DAC, which is what you focus on in your last post. Once I have a CD or Hi-Res source, I can choose whatever DAC I want (50¢ up to ridiculous).

From the paper:
2.3 Signal processing and test conditions

([snip] filter parameters as you quoted [snip])

“These parameters were chosen to offer a reasonable match to the downsampling filters used in good-quality A/D converters or in the mastering process; we wanted to minimise the ripple depth and maximise the stop band attenuation in order to reduce audible ringing artefacts, as described by Lagadec [31].“

Do you have any relevant info on typical filters used in the production of CDs (as I originally asked)?
(Note: this sounds like a back-and-forth with Arny, but I hope anyone who might know would answer)
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: jumpingjackflash5 on 2016-08-29 14:40:05
I checked a number of Realtek PC audio chip spec sheets and found none, so I've had to measure some chips that I had on hand.

Realtek DACs have their specs, e.g. here: http://www.hardwaresecrets.com/datasheets/ALC898_DataSheet_0.60.pdf

The typical filter (page 70, point 9.1.3) is: for the DAC, a passband ending at 0.441 x sample rate and a stopband starting at 0.6 x sample rate; for the ADC, 0.45 x SR and 0.56 x SR. So yes, those filters at the device (sound card) level are approx. 3-4 kHz "long".
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: bandpass on 2016-08-29 15:55:38
These are the default settings used by a resampler (izotope rx) that is popular with mastering engineers:
(http://help.izotope.com/docs/rx5/image/image_4IVWYX7QRVS3MSGBHHBKETFTQAHIOAA3)
(link to source page (http://help.izotope.com/docs/rx5/resample.html)).

However, many mastering engineers like to tweak resampler settings to what sounds best to them.

For a working figure though, I'd go with the above.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Wombat on 2016-08-29 16:18:58
Doing the arithmetic, the experiment used transition bands that were 500 Hz wide, when actual typical audio filters use transition bands that are more like 2-3 kHz.
One of the most common criticisms of the "...Typical Audio Filters..." paper, including one of yours, is that the filters are not really typical. This is a powerful criticism if one can validate it. But you also just throw out the word "typical" in the quote above. Can you state how you know this? Where does one find info on typical filters that are really used in the production of CDs?

I'm not so much interested in speculation about what they might use, but a true validation of the word "typical".

Thanks.
When you play back your cd you will listen to the DAC filter in interaction with the filter used for resampling. Your DAC may filter lower and may make the resampling filter less important.
For the legendary paper, as I read it, they went 192->44.1->192 for playback. You now have the sum of 2x ringing that gets through at full power. I'd also call that non-typical.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: bennetng on 2016-08-29 16:25:58
These are the default settings used by a resampler (izotope rx) that is popular with mastering engineers:
(http://help.izotope.com/docs/rx5/image/image_4IVWYX7QRVS3MSGBHHBKETFTQAHIOAA3)
(link to source page (http://help.izotope.com/docs/rx5/resample.html)).

However, many mastering engineers like to tweak resampler settings to what sounds best to them.

For a working figure though, I'd go with the above.
And this website as well. A lot of products to choose from.
http://src.infinitewave.ca/
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Arnold B. Krueger on 2016-08-29 17:49:41
The relevance of the “typical filters” paper in this thread relates to the ADC process, not the DAC,

Please let me review the title of the paper I quoted for your enlightenment:

"The audibility of typical digital audio filters in a high-fidelity playback system"

What is unclear about the word playback?

Are you not aware that playback involves DACs, not ADC's?

However, it is a moot point because the digital filters used in ADCs and DACs (it may surprise you) can be the same. 

The ADCs used in recording are many and varied. It would take an audited market survey to get data that was appreciably more assured than the information I have already provided.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: drewfx on 2016-08-29 17:58:48

Do you have any relevant info on typical filters used in the production of CDs (as I originally asked)?
(Note: this sounds like a back-and-forth with Arny, but I hope anyone who might know would answer)

For ADCs (like DACs), you can just go to the chipmakers' sites and look at the datasheets yourself to see what typical is.

For instance, AKM ADC product info can be found here:
http://www.akm.com/akm/en/product/detail/0019/ (http://www.akm.com/akm/en/product/detail/0019/)

TI here:
http://www.ti.com/lsds/ti/audio-ic/audio-adc-technical-documents.page?viewType=mostuseful&rootFamilyId=376&familyId=581&docCategoryId=2 (http://www.ti.com/lsds/ti/audio-ic/audio-adc-technical-documents.page?viewType=mostuseful&rootFamilyId=376&familyId=581&docCategoryId=2)
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Arnold B. Krueger on 2016-08-29 19:12:10
These are the default settings used by a resampler (izotope rx) that is popular with mastering engineers:
(http://help.izotope.com/docs/rx5/image/image_4IVWYX7QRVS3MSGBHHBKETFTQAHIOAA3)
(link to source page (http://help.izotope.com/docs/rx5/resample.html)).

However, many mastering engineers like to tweak resampler settings to what seems to sound best to them.

For a working figure though, I'd go with the above.
And this website as well. A lot of products to choose from.
http://src.infinitewave.ca/


Furthermore, a lot of recording production people use hardware-based resamplers. IME, resampling hardware tends to use the same or similar filters as ADCs and DACs.

Absent an army of data-takers skulking about a statistically significant number of recording sessions and production studios, over the past 10 years and however many years to come, doing detailed inspections of digital filter designs, a question this complex will probably never be answered by anything but informed speculation.

It seems like a simple reasonable question, but not so much.

It does raise the question of why a paper titled "The audibility of typical digital audio filters in a high-fidelity playback system" raised questions about ADC filter design in the first place, if not to create unnecessary ambiguity to help cover the tracks.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: bobbaker on 2016-08-29 19:37:50
Please let me review the title of the paper I quoted for your enlightenment:

"The audibility of typical digital audio filters in a high-fidelity playback system"

What is unclear about the word playback?

Are you not aware that playback involves DACs, not ADC's?
Amazing! To understand the paper, you must read more than the title. It is ambiguous… until you read the paper. You should! They are saying that:
“firstly, there exist audible signals that cannot be encoded transparently by a standard CD; and secondly, an audio chain used for such experiments must be capable of high-fidelity reproduction.” They clumsily combined these in the title, but reading the paper makes clear that the “typical” filters to which they refer are A/D filters.

Note, from the paper:
“Filter responses tested were representative of anti-alias filters used in A/D (analogue-to-digital) converters or mastering processes.“
“Experimental data are presented showing that listeners are sensitive to signal alterations introduced by two CD-like A/D filters and to 16-bit quantization with or without rectangular dither when the reproduction chain is of sufficient quality.”
“These parameters were chosen to offer a reasonable match to the downsampling filters used in good-quality A/D converters or in the mastering process;”
However, it is a moot point because the digital filters used in ADCs and DACs (it may surprise you) can be the same.
It doesn’t surprise me at all. I understand anti-aliasing and reconstruction (anti-imaging) filters quite well… unlike you. Remember your confusion a year ago? I’ll avoid your “lessons” on filter use in sampling.
The ADCs used in recording are many and varied. It would take an audited market survey to get data appreciably more reliable than the information I have already provided.
Exactly! That is why I don’t trust the use of the word “typical” in the paper. But I also don’t trust your use of it. I can (and have) read many datasheets and understand that many options are available and therefore possible. But I don’t know what is “typical” and as you say, neither do you. It would be better if both you and Stuart’s group would avoid that term without better justification.
I do appreciate bandpass' answer though. He says izotope is popular, and that helps, but I'm still not comfortable with "typical".
a question this complex will probably never be responded to by anything but informed speculation.

It seems like a simple reasonable question, but not so much.
I'm not asking a naively complex question without believing that your use of "typical" should mean you have more than speculation.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Arnold B. Krueger on 2016-08-29 21:18:31
Please let me review the title of the paper I quoted for your enlightenment:

"The audibility of typical digital audio filters in a high-fidelity playback system"

What is unclear about the word playback?

Are you not aware that playback involves DACs, not ADC's?
Amazing! To understand the paper, you must read more than the title. It is ambiguous… until you read the paper. You should! They are saying that:
“firstly, there exist audible signals that cannot be encoded transparently by a standard CD; and secondly, an audio chain used for such experiments must be capable of high-fidelity reproduction.” They clumsily combined these in the title, but reading the paper makes clear that the “typical” filters to which they refer are A/D filters.

Note, from the paper:
“Filter responses tested were representative of anti-alias filters used in A/D (analogue-to-digital) converters or mastering processes.“
“Experimental data are presented showing that listeners are sensitive to signal alterations introduced by two CD-like A/D filters and to 16-bit quantization with or without rectangular dither when the reproduction chain is of sufficient quality.”

It is a matter of not being confused by a lame attempt at proof by authority, as opposed to proof by means of reasoning, facts, and logic.

It has looked, to many people and for over a year, as though the paper lost its way.

This was pointed out over a year ago in this discussion, which the authors were surely aware of and obliged to respond to:

https://secure.aes.org/forum/pubs/conventions/?ID=416 (https://secure.aes.org/forum/pubs/conventions/?ID=416)

It's clear that the authors look at the title of this paper as window dressing:

"We do not agree with the comments relating to the introduction. The central question in this paper was to determine whether the addition of certain low-pass filters could be detected in an audio chain."

IOW Stuart finds nothing wrong with contradicting the title and introduction of any paper he writes.

So, when the paper's abstract says: "This paper describes listening tests investigating the audibility of various filters applied in a high-resolution wideband digital playback system," the obvious face value of the English words is apparently not what is meant.

This reminds me of an old saying: "The British and the Americans are two peoples separated by a common language".

This ignores an obvious fact, which is that recording and playback are very different operations. For one thing, the result of recording is, as it were, cast in cement. If you redo or change it, you have a different recording.

This contrasts with playback, which is done a little differently each and every time, and a little differently for each listener, because listeners can never be co-located closely enough in space and time to hear identical reproduction, within tolerances that are easily heard by ear.

More significantly, it is possible, and not infrequent, that identical converters and production equipment are used for all published instances of a recording, particularly shortly after the recording is made. However, it is generally accepted that every listener has his own choice of playback equipment, which he generally chooses for himself.

Therefore, focusing on the playback equipment choices people make makes a lot of sense, because many people make those choices of their own free will, and educating those choices can change the quality of reproduction.

In contrast, the recording and production equipment is what it was when the recording was made, and it is far less likely that the quality of reproduction can later escape the limits that were set when the recording was made and produced.

Thus the paper was interesting because of its title and abstract, which were about playback. Since the authors now say that they feel free to make their paper irrelevant to, and in some ways contradictory of, its title and abstract, many go away after reading it feeling like they were cheated.



Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Arnold B. Krueger on 2016-08-29 22:24:02
I'm not asking a naively complex question without believing that your use of "typical" should mean you have more than speculation.

I have provided both examples and the means by which a technically informed person can satisfy themselves (and, judging by other posts in this thread, some have) that what I have provided is based on independent evidence, and is thus more than mere speculation.

I have already pointed out that, in general, modern ADCs and hardware resamplers use digital filters that are very similar to those in comparable DACs, and that those digital filters, whether in resamplers, DACs, or ADCs, have transition bands that are 2 to 4 or more kHz wide for a 44.1 kHz sample rate.

This contrasts with the far narrower transition bands that were used by Stuart et al., according to their JAES paper.

Thus the claim that the paper involved typical filters in ADCs, DACs, or hardware resamplers continues to be false.

This is not speculation; this is fact, whether any particular person chooses to recognize it or not.
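To get a feel for why transition-band width matters in this back-and-forth, here is a minimal sketch (mine, not from the paper or from anyone in this thread) using the standard Kaiser-window estimate of FIR filter length: at a fixed stopband attenuation, halving the transition band roughly doubles the tap count. The 100 dB attenuation figure and the band widths are illustrative assumptions only.

```python
import math

def kaiser_fir_length(fs_hz, transition_hz, atten_db):
    """Estimate FIR tap count via the Kaiser formula:
    N ~ (A - 7.95) / (2.285 * delta_omega),
    where delta_omega is the transition width in rad/sample."""
    delta_omega = 2 * math.pi * transition_hz / fs_hz
    return math.ceil((atten_db - 7.95) / (2.285 * delta_omega))

fs = 44100   # CD sample rate
atten = 100  # stopband attenuation in dB (illustrative)

wide = kaiser_fir_length(fs, 2000, atten)   # a "typical" ~2 kHz band
narrow = kaiser_fir_length(fs, 500, atten)  # a much narrower band
print(wide, narrow)  # prints: 142 566
```

The point of the sketch: a 4x narrower transition band costs roughly 4x the taps, which is one practical reason converter designers don't use extremely narrow bands unless they have a specific goal in mind.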

Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: pelmazo on 2016-08-30 11:16:41
Exactly! That is why I don’t trust the use of the word “typical” in the paper. But I also don’t trust your use of it. I can (and have) read many datasheets and understand that many options are available and therefore possible. But I don’t know what is “typical” and as you say, neither do you. It would be better if both you and Stuart’s group would avoid that term without better justification.
You are right in being sceptical here, but it seems to me you are neglecting the larger picture.

Whether those filters are typical or not is beside the point if you are trying to show that the CD format itself is incapable of audible transparency. For that, you would have to show that the format is not transparent even for the optimal choice of filter characteristic, otherwise you just show that there are bad filter characteristics. The research would be valuable in showing which characteristics to avoid, but it would do nothing to solve the question of whether the format itself is transparent.

The fact that the authors ignore this very basic reasoning does not speak for them. Furthermore, it is quite clear why they conflate this; it is almost certainly deliberate. They are in the market for an "improved" replacement format and need a justification for it. The paper is obviously part of an effort to fabricate this justification.

If somebody is as sensitive to such issues as you have shown to be, I would have expected you to see this larger picture.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: bobbaker on 2016-08-30 12:33:27
Whether those filters are typical or not is beside the point if you are trying to show that the CD format itself is incapable of audible transparency. For that, you would have to show that the format is not transparent even for the optimal choice of filter characteristic, otherwise you just show that there are bad filter characteristics. The research would be valuable in showing which characteristics to avoid, but it would do nothing to solve the question of whether the format itself is transparent.
True, if the goal is finding the optimal capability under perfect, error-free conditions. My car can get 80 mpg if I drive like I never do (Audi A2, 3L version). The way I normally drive, I get 65 mpg. I find both regimes helpful: ideal and typical.
In the case of recorded music, unless the optimal method is guaranteed or certified, I’m much more concerned with “typical” or “common”, than hoping for perfect behavior and choices of those involved. I buy typical CDs, not fastidiously produced ones. I’m speaking to my interest, not anything stated by others.

The fact that the authors ignore this very basic reasoning does not speak for them.
[snip]
They are in the market for an "improved" replacement format and need a justification for it.
Agree and true.

Furthermore, it is quite clear why they conflate this; it is almost certainly deliberate.
[snip]
The paper is obviously part of an effort to fabricate this justification.
I don’t accept these as fact, but I appreciate your opinion.

If somebody is as sensitive to such issues as you have shown to be, I would have expected you to see this larger picture.
The big picture for me includes what is relevant to buying music produced with imperfect choices and mistakes that human sound engineers would likely make. If typical “hi-res” is audibly better than “typical” CDs, that may be important to some. It would be important to me because I’m interested, but on its own, it would not make my buying decisions. In combination with, say, price, it could.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: bobbaker on 2016-08-30 12:48:09
I have provided both examples and the means by which a technically informed person can satisfy themselves (and, judging by other posts in this thread, some have) that what I have provided is based on independent evidence, and is thus more than mere speculation.

I have already pointed out that, in general, modern ADCs and hardware resamplers use digital filters that are very similar to those in comparable DACs, and that those digital filters, whether in resamplers, DACs, or ADCs, have transition bands that are 2 to 4 or more kHz wide for a 44.1 kHz sample rate.

This contrasts with the far narrower transition bands that were used by Stuart et al., according to their JAES paper.

Thus the claim that the paper involved typical filters in ADCs, DACs, or hardware resamplers continues to be false.

This is not speculation; this is fact, whether any particular person chooses to recognize it or not.
I’m going to set aside dealing with your problems with reading comprehension and writing comprehensibly as a waste of time, and get back to my problem with the use of the word “typical”.

If the narrow transition band used in the paper is very atypical, then as you and others have pointed out, the paper is weakened, perhaps fatally. One main focus is whether signals can be encoded transparently by a typical CD. Unless the narrow band is typical or common or even sometimes used, the paper does not address the actual question of the CD’s ability to be transparent in the recording->playback chain.

I assume (and ask for correction, if needed) that the “typical” recording takes the path:
1- record by digitizing at a rate higher than 44.1kHz
2- process the recording (mix, master, etc.) in digital form
3- downsample to 44.1 using a software SRC, e.g. izotope (as mentioned by bandpass)
4- stamp and sell CDs

Yes, I know there are exceptions. But is this typical? If so, user:bandpass points out that the default for izotope doesn’t use such a narrow band, but:
However, many mastering engineers like to tweak resampler settings to what seems to sound best to them.
Although you (Arny) correctly point out that only a market survey gives a reliable answer, I’m curious whether those with experience creating commercial CDs could share their experience.
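As an aside on step 3 of the path above: software SRCs generally implement rational-ratio (polyphase) resampling. A minimal sketch, assuming nothing about any particular product's internals, of how the up/down factors fall out of the two sample rates:

```python
from math import gcd

def src_factors(src_rate, dst_rate):
    """Reduce a sample-rate conversion to the coprime up/down
    factors a polyphase resampler would use: interpolate by `up`,
    lowpass-filter (this is where the debated filter lives),
    then decimate by `down`."""
    g = gcd(src_rate, dst_rate)
    return dst_rate // g, src_rate // g  # (up, down)

print(src_factors(96000, 44100))  # (147, 320)
print(src_factors(48000, 44100))  # (147, 160)
```

The lowpass filter in the middle of that up/filter/down chain is exactly the anti-alias filter whose "typical" shape is being argued about in this thread.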
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Arnold B. Krueger on 2016-08-30 12:49:57
Whether those filters are typical or not is beside the point if you are trying to show that the CD format itself is incapable of audible transparency. For that, you would have to show that the format is not transparent even for the optimal choice of filter characteristic, otherwise you just show that there are bad filter characteristics. The research would be valuable in showing which characteristics to avoid, but it would do nothing to solve the question of whether the format itself is transparent.
True, if the goal is finding the optimal capability under perfect, error-free conditions.

False, since the 44/16 format is sonically transparent even when processed significantly suboptimally. It is actually fairly robust; it can and has taken on mediocre work and come up sounding pretty good.

To use the metaphor of imbibing: you don't have to be stone sober to do work like this; you merely have to avoid being falling-down drunk, on the verge of unconsciousness, or actually unconscious.

I don't see that you have the technical background that would allow you to understand in your gut how grossly suboptimal Stuart's group had to make things to get the weak positive result that they reported.

They didn't merely do things suboptimally; they did some things so badly that those who are at all experienced in these matters are pretty grossed out by how badly they screwed things up.

So, the whole experiment is more like a straw man.

The preparation of the samples is not the only part of the experiment Stuart's team screwed up pretty badly. I still don't see any evidence that any of Stuart's team can mentally fathom the rather huge differences between ABX1950 and ABX1978. If they were in an undergraduate course in Experimental Design, at the very least some make-up work would be indicated.

If you wonder what a technical paper written on this topic by a gang who can't shoot straight would look like, the paper (and its sequels) and the comments in the AES forum would be a good starting point. ;-)

Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Arnold B. Krueger on 2016-08-30 13:17:11
I have provided both examples and the means by which a technically informed person can satisfy themselves (and, judging by other posts in this thread, some have) that what I have provided is based on independent evidence, and is thus more than mere speculation.

I have already pointed out that, in general, modern ADCs and hardware resamplers use digital filters that are very similar to those in comparable DACs, and that those digital filters, whether in resamplers, DACs, or ADCs, have transition bands that are 2 to 4 or more kHz wide for a 44.1 kHz sample rate.

This contrasts with the far narrower transition bands that were used by Stuart et al., according to their JAES paper.

Thus the claim that the paper involved typical filters in ADCs, DACs, or hardware resamplers continues to be false.

This is not speculation; this is fact, whether any particular person chooses to recognize it or not.
I’m going to set aside dealing with your problems with reading comprehension

Some people have suggested in the past that I don't suffer fools well, but this post, like some before it, might be evidence that I'm making quite a bit of progress.

Quote
and writing comprehensibly as a waste of time,

Ditto.

Quote
and get back to my problem with the use of the word “typical”.

If the narrow transition band used in the paper is very atypical, then as you and others have pointed out, the paper is weakened, perhaps fatally.

The fact that others pointed out the same thing, and that Stuart mangled the logical flow the paper should have had (for instance by bringing in so much false or merely irrelevant evidence), shows that your purported problems with my reading comprehension and writing are just how you handle ideas that disagree with your biases.

Quote
One main focus is whether signals can be encoded transparently by a typical CD.

Actually, Stuart said on the AES forum: "The central question in this paper was to determine whether the addition of certain low-pass filters could be detected in an audio chain." So it is not "one main focus"; it is the main focus. Reading comprehension, indeed.

I should add that if the title, the abstract, and more of the flow of the paper were well built on this foundation, this would all be very fine and good.

Quote
Unless the narrow band is typical or common or even sometimes used, the paper does not address the actual question of the CD’s ability to be transparent in the recording->playback chain.

If one analyzes the data that has just been presented in this thread, there can be no question that the extremely narrow transition bands used in the paper are highly atypical, to say the least.

This post from this thread shows a transition band that is about 2 kHz wide: https://hydrogenaud.io/index.php?action=profile;u=56644 (https://hydrogenaud.io/index.php?action=profile;u=56644)

This post from this thread shows transition bands that are 1 and 2 kHz wide: https://hydrogenaud.io/index.php/topic,112204.275.html

Quote
I assume (and ask for correction, if needed) that the “typical” recording takes the path:
1- record by digitizing at a rate higher than 44.1kHz

This is a speculative claim that we hear from high-rez advocates. In fact, a lot of professional work is still done with an initial sample rate of 44.1 or 48 kHz, which solves the resampling problem, for one. While 48 kHz is a higher number than 44.1 kHz, it's not generally considered to be "high rez"; it is what most video equipment uses. Heck, if you read the papers, you find out that they threw out all of SACD, and possibly all DSD, as well.

Quote
2- process the recording (mix, master, etc.) in digital form

True in general, of course, but there are still some who use analog consoles to mix.

Quote
3- downsample to 44.1 using a software SRC, e.g. izotope (as mentioned by bandpass)

False as a generality, since many pros avoid the potential slings and arrows of downsampling by recording at the delivery rate, and since many of them use hardware resamplers.

Furthermore, there are tons of software resamplers, and we can only speculate about what settings are used in actual practice. My experience suggests that defaults are often blindly accepted: it works, doesn't it?

Quote
4- stamp and sell CDs

It is usually more complex than that but in the interest of not wasting time by casting pearls...

Quote
Yes, I know there are exceptions. But is this typical? If so, user:bandpass points out that the default for izotope doesn’t use such a narrow band, but:

Quote
However, many mastering engineers like to tweak resampler settings to what seems to sound best to them.

For a person who likes to pretend he hates speculation, you sure seem to like to spread it around. ;-)
Quote
Although you (Arny) correctly point out that only a market survey gives a reliable answer, I’m curious whether those with experience creating commercial CDs could share their experience.

A good example of biasing the question to obtain fewer responses that are, in the end, largely irrelevant, since physical CD media is such a tiny fraction of what people actually listen to these days.

Ever hear of OTA (and cable system) broadcast and streaming?

Since most of the audio most of us listen to is delivered that way...
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Wombat on 2016-08-30 14:04:03
Funny that meanwhile we take it for granted that the differences heard in the legendary test were due to the filter. It could still be something completely different.
Talking about typical: I read that Yuri Korzunov, the coder of AuI Converter, says he writes his software for professional studios. It uses a very steep filter at 20 kHz. He gets no complaints from his customers, and even does this at their request.
Graphs at http://src.infinitewave.ca/
I still wonder if having nothing above is simply better :)
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: pelmazo on 2016-08-30 14:14:12
True, if the goal is finding the optimal capability under perfect, error-free conditions. My car can get 80 mpg, if I drive like I never do (Audi A2 - 3L version). The way I normally drive I get 65 mpg. I find both regimens helpful: ideal and typical.
Sure, so do I. That wasn't the message the authors tried to convey, however. The message they put out was that the CD format isn't completely transparent, and hence that there is justification for a higher-resolution format. That was also the way they were understood by the public. Only when you actually study the paper do you become aware that that's not what they have shown.

Quote
In the case of recorded music, unless the optimal method is guaranteed or certified, I’m much more concerned with “typical” or “common”, than hoping for perfect behavior and choices of those involved. I buy typical CDs, not fastidiously produced ones. I’m speaking to my interest, not anything stated by others.
If it is true that you are interested in the typical quality level of music releases, then you are entirely on the wrong track in bothering with this paper. The quality of the typical releases you can buy has next to nothing to do with the limitations of the CD format, or with the exact shape of the filters used in mastering. The quality you are getting is what the producers want to give you. They are not restricted by any technical limitation, only by their budget and their "artistic concept" (which is actually a marketing concept). If you think you are getting inferior quality because the CD medium doesn't allow any better, you merely buy their bullshit.

As this is driven by market forces, don't hope for any kind of guarantee or certification to ensure quality levels. That can't work. Higher resolution formats won't help here, either. They are subject to the same market forces.

Quote
I don’t accept these as fact, but I appreciate your opinion.
You are of course entitled to your own opinion, but note that I didn't present them as fact. It is just the most obvious explanation for their conduct. If you think a different conclusion is warranted, please offer your rationale.

Quote
The big picture for me includes what is relevant to buying music produced with imperfect choices and mistakes that human sound engineers would likely make. If typical “hi-res” is audibly better than “typical” CDs, that may be important to some. It would be important to me because I’m interested, but on its own, it would not make my buying decisions. In combination with, say, price, it could.
The choice of filters in mastering is the mastering engineer's choice, and the resulting CD-format master is invariably going to be auditioned both by the mastering engineer himself and by other people involved in the production. If the sound isn't right, one of them ought to complain. If nobody complains, you have to assume that the resulting product is the way they wanted it to be. If that's the case, any hope of other distribution formats improving the situation is futile, unless they specifically want the hi-res format to sound better because they hope to make a buck that way. But that's not the fault of the CD format, then.

Again, the CD format offers all they need to deliver a good-sounding product. The paper didn't show this to be false, even though that's what they want people to believe.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: ajinfla on 2016-08-30 14:39:15
In the case of recorded music, unless the optimal method is guaranteed or certified, I’m much more concerned with “typical” or “common”, than hoping for perfect behavior and choices of those involved.
Audiophile disorder has its pitfalls.
The exact same applies to wires, amps, etc.
The elitists are always "concerned" with "typical" and thus must have "optimal", you know, rather than "hope for perfect".
Yawn.

I buy typical CDs, not fastidiously produced ones.
Of which there is zero evidence of audible "smear", as concocted in the BS lab test. No commercial ADCs were used.
Hypothetically, even if there were, that would have been the artists'/producers' intent.
If you want to start distorting that, call it so.

I’m speaking to my interest, not anything stated by others.
Sure "Bob".
Zero relevance to the transparency of Redbook...as stated in the Meridian manuals.

I don’t accept these as fact, but I appreciate your opinion.
Sure thing "Bob". This is not charitable work, especially with the loss of MLP revenue.

The big picture for me includes what is relevant to buying music produced with imperfect choices and mistakes that human sound engineers would likely make. If typical “hi-res” is audibly better than “typical” CDs, that may be important to some.
I don’t accept these as fact, but I appreciate your opinion.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: bobbaker on 2016-08-31 10:20:17
The internet is filled with smart people, helpful people, annoying people, trolls and jerks. Some use their real names (but differently on different sites), while some use pseudonyms (different or the same on different sites). It is too much to keep track of everyone, so we all classify or group people. Many only deal with 2 groups: us vs. them, good guys vs. bad guys, objectivists vs. subjectivists, or audiophools vs. rational people. Unfortunately for me, I don’t come out well after such grouping. I am a “them” for a whole lot of people, regardless of how “us” is defined.
I am not a defender of Jackson’s paper or Reiss’ paper. In fact, in this thread, I have only criticized Jackson’s paper. This is true even if I criticize or challenge someone who has criticized or challenged either paper. My wholly unoriginal criticism is that if the filters are atypical, the paper may be interesting to me for several reasons, but completely irrelevant to real-world CDs. Because they don’t address this issue well in the paper, *I* can’t judge the relevance, which is a huge weakness of the paper for me. When someone says the paper’s filters are not typical because “typical” transition bands are 2-3kHz, also with insufficient (in my view) support, I will (and did) call that out. If typical CDs are created with software SRCs, telling me about DAC chips doesn’t help. “Answering” with fully unsupported claims that many CDs are created with tracks recorded at 44.1kHz, or if not, that they use hardware SRCs, is an appeal to self-authority from one who frequently condemns appeals to self-authority.

If it is true that you are interested in the typical quality level of music releases, then you are entirely on the wrong track in bothering with this paper. The quality of the typical releases you can buy has next to nothing to do with the limitations of the CD format, or with the exact shape of the filters used in mastering. The quality you are getting is what the producers want to give you. They are not restricted by any technical limitation, only by their budget and their "artistic concept" (which is actually a marketing concept). If you think you are getting inferior quality because the CD medium doesn't allow any better, you merely buy their bullshit.
It is true that I am interested in the typical (quality level of music releases), but that is not my only interest in life. Since participating in journal clubs in grad school, it has been clear to me that virtually all papers published in peer-reviewed journals have both something to offer (maybe just a lesson in what to avoid) and flaws (sometimes minor, sometimes fatal). It is interesting and fun for me to find both, so I enjoy “bothering with this paper”.
You make an excellent point, that even if the filters used were typical and audible, that effect would be swamped by choices made by the producers. But as a consumer seeking to make informed choices, I want to know both the facts and their relevance.

Quote
You are of course entitled to your own opinion, but note that I didn't present them as fact. It is just the most obvious explanation for their conduct. If you think a different conclusion is warranted, please offer your rationale.
Quote
The paper didn't show this to be false, even though that's what they want people to believe.
In my truly humble opinion, I agree with you. But our opinions on their motivation or goals are not directly relevant to my evaluation of papers. All scientists are human; nearly all scientists work for money (I don’t know of any “gentleman scientists” anymore). If you exclude all papers where the authors are driven by self-advancement, money, praise, or pride, and not purely by the advancement of knowledge or the greater good, you would exclude probably all the papers I’ve ever read. If you challenge their motives, what about any research done by private companies? What about all the work done at AT&T in the mid-20th century? Using a clear description of the methods, you evaluate the results and conclusions, in spite of the imperfect motives of the authors.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: ajinfla on 2016-08-31 13:40:51
In the case of recorded music, unless the optimal method is guaranteed or certified, I’m much more concerned with “typical” or “common”, than hoping for perfect behavior and choices of those involved. I buy "typical" CDs, not fastidiously produced ones.

if the (concocted) filters are atypical, the paper may be interesting to me for several reasons, but completely irrelevant to real-world CDs.
???

Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Arnold B. Krueger on 2016-08-31 14:39:19
The internet is filled with smart people, helpful people, annoying people, trolls and jerks. Some use their real names (but differently on different sites), while some use pseudonyms (different or the same on different sites). It is too much to keep track of everyone, so we all classify or group people. Many only deal with 2 groups: us vs. them, good guys vs. bad guys, objectivists vs. subjectivists, or audiophools vs. rational people. Unfortunately for me, I don’t come out well after such grouping. I am a “them” for a whole lot of people, regardless of how “us” is defined.
I am not a defender of Jackson’s paper or Reiss’ paper.

You conveyed a different impression.

Quote
In fact, in this thread, I have only criticized Jackson’s paper.

I must have missed that in your vigorous defense of it.

Quote
This is true even if I criticize or challenge someone who has criticized or challenged either paper. My wholly unoriginal criticism is that if the filters are atypical, the paper may be interesting to me for several reasons, but completely irrelevant to real-world CDs. Because they don’t address this issue well in the paper, *I* can’t judge the relevance, which is a huge weakness of the paper for me.

If you can't judge the relevance of an issue, then you should be among the last to challenge it, instead of being the first. That's just common sense.

Quote
When someone says the paper’s filters are not typical because “typical” transition bands are 2-3kHz, also with insufficient (in my view) support,

But you already admitted that you were incapable of properly judging this issue, which I agree with.

Based on your comments:

(1) "Transition band" is just an abstract phrase to you. You don't know where it comes from, where it is, what it is, or how it affects the operation of digital audio gear.

(2) Even when told how and where to find relevant independent authorities, you continued to flog the issue for all it was worth, avoiding examining the relevant original documents. You were cued as to what it is, how to find it, and where to find it, to no avail. This confirms your current admission that this whole issue that you've set yourself in judgement of is totally opaque to you.

Quote
I will (and did) call that out. If CDs are typically created with software SRCs,

At this point for you this is pure speculation without one ounce of evidence to back it up.

Quote
telling me about DAC chips doesn’t help.

This would seem to be because you don't understand the common elements of ADCs, DACs, and SRCs.

Quote
“Answering” with fully unsupported claims that many CDs are created with tracks recorded at 44.1kHz or if not, they use hardware SRCs is an appeal to self-authority from one who frequently condemns appeals to self-authority.

My recent 12 years of professional recording and production work means nothing in your eyes as compared to your idle, poorly informed speculations. That was preceded by another 4-6 years of hands-on study and experimentation. And for you?

Just for grins, from which of the golden-ear, high-resolution proponent forums did you glean these speculative "facts"?
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: bennetng on 2016-08-31 15:01:49
Funny is that meanwhile we take it for granted that the differences heard in the legendary test were due to the filter. It could still be something completely different.
Talking about typical: I read that Yuri Korzunov, the coder of AuI Converter, talks about making his software for professional studios. It uses a very steep filter at 20 kHz. He has no complaints from his customers, and may even do this at their request.
Graphs at http://src.infinitewave.ca/
I still wonder if having nothing above is simply better :)
When I see the passband graphs of AuI Free and Pyramix 6.2.3, I would say the ripples are totally atypical. The same goes for the sweeps from Digital Performer 5.1, Logic 7.2.3, Wavelab 5 internal, and Sadie/6. I already excluded all freeware (except AuI Free) and only picked the more famous and expensive commercial ones.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: bobbaker on 2016-08-31 15:27:46
Based on your comments:
(1) "Transition band" is just an abstract phrase to you. You don't know where it comes from, where it is, what it is, or how it affects the operation of digital audio gear.
(2) Even when told how and where to find relevant independent authorities, ...
This would seem to be because you don't understand the common elements of ADCs, DACs, and SRCs.
Actually I understand transition band, ADCs, DACs and SRCs quite well. Quiz me. I don’t have to quiz you to cast doubt on your knowledge, unless you can explain this performance:(link) (https://hydrogenaud.io/index.php/topic,110011.0.html)(hint: the whole thread is only 90 posts, but just read pages 2-3 to get the gist)
Quote
I will (and did) call that out. If CDs are typically created with software SRCs,
At this point for you this is pure speculation without one ounce of evidence to back it up.
Ditto

Quote
“Answering” with fully unsupported claims that many CDs are created with tracks recorded at 44.1kHz or if not, they use hardware SRCs is an appeal to self-authority from one who frequently condemns appeals to self-authority.
My recent 12 years of professional recording and production work means nothing in your eyes as compared to your idle, poorly informed speculations. That was preceded by another 4-6 years of hands-on study and experimentation. And for you?
Please help me check the veracity and strength of these claims. Getting paid a nominal amount to record your church group doesn’t count as “professional recording and production work” for me. Neither does undocumented “study and experimentation”.

So please tell me of a recording I can find where you are in any way credited, and please give me a single peer-reviewed paper where you are an author.

And for me? Any resume I provide will immediately be challenged as unverifiable since I am anonymous (bobbaker is a pseudonym). It would be easier to just match verifiable facts as presented in our posts. If you label something as opinion or belief, I won’t challenge you, and expect the same from you. When you post something as fact, I expect you to be able to back it up, and of course, turnabout is fair play.
.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Arnold B. Krueger on 2016-08-31 16:10:18
Based on your comments:
(1) "Transition band" is just an abstract phrase to you. You don't know where it comes from, where it is, what it is, or how it affects the operation of digital audio gear.
(2) Even when told how and where to find relevant independent authorities, ...
This would seem to be because you don't understand the common elements of ADCs, DACs, and SRCs.
Actually I understand transition band, ADCs, DACs and SRCs quite well. Quiz me.

I did. You totally and abysmally failed.

Quote
And for me? Any resume I provide will immediately be challenged as unverifiable since I am anonymous (bobbaker is a pseudonym). It would be easier to just match verifiable facts as presented in our posts. If you label something as opinion or belief, I won’t challenge you, and expect the same from you. When you post something as fact, I expect you to be able to back it up, and of course, turnabout is fair play.
.

You've already shown by example and admitted outright that turnabout is nothing that you will lower yourself to participate in with any degree of sincerity or effectiveness.

You must really think you are totally smarter than the rest of us.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: bobbaker on 2016-08-31 16:20:25
You must really think you are totally smarter than the rest of us.
Nope! Many people here are very smart (and I'm not smarter), ..... but not all.  ;)
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Wombat on 2016-08-31 16:24:51
When I see the passband graphs of AuI Free and Pyramix 6.2.3, I would say the ripples are totally atypical. The same goes for the sweeps from Digital Performer 5.1, Logic 7.2.3, Wavelab 5 internal, and Sadie/6. I already excluded all freeware (except AuI Free) and only picked the more famous and expensive commercial ones.
You see now what a typical filter is for experts. Everyone has his own ;)
Just lately another one just around the corner used by Universal for Oldfield CDs (https://hydrogenaud.io/index.php/topic,111198.0.html)
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: greynol on 2016-08-31 17:44:03
(different or the same on different sites)
I imagine different ones on the same site is implied in there somewhere.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: pelmazo on 2016-08-31 22:00:17
The internet is filled with smart people, helpful people, annoying people, trolls and jerks. Some use their real names (but differently on different sites), while some use pseudonyms (different or the same on different sites). It is too much to keep track of everyone, so we all classify or group people. Many only deal with 2 groups: us vs. them, good guys vs. bad guys, objectivists vs. subjectivists, or audiophools vs. rational people. Unfortunately for me, I don’t come out well after such grouping. I am a “them” for a whole lot of people, regardless of how “us” is defined.
Well, we all are trying to find out what to think of the others we meet in the internet, don't we? If you occupy the same spaces for extended periods of time, you train your troll detectors accordingly. The reason why you got the kind of reactions you have seen here is because you pressed quite a few well exercised buttons. People have only a limited amount of patience for this sort of thing, and tend to ground you through a fairly low impedance, accepting the sparks.

Quote
I am not a defender of Jackson’s paper or Reiss’ paper. In fact, in this thread, I have only criticized Jackson’s paper. This is true even if I criticize or challenge someone who has criticized or challenged either paper. My wholly unoriginal criticism is that if the filters are atypical, the paper may be interesting to me for several reasons, but completely irrelevant to real-world CDs. Because they don’t address this issue well in the paper, *I* can’t judge the relevance, which is a huge weakness of the paper for me. When someone says the paper’s filters are not typical because “typical” transition bands are 2-3 kHz, also with insufficient (in my view) support, I will (and did) call that out. If CDs are typically created with software SRCs, telling me about DAC chips doesn’t help. “Answering” with fully unsupported claims that many CDs are created with tracks recorded at 44.1kHz or if not, they use hardware SRCs is an appeal to self-authority from one who frequently condemns appeals to self-authority.
You haven't actually made it very clear what your point is. You seemed to make your points up as you went. For example, you belabored the problem with the meaning of "typical", but when I argued that this isn't relevant to the quality level of CD releases, you then separated this topic from what makes you interested in the paper.

So the result is that I still don't know what you actually want to say. You don't want to be seen as a defender of Jackson et al., but you seem to have a problem with people attacking them. It isn't very clear, however, what that problem is.

Perhaps you should spend a moment to work out your point in clear language, and then try again here.

Quote
It is true that I am interested in the typical (quality level of music releases), but that is not my only interest in life.
Would it surprise you greatly when I confess that I had already assumed this?

Quote
Since participating in journal clubs in grad school, it has been clear to me that virtually all papers published in peer-reviewed journals have both something to offer (maybe just a lesson in what to avoid) and flaws (sometimes minor, sometimes fatal). It is interesting and fun for me to find both, so I enjoy “bothering with this paper”.
So what is this paper offering for you? Have you already made up your mind?

Quote
You make an excellent point, that even if the filters used were typical and audible, that effect would be swamped by choices made by the producers. But as a consumer seeking to make informed choices, I want to know both the facts and their relevance.
That's what I wanted to help you with. I'm just not sure yet whether it was welcome.

Quote
In my truly humble opinion, I agree with you. But our opinions on their motivation or goals is not directly relevant to my evaluation of papers. All scientists are human; nearly all scientists work for money (I don’t know of any “gentleman scientists” anymore). If you exclude all papers where the authors are driven by self-advancement, money, praise or pride, and not purely for the advancement of knowledge, the greater good, you would exclude probably all the papers I’ve ever read. If you challenge their motives, what about any research done by private companies? What about all the work done at ATT in the mid-20th century? Using a clear description of the methods, you evaluate the results and conclusions, in spite of the imperfect motives of the authors.
I thought that was exactly what I was doing. I don't know why I deserved this lecture. Right from the start, I stated that the paper might provide some valuable evidence regarding which filter shapes to avoid, specifically that too narrow a transition band might actually be counterproductive. Note my choice of wording, which indicates that I'm not yet convinced of that, and would like to have it checked independently. I am absolutely convinced, however, that their conclusion is bunk. Their motives are clearly dominant here.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: ajinfla on 2016-08-31 22:42:11
Actually I understand transition band, ADCs, DACs and SRCs quite well. Quiz me.
Ok. What are your concerns with the "typical" CDs you buy?

I am anonymous (bobbaker is a pseudonym).
So you could be just a casual curious observer...or someone with skin in the game.

Btw, this thread is about the meta-analysis, not the BS paper per se.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: greynol on 2016-09-01 00:25:48
So the result is that I still don't know what you actually want to say.
I do know that a substantial portion of his posts suggests he likes gnawing at Arny's ankles.  Feel free to have a look at the contributions from his "bobbaker" account since he started using it (link (https://hydrogenaud.io/index.php?action=profile;u=119374;area=showposts;sa=messages;start=25)).
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Arnold B. Krueger on 2016-09-01 08:54:13
So the result is that I still don't know what you actually want to say.
I do know that a substantial portion of his posts suggest he likes gnawing at Arny's ankles.  Feel free to have a look at the posts from his "bobbaker" account since he started using it (link (https://hydrogenaud.io/index.php?action=profile;u=119374;area=showposts;sa=messages;start=25)).

Good point. My take is that this time he was highly focused on debunking criticism of the two AES papers.

This time his initial salvo featured a personal attack on my credibility, and on the critical findings that were posted by several independent sources on the AES Conference web site in reference to the BS paper.

For our mutual speculation, here is an "Interesting coincidence" suggesting that BobBaker may be more than a nym:

http://www.bob-baker.com/ (http://www.bob-baker.com/)

Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Arnold B. Krueger on 2016-09-01 08:55:19
Actually I understand transition band, ADCs, DACs and SRCs quite well. Quiz me.
Ok. What are your concerns with the "typical" CDs you buy?

I am anonymous (bobbaker is a pseudonym).


So you could be just a casual curious observer...or someone with skin in the game.

For our mutual speculation, here is an "Interesting coincidence" suggesting that BobBaker may be more than a nym:

http://www.bob-baker.com/

The disclaimer could be a lame attempt to disconnect his real-world self from the failure and embarrassment that his contributions to this thread have ended up being for him.

Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: greynol on 2016-09-02 00:41:55
For our mutual speculation
Speculation?  It has already been stated that I have superpowers, one of them being omniscience.

Seriously though, I have deeper access to the information available on this forum.  But if it helps, then sure:  you're welcome to think that I'm speculating.

The other bob baker should thank you for spamming our site with his link to his services; not once, but twice.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: WernerO on 2016-09-02 09:37:05
The consequence of a narrower transition band is the generation of excessive and unnatural artifacts, including variations in the far more audible passband (LF) region, and ringing at or near the Nyquist frequency (22 kHz).... my preliminary tests show that digital filters get really squirrely with transition bands this excessively narrow. The primary artifacts turn out to be surprisingly broad peaks and dips on the order of 2-5 dB in the passband region, which is to say the normal audio band going down to 1 kHz and below.

Could you elaborate on this? There is no reason for a narrow transition band to affect the pass band like you report. Might it be that your filter design is less than competent?
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: bandpass on 2016-09-02 13:43:12
The consequence of a narrower transition band is the generation of excessive and unnatural artifacts, including variations in the far more audible passband (LF) region, and ringing at or near the Nyquist frequency (22 kHz).... my preliminary tests show that digital filters get really squirrely with transition bands this excessively narrow. The primary artifacts turn out to be surprisingly broad peaks and dips on the order of 2-5 dB in the passband region, which is to say the normal audio band going down to 1 kHz and below.

Could you elaborate on this? There is no reason for a narrow transition band to affect the pass band like you report. Might it be that your filter design is less than competent?

Sounds like a bad filter to me too. E.g. commonly-used filter design methods include: windowed-sinc, where passband ripple is largely independent of transition band width (in fact, it closely tracks the stopband ripple); and Parks-McClellan, where passband ripple can be set as small (or large) as you like, independently of both stopband ripple and transition band width.

E.g., passband ripple of SSRC (very narrow transition band):
(http://src.infinitewave.ca/images/Passband/SSRC_HP.png)
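[Editor's note: the "passband ripple tracks stopband ripple" behavior of a windowed-sinc design described above is easy to check numerically. The following is a minimal pure-Python sketch with made-up design parameters, not code from any of the papers or converters discussed; it designs a Kaiser-windowed sinc lowpass at a 96 kHz sample rate and evaluates its magnitude response directly.]

```python
import math

def i0(x):
    """Modified Bessel function of the first kind, order 0 (power series)."""
    term, total = 1.0, 1.0
    for k in range(1, 60):
        term *= (x / (2.0 * k)) ** 2
        total += term
    return total

def kaiser_sinc_lowpass(num_taps, fc_hz, fs_hz, beta):
    """Windowed-sinc FIR lowpass; fc_hz is the cutoff, fs_hz the sample rate."""
    m = num_taps - 1
    fc = fc_hz / fs_hz                      # normalized cutoff (cycles/sample)
    taps = []
    for n in range(num_taps):
        k = n - m / 2.0
        # ideal (sinc) lowpass impulse response
        h = 2.0 * fc if k == 0 else math.sin(2.0 * math.pi * fc * k) / (math.pi * k)
        # Kaiser window: beta sets the stopband (and hence passband) ripple
        w = i0(beta * math.sqrt(1.0 - (2.0 * k / m) ** 2)) / i0(beta)
        taps.append(h * w)
    return taps

def magnitude_db(taps, f_hz, fs_hz):
    """Magnitude response at one frequency, by direct evaluation of the DTFT."""
    w = 2.0 * math.pi * f_hz / fs_hz
    re = sum(t * math.cos(w * n) for n, t in enumerate(taps))
    im = sum(t * math.sin(w * n) for n, t in enumerate(taps))
    return 20.0 * math.log10(math.hypot(re, im))

# 96 kHz sample rate, cutoff near 21 kHz, beta = 10 (roughly a 100 dB design)
taps = kaiser_sinc_lowpass(401, 21025.0, 96000.0, 10.0)
print(magnitude_db(taps, 1000.0, 96000.0))    # passband: a tiny fraction of a dB from 0
print(magnitude_db(taps, 30000.0, 96000.0))   # stopband: far below -80 dB
```

With these parameters the passband stays flat to well under 0.01 dB while the stopband sits far below -80 dB, i.e. tightening the transition band (by adding taps) does not disturb the passband of a competent windowed-sinc design.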
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Arnold B. Krueger on 2016-09-02 14:04:31
The consequence of a narrower transition band is the generation of excessive and unnatural artifacts, including variations in the far more audible passband (LF) region, and ringing at or near the Nyquist frequency (22 kHz).... my preliminary tests show that digital filters get really squirrely with transition bands this excessively narrow. The primary artifacts turn out to be surprisingly broad peaks and dips on the order of 2-5 dB in the passband region, which is to say the normal audio band going down to 1 kHz and below.

Could you elaborate on this?

While I have the results that I posted, I'm not satisfied that they are the best I can do with a reasonable effort, so I'm teaching myself Octave, which seems to have more of the tools that I need. In particular, it has tools for designing and simulating a wider variety of digital filters, including some that seem to be exactly like the digital filters in DACs, ADCs, and sample rate converters, whether software or hardware. So I labelled my results "preliminary".

What I saw in the tests I did was that simulating filters with extremely narrow transition bands created large, broad dips and peaks at frequencies well below the design frequencies of the filters. The filters I was working with seemed well-behaved as long as I used them in ways that seemed typical to me.

Quote
There is no reason for a narrow transition band to affect the pass band like you report.

That's what I thought at first, but I tried my tests with a variety of filters from a variety of sources, not just one filter from one implementer of filters.

Quote
Might it be that your filter design is less than competent?

The filters behaved well with typical sets of parameters, IOW with transition bands at least several hundred Hz wide in filters operating around 20 kHz. Trouble is, one of the papers talked about training listeners with filters whose transition bands were only a few Hz wide.
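[Editor's note: to put rough numbers on how extreme a transition band of only a few Hz is, the standard Kaiser length estimate for a windowed-sinc FIR lowpass, N ≈ (A - 8)/(2.285·Δω), says the tap count grows inversely with transition-band width. A quick sketch with illustrative parameters, not figures taken from the papers:]

```python
import math

def kaiser_fir_length(atten_db, transition_hz, fs_hz):
    """Kaiser's estimate of the FIR length needed for a windowed-sinc
    lowpass with the given stopband attenuation and transition width."""
    delta_w = 2.0 * math.pi * transition_hz / fs_hz   # transition width, rad/sample
    return math.ceil((atten_db - 8.0) / (2.285 * delta_w))

# At a 96 kHz sample rate and roughly 100 dB attenuation:
print(kaiser_fir_length(100, 900, 96000))   # ~900 Hz transition: hundreds of taps
print(kaiser_fir_length(100, 20, 96000))    # 20 Hz transition: tens of thousands of taps
```

So a ~900 Hz transition band at 96 kHz needs a few hundred taps, while a 20 Hz transition band needs tens of thousands: a very unusual operating point, whether or not such filters actually misbehave.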

It's not unusual for well-behaved designs to become unstable when operated with extreme sets of parameters. I've seen this happen before, and not just with filters. I have found that every good design methodology has a natural range of effective performance, but you can stretch things too far. This particular set of experimenters don't seem to be worried about such mundane things, in many areas, not just filters. Since they were working with abstractions, they could easily ignore whatever didn't interest them. For example, they could have published typical response curves for all the modes they used the filters in, and provided exact filter characteristics and designs, but they didn't.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Arnold B. Krueger on 2016-09-02 14:11:19
The consequence of a narrower transition band is the generation of excessive and unnatural artifacts, including variations in the far more audible passband (LF) region, and ringing at or near the Nyquist frequency (22 kHz).... my preliminary tests show that digital filters get really squirrely with transition bands this excessively narrow. The primary artifacts turn out to be surprisingly broad peaks and dips on the order of 2-5 dB in the passband region, which is to say the normal audio band going down to 1 kHz and below.

Could you elaborate on this? There is no reason for a narrow transition band to affect the pass band like you report. Might it be that your filter design is less than competent?

Sounds like a bad filter to me too. E.g. commonly-used filter design methods include: windowed-sinc, where passband ripple is largely independent of transition band width (in fact, it closely tracks the stopband ripple); and Parks-McClellan, where passband ripple can be set as small (or large) as you like, independently of both stopband ripple and transition band width.

E.g., passband ripple of SSRC (very narrow transition band):
(http://src.infinitewave.ca/images/Passband/SSRC_HP.png)

Interesting results. Since SSRC is PD, this should be easy to duplicate were you to provide all of the relevant parameters. Or is there some reason why they need to remain secret?
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: ajinfla on 2016-09-02 14:30:29
Actually I understand transition band, ADCs, DACs and SRCs quite well. Quiz me.
Ok. What are your concerns with the "typical" CDs you buy?
Fail
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Arnold B. Krueger on 2016-09-02 15:20:46
Actually I understand transition band, ADCs, DACs and SRCs quite well. Quiz me.
Ok. What are your concerns with the "typical" CDs you buy?
Fail

I observe that this person does not in general answer reasonable questions that other people ask. You don't really discuss issues with him; instead he tries to keep you busy dealing with his insulting and imaginative false claims.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: bandpass on 2016-09-02 18:02:16
Quote from: Arnold B. Krueger
Interesting results. Since SSRC is PD, this should be easy to duplicate were you to provide all of the relevant parameters. Or is there some reason why they need to remain secret?
The graph above is from the SRC comparison site, but there's nothing special about SSRC w.r.t. its filter design: it's simply Kaiser-windowed sinc (same for sox and libsamplerate, btw).
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Arnold B. Krueger on 2016-09-02 18:25:40
Quote from: Arnold B. Krueger
Interesting results. Since SSRC is PD, this should be easy to duplicate were you to provide all of the relevant parameters. Or is there some reason why they need to remain secret?
The graph above is from the SRC comparison site, but there's nothing special about SSRC w.r.t. its filter design: it's simply Kaiser-windowed sinc (same for sox and libsamplerate, btw).


The first post said it was from SSRC, but now that I ask for details, they are not forthcoming?

I'm  looking for factual evidence, not mere speculation about nameless SRC software.

Is this from BobBaker or the Meridian gang under a different alias?

Besides, there's more than frequency response to filters. There could be other things such as phase response that make this currently nameless filter audibly flawed.  Next time, no mystery meat, please?
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: krabapple on 2016-09-02 18:32:27
Quote from: bobbaker
I buy typical CDs, not fastidiously produced ones. I’m speaking to my interest, not anything stated by others.

e for me includes what is relevant to buying music produced with imperfect choices and mistakes that human sound engineers would likely make. If typical “hi-res” is audibly better than “typical” CDs, that may be important to some. It would be important to me because I’m interested, but on its own, it would not make my buying decisions. In combination with, say, price, it could.

But there's the rub.  What if 'typical' hi rez (or at least, the hi rez used to establish its superiority to CD) is more 'fastidiously produced'?

Is the difference really down to 'hi rez vs CD' formats in that case...or simply, production practices?
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: greynol on 2016-09-02 18:33:53
no mystery meat, please?
Mystery meat?  How a long-time forum member can't know anything about SSRC is the true mystery.

A simple google search would have gotten you what you needed, but at any rate...
https://github.com/shibatch/SSRC
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: bandpass on 2016-09-02 22:48:49
Let's switch tack. Arny, here's some Octave code that I think does what you want:
Code: [Select]
pkg load signal

% Design params:
fn=48000  % Nyquist freq. (Hz)
fc=22050  % Corner freq. (Hz)
tbw=20    % Transition band width (Hz)
attn=100  % Stopband attenuation (dB)

% Make filter:
d=10^(-attn/20)
[n, w, beta, ftype] = kaiserord ([fc-tbw/2, fc+tbw/2], [1, 0], [d d], fn*2);
b = fir1 (n, w, kaiser (n+1, beta), ftype, "noscale");

% Plot magnitude response:
[h f] = freqz(b,1,2^18); plot(f/pi*fn, 20*log10(abs(h))); grid; pause

% Zoom on pass band:
axis([0 1.1*fc -.1 .1]); pause

% Zoom on transition band:
axis([fc-50 fc+50 -12*beta 10]); pause
It implements a 96000 -> 44100 brick-wall decimation filter; passband ripple is negligible.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Arnold B. Krueger on 2016-09-03 17:22:42

The graph above is from the src comparison site, but there's nothing special about SSRC w.r.t. it's filter design: it's simply kaiser-windowed sinc (same for sox and libsamplerate, btw).

I tested SRC with default parameters, specifying only that the 96 kHz input file be downsampled to 44,100 Hz.

The transition band was tested with 96 kHz sample rate files containing, in one case, multitones (on 100 Hz centers) and, in the other, a swish, and I found the transition band to be about 900 Hz wide. The ripple was indeed very low, but the transition band width was at the lower end of the normal range.

Next: The octave file you kindly provided.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Arnold B. Krueger on 2016-09-03 17:28:24
no mystery meat, please?
Mystery meat?  How a long-time forum member can't know anything about SSRC is the true mystery.

Who might that be?

Couldn't be me, because I said it was PD and easy to download. What I didn't say but could have, is that I had used it in the past, when my information needs were not as detailed.

The mystery was the details of the test results.  The plot provided lacked the detail  required to accurately determine the transition band.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: greynol on 2016-09-03 18:09:33
It might have been you, but you've made it clear that I was mistaken.  Still, please don't be shocked when your theory about digital filters with narrow transition bands falls apart.

Now, maybe you can tell me how any of this belongs in a topic about Reiss's meta analysis.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Arnold B. Krueger on 2016-09-04 10:15:31
It might have been you, but you've made it clear that I was mistaken.  Still, please don't be shocked when your theory about digital filters with narrow transition bands falls apart.

I'm surprised that I have to remind an omniscient being about how science works. You develop hypotheses, you do experiments, you make observations, you critique your work, and when you think you have some kind of final answer you present it as such; until then you label it "preliminary". If the final evidence supports the hypothesis, that is one possible outcome, and you rejoice in it because you learned something; if the final evidence fails to support or even contradicts the hypothesis, you rejoice in it because you learned something.

Phrases like "fall apart" suggest to me some kind of emotional attachment to one outcome or the other. Who are you talking about?

Quote
Now, maybe you can tell me how any of this belongs in a topic about Reiss's meta analysis.

Again, that this is phrased as a question suggests a failure to be omniscient.

One very important characteristic of a meta analysis is that the studies that made up the meta analysis are more than a tiny bit related to each other. 

In the midst of this apparent failure of omniscience, I'm forced to remind people that a study was done of the program material used in the 20-odd studies that made Reiss's final cut for his meta-analysis, and it was actually quite varied, in fundamental ways. All but one of the studies were so poorly described and documented that recreating them by an independent worker would be impossible. IMO, not good candidates for merging together in a meta-analysis.

Another issue that is fundamental to this group of studies is the nature of the narrow-band version of the signal that was compared to the wide-band, so-called high-resolution form of the same basic audio signal. This signal has at least three important characteristics, namely the bandpass high-frequency limit, the width of the transition band, and the low limit of the (for any format that pretends to be sonically transparent) presumably ultrasonic stop band. Any one of these three pieces of data can be accurately deduced from the other two, which is important because often only two of them are well documented.
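To make the deduction concrete, here is a trivial sketch with hypothetical numbers (a CD-rate-like filter, purely for illustration): given any two of the three characteristics, the third is simple arithmetic.

```python
# Hypothetical numbers, only to illustrate the relationship:
# stopband start = passband edge + transition-band width, so any one
# of the three characteristics follows from the other two.
passband_edge  = 20_000.0   # Hz: high-frequency limit of the passband (assumed)
stopband_start = 22_050.0   # Hz: where full attenuation begins (assumed)

transition_width = stopband_start - passband_edge
print(transition_width)     # 2050.0 Hz
```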

If these studies varied significantly in terms of the basic nature of the so-called low-resolution signal, then they are again poor choices for a meta-analysis.  The word "significantly" is logically linked to audibility, so this criticism of Reiss's alleged meta-study hinges on whether the low-pass filters used in the component studies were similar enough for the studies to be lumped together in the meta-study.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: bennetng on 2016-09-04 12:00:40
One possible consequence of a steep filter is higher intersample peaks. The paper below suggests that not all audio players have enough headroom to deal with them.
https://service.tcgroup.tc/media/Level_paper_AES109(1).pdf

Some DAC makers like Benchmark Media are aware of the issue, as shown in this thread.
https://hydrogenaud.io/index.php/topic,98753.0.html

Some users believe that they should not alter volume in the digital domain and prefer to use an analog volume control, without knowing or caring about intersample peaks.

dBTP metering was introduced to deal with this issue, but all I know is that 4x upsampling at 44.1/48 kHz and 2x upsampling at 88.2/96 kHz are used to estimate the true peak; I don't know the other details. I also don't know how popular dBTP metering is in the audio industry.
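The oversampling idea behind true-peak metering can be sketched in pure Python. The classic worst case is a tone at fs/4 sampled at 45° phase: every sample lands at about 0.707 of full scale, yet the reconstructed waveform peaks at 1.0, roughly +3 dB above the sample peak. A windowed-sinc interpolator evaluated at 4x positions recovers the intersample peak. This is only an illustrative sketch of the principle, not the exact resampler any meter standard specifies.

```python
import math

fs = 44100
f  = fs / 4                    # 11025 Hz tone
ph = math.pi / 4               # phase chosen so the samples straddle the peak
N  = 256
x  = [math.sin(2 * math.pi * f * n / fs + ph) for n in range(N)]

sample_peak = max(abs(v) for v in x)          # ~0.707 of full scale

def interp(x, t, half=32):
    """Hann-windowed sinc interpolation of x at fractional index t."""
    acc = 0.0
    for k in range(max(0, int(t) - half), min(len(x), int(t) + half + 1)):
        u = t - k
        s = 1.0 if u == 0 else math.sin(math.pi * u) / (math.pi * u)
        w = 0.5 + 0.5 * math.cos(math.pi * u / (half + 1))
        acc += x[k] * s * w
    return acc

# Evaluate at 4x-oversampled positions, away from the block edges:
true_peak = max(abs(interp(x, 64 + i / 4)) for i in range(4 * 128))
over_db = 20 * math.log10(true_peak / sample_peak)
print(over_db)   # roughly +3 dB of intersample over
```

A meter reading only `sample_peak` would report about -3 dBFS while the DAC's reconstruction filter actually has to swing to full scale, which is exactly the headroom problem the paper describes.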

Of course I can always use ABX to test it myself, but I would like to know if there are any studies or papers about the audibility of such issues.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Wombat on 2016-09-04 18:28:20
If resampling produces inter-sample overs, the source should already contain inter-sample overs. Distributed music may, as a last step, have a slight volume drop applied to make them non-obvious to Audacity cowboys. It has often been talked about, but I still don't have a single sample that sounds clipped from the resampling process.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: greynol on 2016-09-04 18:34:08
Ah, but there's an important "advantage," especially for trained listeners, allowing them to do slightly better at distinguishing a difference than flipping a coin.  The apparently unknown cause of this difference isn't supposed to matter.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: greynol on 2016-09-04 19:14:16
In case SoundAndMotion, SoundAndMotion2, and Jakob1863 are still following along...
https://hydrogenaud.io/index.php/topic,107124.msg883558.html#msg883558
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: old tech on 2016-11-07 01:41:16
Hi all

I’m a newbie here so please be gentle.  I have followed this thread with interest for nearly a year.  As I understand it, a lot of the discussion here spawns from the meta-analysis study, which may be flawed due to the inclusion of several studies which have been shown to be lacking in methodological rigour, or which have later been refuted.

There is a lot of discussion about potential effects of different ADCs and DACs, resampling and so on.  I’m no scientist or audio engineer but what I do not understand is why doesn’t Meridian or a well-resourced independent body (eg the AES) fund and construct another proper double blind listening test to settle this once and for all?

The reason I ask is that I thought the whole debate about hi-res vs CD quality was resolved back in 2007 with the Meyer and Moran study in the link below, noting that this study was built upon several studies before it which established the transparency of the CD.

Where is the critique of the Meyer and Moran study that led to the arguably inferior meta-analysis study we are discussing here?  I would have thought that a year-long, multi-trial study using trained musicians, recording producers, and audiophiles as well as the average Joe Blow, some using their own music in their own homes on their own equipment, would be definitive.

Surely if Meridian or another body disagrees with the 2007 paper, they could spell out why and then design another similar year-long, multi-trial, multi-subject experiment to back their cause?

BTW, this is not a rhetorical question, it is something that I genuinely find puzzling.

http://drewdaniels.com/audible.pdf
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Thad E Ginathom on 2016-11-07 19:33:35
what I do not understand is why doesn’t Meridian or a well-resourced independent body (eg the AES) fund and construct another proper double blind listening test to settle this once and for all?

Fear.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Arnold B. Krueger on 2016-11-08 13:19:05
  As I understand it, a lot of the (recent) discussion here spawns from the meta-analysis study which may be flawed due to the inclusion of several studies which have shown to be lacking in methodological rigour or those which have later been refuted.

The problem with the meta-analysis is far more basic than that.  The meta-analysis itself involves 20-ish different attempts (since 1980, most after Y2K) to show an audible benefit to so-called high-resolution audio.  The vast majority of them failed to obtain results that were significantly positive for their thesis. The author took advantage of the well-known fact that you can combine the results of individually failed tests to create the appearance of significant results. However, combining tests is valid only if the individual tests are themselves valid, and also very similar to each other. Examination of the papers describing the tests shows that this is not the case. So, the meta-analysis itself failed during the preliminary selection of tests, even before the detailed analysis was started.

A second problem is that the author seems to have based his claims about the significance and relevance of his results quite heavily on statistical significance.  As pointed out above, impressive numbers for statistical significance can be fabricated by simply repeating an experiment whose results are largely random a very large number of times.  An experiment with >150 trials can reach 95% statistical significance with fewer than 60% correct answers, which is rather close to the 50% correct you get if the listeners in an A/B test are guessing purely randomly.  I don't think that many consumers are going to do >100 listening trials to obtain "evidence of their ears" that something sounds better.
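The "fewer than 60% correct" figure checks out with an exact one-sided binomial test, which a few lines of Python can compute: with 150 fair-coin trials, find the smallest number of correct answers whose tail probability drops below 0.05.

```python
from math import comb

# Exact one-sided binomial test against pure guessing (p = 0.5):
# smallest k with P(X >= k) < 0.05 for n = 150 trials.
n = 150
total = 2 ** n
k = next(k for k in range(n + 1)
         if sum(comb(n, j) for j in range(k, n + 1)) / total < 0.05)

print(k, k / n)   # 86 correct out of 150 -> ~57.3%
```

So 86/150 ≈ 57.3% correct is already "statistically significant" at the 95% level, well under 60% and not far from coin-flipping.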


Quote
There is a lot of discussion about potential effects of different ADCs and DACs, resampling and so on.

General consensus: They sound so similar that they don't matter.

Quote
  I’m no scientist or audio engineer but what I do not understand is why doesn’t Meridian or a well-resourced independent body (eg the AES) fund and construct another proper double blind listening test to settle this once and for all?

Now, each of those 20-ish tests was no doubt a test that its proponents thought would be the be-all, end-all definitive test that would finally settle the question affirmatively and forever, but that is not how the data worked out. Therefore, we have something like 20 years of history of more than 20 tests whose results strongly suggest that the next test, or 5 tests, or 10 tests, will not finally settle the question either.

Quote
The reason I ask is that I thought the whole debate about hi-res vs CD quality was resolved back in 2007 with the Meyer and Moran study in the link below, noting that this study was built upon several studies before it which established the transparency of the CD. 

Where is the critique of the Meyer and Moran study that led to the arguably inferior meta-analysis study we are discussing here?  I would have thought that a year-long, multi-trial study using trained musicians, recording producers, and audiophiles as well as the average Joe Blow, some using their own music in their own homes on their own equipment, would be definitive.

Surely Meridian or another body disagrees with the 2007 paper they could spell out why and then design another similar year long, multi-trial, multi subjects experiment to back their cause?

BTW, this is not a rhetorical question, it is something that I genuinely find puzzling.

http://drewdaniels.com/audible.pdf


I believe you can download an AES paper that contains a critique of Meyer and Moran's study here:

http://www.aes.org/e-lib/browse.cfm?elib=18296 (http://www.aes.org/e-lib/browse.cfm?elib=18296)

The paper is claimed to be "open access" which means that you should be able to download it for free.

It is as follows:

"
2.2 Meyer 2007 Revisited

Meyer 2007 deserves special attention, since it is
well-known and has the most participants of any study, but
could only be included in some of the meta-analysis in Sec.
3 due to lack of data availability. This study reported that
listeners could not detect a difference between an SACD or
DVD-A recording and that same recording when converted
to CD quality. However, their results have been disputed,
both in online forums (www.avsforum.com,
www.sa-cd.net, www.hydrogenaud.io and secure.aes.org/forum/pubs/journal/)
and in research
publications [11, 76].
First, much of the high-resolution stimuli may not have
actually contained high-resolution content for three reasons;
the encoding scheme on SACD obscures frequency
components above 20 kHz and the SACD players typically
filter above 30 or 50 kHz, the mastering on both the
DVD-A and SACD content may have applied additional
low pass filters, and the source material may not all have
been originally recorded in high resolution. Second, their
experimental set-up was not well-described, so it is possible
that high resolution content was not presented to the
listener even when it was available. However, their experiment
was intended to be close to a typical listening experience
on a home entertainment system, and one could argue
that these same issues may be present in such conditions.
Third, their experiment was not controlled. Test subjects
performed variable numbers of trials, with varying equipment,
and usually (but not always) without training. Trials
were not randomized, in the sense that A was always the
DVD-A/SACD and B was always CD. And A was on the
left and B on the right, which introduces an additional issue
that if the content was panned slightly off-center, it might
bias the choice of A and B.

Meyer and Moran responded to such issues by stating
[76], “... there are issues with their statistical independence,
as well as other problems with the data. We did
not set out to do a rigorous statistical study, nor did we
claim to have done so. ...” But all of these conditions...
"

(please see linked article for the rest of this critique)

The biggest problem with the Meyer and Moran paper is that it presumed that the recording industry is honest and transparent and can be taken at its word when it advertises a product as being high resolution.  It was later discovered that the record industry had misrepresented products with uncorrectably low-resolution provenance as being high resolution.  Since they did not individually qualify the actual content of each recording by reliable means, it is probable that on the order of 50% of the recordings their study was based on were actually CD quality or worse.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Thad E Ginathom on 2016-11-08 19:34:52
what I do not understand is why doesn’t Meridian or a well-resourced independent body (eg the AES) fund and construct another proper double blind listening test to settle this once and for all?
Fear.


OK, so it was a "clever" single-word answer.  Whilst maybe, just maybe, those guys do not want to take the risk, I think it is more that they have totally bought into their delusions. They have no room for doubt. And if they fail a test, it was the test that was wrong.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: pdq on 2016-11-08 21:47:18
what I do not understand is why doesn’t Meridian or a well-resourced independent body (eg the AES) fund and construct another proper double blind listening test to settle this once and for all?

Fear.


Greed.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: old tech on 2016-11-08 23:18:20
what I do not understand is why doesn’t Meridian or a well-resourced independent body (eg the AES) fund and construct another proper double blind listening test to settle this once and for all?
Fear.


OK, so it was a "clever" single-word answer.  Whilst maybe, just maybe, those guys do not want to take the risk, I think it is more that they have totally bought into their delusions. They have no room for doubt. And if they fail a test, it was the test that was wrong.

I can understand that from Meridian's perspective, but it seems strange that the AES, a university or some other independent body has not done so.  I would have thought that from a science perspective there would be a lot of keen researchers jumping to do something like it.

Of course, there is an argument that there is nothing in digital audio technicalities or the science which would suggest that the difference between 16/44 and hi-res would be audible to humans (assuming all variables other than bit depth and sample rate are controlled), and that the Meyer and Moran study and the studies preceding it have settled this issue for most reasonable-minded people.

I know it doesn't matter how well digital theory is understood and how many well-designed tests are conducted; it will still not convince those with strong beliefs or faith, or those with a commercial agenda to believe or spread misinformation.  However, the (overstated) criticism of the Meyer and Moran paper is that most of the listening material was not from a hi-res master.  That is where the focus should be, ie replicating what was a well-designed study but ensuring all source material is actually from a hi-res source.

That of course glosses over the more important, indirect shadow test: for many years the general public had been purchasing and playing those SACDs and DVD-As, and yet not one golden-eared reviewer or audiophile picked them out as not being hi-res.  In the end it was measurements, rather than listening tests, which confirmed they were not from hi-res masters.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: pelmazo on 2016-11-09 10:58:12
I can understand that from the Meridian's perspective, but it seems strange that the AES, a University or some other independent body has not done so.  I would have thought from a science perspective there would be a lot of keen researchers jumping to do something like it.
There would have to be funding. Neither the AES nor other independent bodies have the funds and/or the motivation to make this happen. The AES could be expected to organize it, provided that the money comes from somewhere. The Meyer/Moran study is already exceptional in this regard, you can't expect this to happen easily.

Quote
I know it doesn't matter how well digital theory is understood and how many well designed tests are conducted, it will still not convince those with strong beliefs or faith and those with a commercial agenda to believe or spread misinformation.  However, the (overstated) criticism of the Myer and Moran paper is that most of the listening material was not from a hi res master.  That is where the focus should be, ie replicating what was a well designed study but ensuring all source material is actually from a hi res source.
The understanding of digital theory is one thing, but more important is the understanding of human hearing. Both are developed enough that there really shouldn't be a question about HRA being audible. This is also one of the reasons why there aren't more scientific studies. Amongst the more clued up scientists, there isn't much hope of finding anything that differs significantly from what we already know. The whole thing is wishful thinking on behalf of those who see a chance to make a buck.

The meta-analysis, and also the criticism of Meyer/Moran, only show that to be true. To criticise Meyer/Moran because they included material that was "not really" Hi-Res is quite hypocritical. There is still no clear definition of what Hi-Res means today, and that's 10 years after their study. On what grounds should they have drawn a line? They did the sensible thing: They took the material that was available as Hi-Res material, so they took what other people said was Hi-Res, thereby avoiding a decision that would have been controversial no matter how it went. Their result means that what the market presented to the consumer as being Hi-Res was indistinguishable from the same material converted down to 44.1/16. It shows convincingly that the Hi-Res market was a fraud 10 years ago. My opinion is that this is at least as true today as it was back then.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: ajinfla on 2016-11-09 13:50:25
However, the (overstated) criticism of the Meyer and Moran paper is that most of the listening material was not from a hi-res master. 
That has become immaterial, now that we have MiracleQA.
From the BS man himself (http://www.stereophile.com/content/mqa-and-warner-real-scoop#1f8yJZ7dEYo44OVK.97):

Quote
CD-quality masters? That's hardly high-resolution.
Sure, but it's about the music, right? Stuart indicates that MQA is not about high resolution in the usual sense; it's about "authenticity".
"As far as we're concerned, anything from a cylinder forward is legitimate as long as it's the definitive statement about a recording," Stuart told me. "If a recording is important enough, and all there is is a 78, that's where we start. . . We're really concerned about producing the definitive thing," not the thing with the highest bit depth or sampling rate.

Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: pelmazo on 2016-11-09 19:29:18
In short: Hi-Res is a "definitive statement", presumably made by those who pocket the money. With this quote BS vindicates Meyer and Moran in their choice of material. He just wouldn't admit it.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: old tech on 2016-11-11 05:03:12
Arnold B. Krueger et al

Thanks for all that info, but the point remains - surely it is timely to do another Meyer and Moran-type study which addresses all the critiques that are relevant, particularly as technology and formats have progressed since 2007 - ie sourcing of hi-res material using PCM FLAC files rather than DSD, randomisation of subjects, a more sophisticated A/B/X switch etc.  I'm sure the results would be the same, just as I am sure it still will not silence the critics, but it should raise the bar much higher for the Stuarts of this world.  However, as you say, resourcing such a study is an issue.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: ajinfla on 2016-11-11 16:03:08
surely it is timely to do another Meyer and Moran-type study
Yes if you are a believer. No if you have >2 functional brain cells.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Arnold B. Krueger on 2016-11-11 18:59:01
surely it is timely to do another Meyer and Moran-type study
Yes if you are a believer. No if you have >2 functional brain cells.

Some might find the above a little harsh, but the point being made is factual.

I read each of the approximately 20 studies that went into the meta-analysis.

At least half of them looked pretty good to me, which is to say that they looked like someone gave it a heck of a try, and they did not find statistically significant results supporting any real-world need for so-called high-resolution audio as compared to a good implementation of the Red Book audio CD standard.

Add to this the similar tests that I've done for myself, and others that have been reported.

Then, there is the fact that all established knowledge of the performance of the human ear says basically the same thing.

Not being sadistic or masochistic, I'm not asking for any further testing.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: board on 2017-01-04 16:50:32
I hope it's okay that I bump this thread, as I hope a friendly person could summarize a bit for me as well as inform me about a few other related things that were being debated in other similarly themed threads some years back (and as this thread is current it makes more sense for me to post here). I'm not as technically-minded as most here, so simple explanations, easy to understand numbers, and yes/no answers are greatly appreciated whenever possible :-).
I didn't read all 15 pages of this thread but read a bit here and there (mostly beginning and end). Arny's last comment seemed to summarize everything pretty well: So far no reliable paper has shown that hi-res can reliably be distinguished from CD quality in a listening test. Correct?

A while ago I read the AES paper about distinguishing between 44.1 and 88.2 kHz sample rates (this one: http://www.aes.org/e-lib/browse.cfm?elib=15398), and today I read the HA discussion about this. I was still a bit unsure if this paper was actually considered to properly show a difference could be detected. Can anyone elaborate/explain?
My own understanding of that paper seemed to be that they concluded that three people who answered wrong were considered as answering right, and their "statistically significant" results were only around 60 % correct. This seemed a strange conclusion to me.

Lastly, in 2010 the HA user 2Bdecided started a thread about the audibility of brickwall filters (this: https://hydrogenaud.io/index.php/topic,68524.0.html). What came of that? There were several positive results. Did anybody draw a conclusion about brickwall filters actually being audible in general, or was there a flaw in the files, software or the methodology?

Thanks, everybody :-)
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Joe Bloggs on 2017-01-04 18:29:08
I hope it's okay that I bump this thread, as I hope a friendly person could summarize a bit for me as well as inform me about a few other related things that were being debated in other similarly themed threads some years back (and as this thread is current it makes more sense for me to post here). I'm not as technically-minded as most here, so simple explanations, easy to understand numbers, and yes/no answers are greatly appreciated whenever possible :-).
I didn't read all 15 pages of this thread but read a bit here and there (mostly beginning and end). Arny's last comment seemed to summarize everything pretty well: So far no reliable paper has shown that hi-res can reliably be distinguished from CD quality in a listening test. Correct?

A while ago I read the AES paper about distinguishing between 44.1 and 88.2 kHz sample rates (this one: http://www.aes.org/e-lib/browse.cfm?elib=15398), and today I read the HA discussion about this. I was still a bit unsure if this paper was actually considered to properly show a difference could be detected. Can anyone elaborate/explain?
My own understanding of that paper seemed to be that they concluded that three people who answered wrong were considered as answering right, and their "statistically significant" results were only around 60 % correct. This seemed a strange conclusion to me.

Lastly, in 2010 the HA user 2Bdecided started a thread about the audibility of brickwall filters (this: https://hydrogenaud.io/index.php/topic,68524.0.html). What came of that? There were several positive results. Did anybody draw a conclusion about brickwall filters actually being audible in general, or was there a flaw in the files, software or the methodology?

Thanks, everybody :-)

Well I can't comment on the rest of your questions, but for the brickwall filter test, note that everybody settled on trying to ABX the maximum phase filter because 2B said that should be the easiest to ABX.  Ahem, nobody in their right mind would use a maximum phase filter for the brickwall, so...
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: board on 2017-01-04 18:34:21
Well I can't comment on the rest of your questions, but for the brickwall filter test, note that everybody settled on trying to ABX the maximum phase filter because 2B said that should be the easiest to ABX.  Ahem, nobody in their right mind would use a maximum phase filter for the brickwall, so...
So, the test that generated positive results used a type of filter that is not in use in any DAC in production?
If I understand that correctly, is a filter like that a filter that cuts nothing off above the Nyquist frequency?
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: ajinfla on 2017-01-04 19:12:17
So far no reliable paper has shown that hi-res can reliably be distinguished from CD quality in a listening test. Correct?
No. No paper has shown audibility related to the real world of adult hearing, consumer content, ambient room noise and systems. Pathological, concocted examples at hearing-damage thresholds are always possible.
So..https://hydrogenaud.io/index.php/topic,112204.msg925135.html#msg925135 (https://hydrogenaud.io/index.php/topic,112204.msg925135.html#msg925135)

Jan 2017. Still waiting.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: board on 2017-01-04 19:31:38
Thanks, Ajinfla :-)!
Actually, I find Mark Waldrep's YouTube video presentation on hi-res really good. I don't agree with him per se, but I think he presents it in a completely honest way, and he basically admits that nobody has been able to provide substantial evidence for hi-res's audible superiority. Instead he seems to say that, for intellectual satisfaction, he wants to preserve everything that was in the original signal despite it being inaudible. Although I find it pointless, I can respect that :-). Not to mention that's much better reasoning than "I can bloody well hear the difference, because it's so goddamn obvious, but I refuse to take a blind test", as many people (especially one reviewer, you might be able to guess who) say.

I'll quote myself now:
Well I can't comment on the rest of your questions, but for the brickwall filter test, note that everybody settled on trying to ABX the maximum phase filter because 2B said that should be the easiest to ABX.  Ahem, nobody in their right mind would use a maximum phase filter for the brickwall, so...
So, the test that generated positive results used a type of filter that is not in use in any DAC in production?
If I understand that correctly, is a filter like that a filter that cuts nothing off above the Nyquist frequency?

I got a bit confused when I googled "maximum phase filter". A webpage said it was the same as an all-pass filter, but when I look (again) at the pictures that were posted in that specific HA discussion, the picture clearly shows content has been cut off at the top. So I probably misunderstood :-).
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: board on 2017-01-04 20:26:01
By the way: What do you guys think of the results in the paper that gave scores of anywhere from 55 % to 74 % correct? For those of you who have read those specific papers, are you able to sum up why those papers are not reliable?
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: ajinfla on 2017-01-05 00:00:10
Thanks, Ajinfla :-)!
You're welcome.

By the way: What do you guys think of the results in the paper that gave scores of anywhere from 55 % to 74 % correct?
Given your acknowledgement of what I wrote, why don't you tell us?
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Porcus on 2017-01-05 14:19:05
So far no reliable paper has shown that hi-res can reliably be distinguished from CD quality in a listening test. Correct?

As ajinfla hints at, there are some reservations to a "no", if you are interested in listening to 110 dB of a 26 kHz sine wave atop 60 dB of white noise. 
28 kHz detected: https://dx.doi.org/10.1121/1.2761883 or http://asa.scitation.org/doi/10.1121/1.2761883 .
And 24: http://doi.org/10.1250/ast.27.12
The studies are not music-focused, nor do they at all pretend to be.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Joe Bloggs on 2017-01-05 15:26:01
I got a bit confused when I googled "maximum phase filter". A webpage said it was the same as an all-pass filter, but when I look at the pictures (again) that was posted on that specific HA discussion, the picture clearly shows content has been cut off at the top. So, I probably misunderstood :-).

Maximum phase as a description bears no relationship to what frequencies are cut off or preserved (the latter concerns the shape of the filter in the frequency-amplitude dimension).  Rather it is a description of how the filter changes the [time relationships between frequencies] (phase response) as a result of whatever frequencies are cut off or preserved.

In the context of the impulse response of a brickwall lowpass filter, you have a relatively unchanged main impulse body, and frequencies in the transition band (i.e. the band of frequencies where the filter *transitions* from "not filtering at all" to "filtering everything out"), which appear "spread out" in a Fourier analysis.  The shape of this "tail" is affected by the phase characteristic of the filter.
(https://photos.smugmug.com/Other/Equalization/i-gQ5f4dV/0/O/brickwall%20phase%20illustration.png)

Here from top to bottom are the shapes of minimum, [something between minimum and linear], linear, and maximum phase brickwall filter impulse responses.

I said "nobody in their right mind would use a maximum phase filter for the brickwall" because there's no technical merit to it.  Linear phase preserves timing relationships between all frequencies, while minimum phase can be audibly superior (in the case of the transition band actually being at audible frequencies) because human auditory masking of nearly-concurrent sounds mostly occurs shortly *after* a sound event, not *before*, and minimum phase has the added benefit of being implementable with zero-latency IIR filters.  The 2nd filter's phase response (with a little ringing before and most after) may hide the ringing best (again, in the case of a transition band that is audible in the first place).
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: board on 2017-01-06 23:33:42
Thanks, Joe Bloggs. Is the linear phase filter the one most commonly used in DACs? I have the impression that the minimum-phase filter wasn't introduced into DACs until fairly recently, but that's just a hunch :-).

Ajinfla, I take it you think those reports are faulty, but the reason I added another sentence after the sentence you quoted was to ask the people who had read them to explain why they were faulty. I think Arny had read them. In case he wants to elaborate I would be very grateful :-). If not, I understand.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Wombat on 2017-01-07 00:23:24
Minimum phase filters were always there in CD players, back to 1984, but no one heard it ;)
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: ajinfla on 2017-01-07 00:45:26
Ajinfla, I take it you think those reports are faulty, but the reason I added another sentence after the sentence you quoted was to ask the people who had read them to explain why they were faulty.
I did.
Now perhaps you can specify which cherry picked "55 % to 74 % correct" results you want discussed....again, relevant to my explanation.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: board on 2017-01-07 20:44:24
Ajinfla, I take it you think those reports are faulty, but the reason I added another sentence after the sentence you quoted was to ask the people who had read them to explain why they were faulty.
I did.
Now perhaps you can specify which cherry picked "55 % to 74 % correct" results you want discussed....again, relevant to my explanation.


Well, all of them actually :-). But if that's a bit much to ask, maybe the ones with 61-74 % correct answers. I figured those would be the ones least prone to be dubbed "chance". Also, especially the Mizumachi study from 2015 if possible.
If any of you have the time to quickly sum up the faults of the rest, that would also be greatly appreciated. If not, I understand :-). Then I'll see if I can get access to them myself somehow, although I might not be able to tell how they're faulty.
By the way: About that study I linked to further up, comparing 44.1 and 88.2 kHz sample rates: they said "these people answered wrong too often for it to be chance, so therefore they could hear a difference." In one of the very first ABX tests I did, I got 2 out of 8 correct. So, by their logic, I could "hear the difference" (?), but back then I really could hear NO difference; I just guessed. So...
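For what it's worth, board's 2-out-of-8 example is easy to check with textbook binomial arithmetic (this is just an illustration of the chance argument, not a re-analysis of the study's actual data or sample sizes):

```python
# How (un)surprising is 2 correct out of 8 trials under pure guessing
# (p = 0.5 per trial)? Plain binomial arithmetic, nothing study-specific.
from math import comb

n, k, p = 8, 2, 0.5

# One-sided: probability of scoring this badly or worse by chance alone.
p_low = sum(comb(n, i) for i in range(k + 1)) * p**n

# Two-sided: probability of landing at least this far from n/2 in either
# direction (<= 2 or >= 6 correct), the relevant test if "consistently
# wrong" is to be read as evidence of discrimination.
p_two = 2 * p_low

print(f"P(<= {k} of {n} correct | guessing) = {p_low:.3f}")  # -> 0.145
print(f"two-sided p-value = {p_two:.3f}")                    # -> 0.289
```

A chance probability of roughly 14.5% (one-sided) or 29% (two-sided) is nowhere near conventional significance, so a single 2/8 run says nothing at all; any claim that low scorers "heard a difference but pressed the wrong button" would need far more trials than that to be distinguishable from guessing.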
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: board on 2017-01-07 22:16:05
Perhaps also the Jackson paper from 2016, if possible.
Thanks :-)!
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: ajinfla on 2017-01-07 23:50:18
Well, all of them actually :-). But if that's a bit much to ask, maybe the ones with 61-74 % correct answers. I figured those would be the ones least prone to be dubbed "chance". Also, especially the Mizumachi study from 2015 if possible.
Why did they have to install a "custom" supertweeter, made by a manufacturer for the study, in the cars? What was the XO frequency used for ST?
Why could they distinguish between the "Hi Re$" and Redbook, but not MP3?
What's your real interest, "Board"?
What would any of it have to do with what I spelled out previously about >"CD quality"?
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: board on 2017-01-09 22:17:54
Well, all of them actually :-). But if that's a bit much to ask, maybe the ones with 61-74 % correct answers. I figured those would be the ones least prone to be dubbed "chance". Also, especially the Mizumachi study from 2015 if possible.
Why did they have to install a "custom" supertweeter, made by a manufacturer for the study, in the cars? What was the XO frequency used for ST?
Why could they distinguish between the "Hi Re$" and Redbook, but not MP3?
What's your real interest, "Board"?
What would any of it have to do with what I spelled out previously about >"CD quality"?
I don't understand what you're asking me...?
I'm just curious as to why the papers are not considered credible - e.g. volume levels were not matched, the conversion from hi-res to CD quality was not done right, etc.
As I mentioned, the study comparing 44.1 to 88.2 seemed obviously faulty to me when they say that three people who consistently scored poorly are "expert listeners who could hear a difference, but just pressed the wrong button". As I'm not able to read the other papers, it would be great to know why they're faulty, but I understand that you all have other things to do, so only if you can find the time to summarize the points. It would be greatly appreciated :-).
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: ajinfla on 2017-01-09 23:49:43
I don't understand what you're asking me...?
Fishing questions. You know.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Arnold B. Krueger on 2017-01-10 03:19:13
Perhaps also the Jackson paper from 2016, if possible.
Thanks :-)!

Please consult the relevant threads here that were posted at the time. It is easier than having us all reconstruct them for your benefit just because you don't want to do a few minutes' research.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Kees de Visser on 2017-01-10 14:31:27
Would anyone here be interested in another attempt for a hi-res vs lo-res ABX test?
We have a few orchestral recordings scheduled in the coming weeks and I might (no promises) be able to find a few hours between sessions to do some listening tests. There are a few colleagues who think they can hear hi-res benefits, so that's a good start.
I would need some help though to set up a proper test. Any volunteers ?
The recording format is multichannel DXD (24/352.8 ) and I'm not sure if there are any ABX applications (Windows or Mac) that can handle that.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Roseval on 2017-01-10 15:50:59
Kees

If you do (I hope so) I suggest you open a separate topic, otherwise it will probably be lost in this 373-page (sorry, 374-page) topic.
I'm willing to volunteer but a bit at loss how I can be of help.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: board on 2017-01-11 19:02:45
Perhaps also the Jackson paper from 2016, if possible.
Thanks :-)!

Please consult the relevant threads here that were posted at the time. It is easier than having us all rconstruct them for your benefit just because you don't want to do a few minutes research.

Fair enough :-).
However, I just spent some time doing that now and didn't get much wiser. Theiss' paper is the one with the highest score, yet I couldn't find any info on it on this website, except that the name was mentioned twice in the same post in this very thread. I couldn't find much info on HA about the two Jackson papers either. They were discussed briefly in this topic as well, although I think it might have been one of those two papers where you mentioned that the transition band was too narrow.
Before searching, I had the impression my search would be fairly fruitless (maybe I searched recently and forgot these were the results), but I also had the impression that you, Arny, were the only person here on HA who read most of the papers mentioned in the meta-analysis after it was published.
But I understand that what I'm asking is a lot. I just hoped that you, Arny, could recap in a few words why each of the high-scoring papers was unreliable, as I think this might be useful for several of us - not only me right now, but also others who will look up the matter in the future.
But if you decline I understand :-).
Thanks anyway.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: ajinfla on 2017-01-11 19:18:35
Theiss' paper is the one with the highest score
Of what?

The two Jackson papers
Of what relevance?
Scores?
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: board on 2017-01-11 19:42:53
Theiss' paper is the one with the highest score
Of what?

The two Jackson papers
Of what relevance?
Scores?

I see now that Arny mentioned earlier that Theiss' paper is not musical content but impulses, although it did mention a symphony. Anyway, it said 74.51 % correct answers in differentiating hi-res from CD quality.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: ajinfla on 2017-01-11 21:54:31
Anyway, it said 74.51 % correct answers in differentiating hi-res from CD quality.
Nope, not what the Theiss/Hawksford paper said.

But since you like keeping score:

Quote
REISS
In summary.... the causes are still unknown
Score: Zero
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: board on 2017-01-12 16:41:24
Anyway, it said 74.51 % correct answers in differentiating hi-res from CD quality.
Nope, not what Theis Hawksford paper said.

But since you like keeping score:

Quote
REISS
In summary.... the causes are still unknown
Score: Zero

Okay, fair enough. As I'm not able to read the Theiss paper itself, only the meta-analysis, which I've only glanced at quickly, all I can say is that it gives 74.51 % in the table on page 5 of the meta-analysis. Anyway, I'll have a more thorough look at the meta-analysis.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: ajinfla on 2017-01-12 18:15:41
it says 74.51 % in the table on page 5
Yep. With 4 question mark columns.

And "supertweeters" again being "added".
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: board on 2017-01-12 18:35:45
But I suppose there's nothing wrong with adding supertweeters. After all, some speakers are constructed like that, although I'm aware they are rare (my first "serious" stereo system, a Sony midi system in the mid-90s, had supertweeters). And to hear any supersonic content, if there is any to be heard, which I'm still skeptical of, we obviously need equipment that's able to play it.
As for the question marks, almost all of the studies have some of those, although, obviously, it would be best if they didn't have any. The Theiss paper is after all deemed "neutral", but...
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: ajinfla on 2017-01-12 19:14:32
I suppose there's nothing wrong with adding supertweeters.
Supposition isn't science.

As for the question marks
They need answers, before "scoring".

Quote
REISS

In summary.....the causes are still unknown
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Joe Bloggs on 2017-01-13 21:45:42
I thought it was common agreement here that supersonics should be handled by separate [call them supertweeters if you may] so that IMD does not creep into audible frequencies?

But yes, if the meta-analysis itself put so many question marks on the study...
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: ajinfla on 2017-01-14 00:32:57
I thought it was common agreement here that supersonics should be handled by separate [call them supertweeters if you may] so that IMD does not creep into audible frequencies?
Not that I'm aware of.
None of the "add on" ST studies seem to mention any specifics of filtering (or IMD).
Plenty of "tweeters" are capable of >20 kHz response... a la the BS study using a SEAS unit in the Meridian speaker.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Porcus on 2017-01-14 01:30:01
I wonder how many times I have seen "BS" here, only to ask myself whether it's supposed to read as "B for bull" or "S for Stuart".
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: krabapple on 2017-01-14 09:02:02
Correct.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Wombat on 2017-01-18 17:44:21
I wonder how many times I have seen "BS" here, only to ask myself whether it's supposed to read as "B for bull" or "S for Stuart".
We should talk about MQA Bob in the future to prevent confusion. We also have a typical hand movement:
(http://www.stereophile.com/images/styles/600_wide/public/102916-Stuart3-600.jpg)
(http://pm1.narvii.com/5829/394a202419e02df1beeaadc9a5735499beeeccde_hq.jpg)
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: krabapple on 2017-10-12 23:09:42
AJ, Reiss has replied to your AES comment


https://secure.aes.org/forum/pubs/journal/?ID=591
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Arnold B. Krueger on 2017-10-12 23:27:17
AJ, Reiss has replied to your AES comment


https://secure.aes.org/forum/pubs/journal/?ID=591

and rejected  mine without comment or notification.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: ajinfla on 2017-10-13 01:12:56
Well, unless I can't see it, his only response to my question was the next day... in 2016.
Even if it in no way addressed what I actually asked.
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: krabapple on 2017-10-13 05:13:22
Oh, I didn't know you'd seen it already (2016 reply)
Title: Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati
Post by: Arnold B. Krueger on 2017-10-13 12:11:32
Well. unless I can't see it, his only response to my question, was the next day....in 2016.
Even if it in no way addressed what I actually asked.

I think that Dolby's stranglehold on the AES is now so strong that the resident golden ears can just quash people who see through their hype and trickery. They used to talk around probing questions, but power is power.