
Topic: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluation

Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati

Reply #325
The consequence of a narrower transition band is the generation of excessive and unnatural artifacts, including variations in the far more audible bandpass (LF) region and ringing at or near the Nyquist frequency (22 kHz)... my preliminary tests show that digital filters get really squirrely with transition bands this excessively narrow. The primary artifacts turn out to be surprisingly broad peaks and dips on the order of 2-5 dB in the bandpass region, which is to say the normal audio band going down to 1 kHz and below.

Could you elaborate on this? There is no reason for a narrow transition band to affect the pass band like you report. Might it be that your filter design is less than competent?

Sounds like a bad filter to me too. E.g. commonly-used filter design methods include: windowed-sinc, where passband ripple is largely independent of transition band width (in fact, it closely tracks the stopband ripple); and Parks-McClellan, where passband ripple can be set as small (or large) as you like, independently of both stopband ripple and transition band width.

E.g., passband ripple of SSRC (very narrow transition band):
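[graph: SSRC passband ripple, from the SRC comparison site]

To make the windowed-sinc claim concrete, here is a minimal Octave sketch (assuming the signal package is installed) that designs two Kaiser-windowed sinc low-pass filters to the same 100 dB stopband spec, one with a wide and one with a very narrow transition band, and prints the worst-case passband ripple of each. The corner frequency and widths are illustrative choices, not taken from any of the studies:

Code:
pkg load signal

fs = 96000;  fc = 22050;          % Sample rate and corner frequency (Hz)
for tbw = [1000, 50]              % Wide vs. very narrow transition band (Hz)
  d = 10^(-100/20);               % Ripple spec for 100 dB stopband attenuation
  [n, w, beta, t] = kaiserord ([fc-tbw/2, fc+tbw/2], [1, 0], [d, d], fs);
  b = fir1 (n, w, kaiser (n+1, beta), t, "noscale");
  [h, f] = freqz (b, 1, 2^17);
  pb = abs (h(f/pi*fs/2 <= fc - tbw/2));   % Passband portion of the response
  printf ("tbw = %4d Hz: %6d taps, passband ripple %.5f dB\n", ...
          tbw, n+1, 20*log10 (max (pb) / min (pb)));
endfor

If windowed-sinc design behaves as described, the printed ripple should stay at a tiny fraction of a dB for both widths; only the tap count explodes as the transition band narrows.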

Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati

Reply #326
The consequence of a narrower transition band is the generation of excessive and unnatural artifacts, including variations in the far more audible bandpass (LF) region and ringing at or near the Nyquist frequency (22 kHz)... my preliminary tests show that digital filters get really squirrely with transition bands this excessively narrow. The primary artifacts turn out to be surprisingly broad peaks and dips on the order of 2-5 dB in the bandpass region, which is to say the normal audio band going down to 1 kHz and below.

Could you elaborate on this?

While I have the results that I posted, I'm not satisfied that they are the best I can do with a reasonable effort, so I'm teaching myself Octave, which seems to have more of the tools I need. In particular, it has tools for designing and simulating a wider variety of digital filters, including some that seem to be exactly like the digital filters in DACs, ADCs, and sample rate converters, whether software or hardware. That is why I labelled my results "preliminary".

What I saw in the tests I did is that simulating filters with extremely narrow transition bands created large, broad dips and peaks at frequencies well below the design frequencies of the filters. The filters I was working with seemed well-behaved as long as I used them in ways that seemed typical to me.

Quote
There is no reason for a narrow transition band to affect the pass band like you report.

That's what I thought at first, but I tried my tests with a variety of filters from a variety of sources, not just one filter from one implementer of filters.

Quote
Might it be that your filter design is less than competent?

The filters behaved well with typical sets of parameters, IOW with transition bands at least several hundred Hz wide in filters operating around 20 kHz. The trouble is, one of the papers talked about training listeners with filters whose transition bands were only a few Hz wide.

It's not unusual for well-behaved designs to become unstable when operated with extreme sets of parameters. I've seen this happen before, and not just with filters. I have found that every good design methodology has a natural range of effective performance, but you can stretch things too far. This particular set of experimenters doesn't seem too worried about such mundane things, in many areas, not just filters. Since they were working with abstractions, they could easily ignore whatever didn't interest them. For example, they could have published typical response curves for all the modes in which they used the filters and provided exact filter characteristics and designs, but they didn't.

Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati

Reply #327
The consequence of a narrower transition band is the generation of excessive and unnatural artifacts, including variations in the far more audible bandpass (LF) region and ringing at or near the Nyquist frequency (22 kHz)... my preliminary tests show that digital filters get really squirrely with transition bands this excessively narrow. The primary artifacts turn out to be surprisingly broad peaks and dips on the order of 2-5 dB in the bandpass region, which is to say the normal audio band going down to 1 kHz and below.

Could you elaborate on this? There is no reason for a narrow transition band to affect the pass band like you report. Might it be that your filter design is less than competent?

Sounds like a bad filter to me too. E.g. commonly-used filter design methods include: windowed-sinc, where passband ripple is largely independent of transition band width (in fact, it closely tracks the stopband ripple); and Parks-McClellan, where passband ripple can be set as small (or large) as you like, independently of both stopband ripple and transition band width.

E.g., passband ripple of SSRC (very narrow transition band):


Interesting results. Since SSRC is PD, this should be easy to duplicate were you to provide all of the relevant parameters. Or is there some reason why they need to remain secret?


Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati

Reply #329
Actually I understand transition band, ADCs, DACs and SRCs quite well. Quiz me.
Ok. What are your concerns with the "typical" CDs you buy?
Fail

I observe that this person does not, in general, answer reasonable questions that other people ask. You can't really discuss issues with him; instead he tries to keep you busy dealing with his insulting and imaginative false claims.

Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati

Reply #330
Quote from: Arnold B. Krueger
Interesting results. Since SSRC is PD, this should be easy to duplicate were you to provide all of the relevant parameters. Or is there some reason why they need to remain secret?
The graph above is from the src comparison site, but there's nothing special about SSRC w.r.t. its filter design: it's simply kaiser-windowed sinc (same for sox and libsamplerate, btw).

Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati

Reply #331
Quote from: Arnold B. Krueger
Interesting results. Since SSRC is PD, this should be easy to duplicate were you to provide all of the relevant parameters. Or is there some reason why they need to remain secret?
The graph above is from the src comparison site, but there's nothing special about SSRC w.r.t. its filter design: it's simply kaiser-windowed sinc (same for sox and libsamplerate, btw).


The first post said it was from SSRC, but now that I ask for details, they are not forthcoming?

I'm  looking for factual evidence, not mere speculation about nameless SRC software.

Is this from Bob Barker or the Meridian gang under a different alias?

Besides, there's more than frequency response to filters. There could be other things such as phase response that make this currently nameless filter audibly flawed.  Next time, no mystery meat, please?

Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati

Reply #332
Quote from: bobbaker
I buy typical CDs, not fastidiously produced ones. I’m speaking to my interest, not anything stated by others.

Value for me includes what is relevant to buying music produced with the imperfect choices and mistakes that human sound engineers would likely make. If typical “hi-res” is audibly better than “typical” CDs, that may be important to some. It would be important to me because I’m interested, but on its own it would not drive my buying decisions. In combination with, say, price, it could.

But there's the rub.  What if 'typical' hi rez (or at least, the hi rez used to establish its superiority to CD) is more 'fastidiously produced'?

Is the difference really down to 'hi rez vs CD' formats in that case...or simply, production practices?

Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati

Reply #333
no mystery meat, please?
Mystery meat?  How a long-time forum member can't know anything about SSRC is the true mystery.

A simple Google search would have gotten you what you needed, but at any rate...
https://github.com/shibatch/SSRC

Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati

Reply #334
Let's switch tack. Arny, here's some Octave code that I think does what you want:
Code:
pkg load signal

% Design params:
fn=48000  % Nyquist freq. (Hz)
fc=22050  % Corner freq. (Hz)
tbw=20    % Transition band width (Hz)
attn=100  % Stopband attenuation (dB)

% Make filter:
d=10^(-attn/20)
[n, w, beta, ftype] = kaiserord ([fc-tbw/2, fc+tbw/2], [1, 0], [d d], fn*2);
b = fir1 (n, w, kaiser (n+1, beta), ftype, "noscale");

% Plot magnitude response:
[h f] = freqz(b,1,2^18); plot(f/pi*fn, 20*log10(abs(h))); grid; pause

% Zoom on pass band:
axis([0 1.1*fc -.1 .1]); pause

% Zoom on transition band:
axis([fc-50 fc+50 -12*beta 10]); pause
It implements a 96000 -> 44100 brick-wall decimation filter; passband ripple is negligible.
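To probe the narrow-transition case under discussion, shrink tbw (say, to 5 Hz) and re-run. By the usual Kaiser order estimate, n ≈ (attn − 8) / (2.285 · 2π · tbw / (2·fn)), the tap count grows roughly in inverse proportion to the transition band width (around 30,000 taps already at tbw = 20 Hz), while the passband ripple spec stays the same.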

Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati

Reply #335

The graph above is from the src comparison site, but there's nothing special about SSRC w.r.t. it's filter design: it's simply kaiser-windowed sinc (same for sox and libsamplerate, btw).

I tested SSRC with default parameters, specifying only that the 96 kHz input file be downsampled to 44,100 Hz.

I tested it with 96 kHz sample rate files containing, in one case, multitones (on 100 Hz centers) and, in the other, a swish, and found the transition band to be about 900 Hz wide. The ripple was indeed very low, but the transition band width was at the lower end of the normal range.
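For anyone who wants to repeat the measurement, here is a minimal Octave sketch of the multitone probe; the file name, duration, and amplitude are illustrative choices, with only the 100 Hz centers taken from the description above:

Code:
fs = 96000;  T = 2;                     % Sample rate (Hz) and duration (s)
t  = (0:fs*T-1)' / fs;
f0 = 100:100:fs/2 - 100;                % Tones on 100 Hz centers up to near Nyquist
x  = zeros (size (t));
for f = f0
  x += sin (2*pi*f*t + 2*pi*rand ());   % Random phases keep the crest factor sane
endfor
x = 0.5 * x / max (abs (x));            % Normalize, leaving some headroom
audiowrite ("multitone96.wav", x, fs, "BitsPerSample", 24);
% Downsample multitone96.wav to 44100 Hz with the resampler under test, then
% compare per-tone levels before and after to locate the transition band.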

Next: the Octave file you kindly provided.

Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati

Reply #336
no mystery meat, please?
Mystery meat?  How a long-time forum member can't know anything about SSRC is the true mystery.

Who might that be?

Couldn't be me, because I said it was PD and easy to download. What I didn't say but could have, is that I had used it in the past, when my information needs were not as detailed.

The mystery was the details of the test results.  The plot provided lacked the detail  required to accurately determine the transition band.

Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati

Reply #337
It might have been you, but you've made it clear that I was mistaken.  Still, please don't be shocked when your theory about digital filters with narrow transition bands falls apart.

Now, maybe you can tell me how any of this belongs in a topic about Reiss's meta analysis.

Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati

Reply #338
It might have been you, but you've made it clear that I was mistaken.  Still, please don't be shocked when your theory about digital filters with narrow transition bands falls apart.

I'm surprised that I have to remind an omniscient being about how science works. You develop hypotheses, you do experiments, you make observations, you critique your work, and when you think you have some kind of final answer you present it as such; until then you label it "preliminary". If the final evidence supports the hypothesis, that is one possible outcome, and you rejoice in it because you learned something; if the final evidence fails to support or even contradicts the hypothesis, you rejoice in that too, because you learned something.

Phrases like "fall apart" suggest to me some kind of emotional attachment to one outcome or the other. Who are you talking about?

Quote
Now, maybe you can tell me how any of this belongs in a topic about Reiss's meta analysis.

Again, that this is phrased as a question suggests a failure to be omniscient.

One very important characteristic of a meta analysis is that the studies that made up the meta analysis are more than a tiny bit related to each other. 

In the midst of this apparent failure of omniscience, I'm forced to remind people that a study was done of the program material used in the various studies that were Reiss's 20-odd final choices for his meta-analysis, and that material was actually quite varied, in fundamental ways. All but one of the studies were so poorly described and documented that recreating them would be impossible for an independent worker. IMO, not good candidates for merging in a meta-analysis.

Another issue that is fundamental to this group of studies is the nature of the narrow-band version of the signal that was compared to the wide-band, so-called high resolution form of the same basic audio signal. This signal has at least three important characteristics: the high-frequency limit of the passband, the width of the transition band, and the low-frequency limit of the stop band, which is presumably ultrasonic for any format that pretends to be sonically transparent. Any one of these three can be deduced from the other two (for example, the stopband's low edge is simply the passband's high edge plus the transition band width), which is important because often only two of them are well documented.

If these studies varied significantly in terms of the basic nature of the so-called low resolution signal, then they are again poor choices for a meta-analysis. The word significantly is logically linked to audibility, so this criticism of Reiss's alleged meta-study turns on whether the low pass filters used in the component studies were similar enough for the studies to be lumped together in the meta-study.

Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati

Reply #339
One possible consequence of a steep filter is higher intersample peaks. The paper below suggests that not all audio players have enough headroom to deal with them.
https://service.tcgroup.tc/media/Level_paper_AES109(1).pdf

Some DAC makers, like Benchmark Media, are aware of the issue, as shown in this thread.
https://hydrogenaud.io/index.php/topic,98753.0.html

Some users believe that they should not alter volume in the digital domain and prefer to use an analog volume control, without knowing or caring about intersample peaks.

dBTP in audio metering was introduced to deal with this issue, but all I know about it is that 4x upsampling is used at 44/48k and 2x upsampling at 88/96k to estimate the true peak; I don't know the other details. The popularity of dBTP metering in the audio industry is also unknown to me.
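For what it's worth, the 4x case is easy to sketch in Octave (a minimal sketch assuming the signal package; the input file name is just a placeholder):

Code:
pkg load signal

[x, fs] = audioread ("track.wav");   % 44.1 or 48 kHz material (placeholder name)
y  = resample (x, 4, 1);             % 4x oversampling to expose inter-sample peaks
sp = 20*log10 (max (abs (x(:))));    % Plain sample peak (dBFS)
tp = 20*log10 (max (abs (y(:))));    % True-peak estimate (dBTP)
printf ("sample peak %6.2f dBFS, true peak %6.2f dBTP\n", sp, tp);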

Of course I can always use ABX to test it myself, but I would like to know whether there are any studies or papers about the audibility of such issues.

Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati

Reply #340
If resampling produces inter-sample overs, the source should already contain inter-sample overs. Distributed music may, as a last step, get a slight volume drop to make this non-obvious to Audacity cowboys. It is often talked about, but I still do not have a single sample that sounds clipped from the resampling process.
Is troll-adiposity coming from feederism?
With 24bit music you can listen to silence much louder!

Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati

Reply #341
Ah, but there's an important "advantage," especially for trained listeners: it allows them to do slightly better at distinguishing a difference than flipping a coin. The apparently unknown cause of this difference isn't supposed to matter.

Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati

Reply #342
In case SoundAndMotion, SoundAndMotion2, and Jakob1863 are still following along...
https://hydrogenaud.io/index.php/topic,107124.msg883558.html#msg883558

Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati

Reply #343
Hi all

I’m a newbie here so please be gentle. I have followed this thread with interest for nearly a year. As I understand it, a lot of the discussion here spawns from the meta-analysis study, which may be flawed due to the inclusion of several studies which have been shown to lack methodological rigour or which have later been refuted.

There is a lot of discussion about potential effects of different ADCs and DACs, resampling and so on.  I’m no scientist or audio engineer but what I do not understand is why doesn’t Meridian or a well-resourced independent body (eg the AES) fund and construct another proper double blind listening test to settle this once and for all?

The reason I ask is that I thought the whole debate about hi res vs CD quality was resolved back in 2007 with the Meyer and Moran study in the link below, noting that this study was built upon several studies before it which established the transparency of the CD.

Where is the critique of the Meyer and Moran study which led to the arguably inferior meta-analysis study we are discussing here? I would have thought that a year-long, multi-trial study using trained musicians, recording producers, audiophiles as well as the average Joe Blow, some using their own music in their own homes on their own equipment, would be definitive.

Surely if Meridian or another body disagrees with the 2007 paper, they could spell out why and then design another similar year-long, multi-trial, multi-subject experiment to back their cause?

BTW, this is not a rhetorical question, it is something that I genuinely find puzzling.

http://drewdaniels.com/audible.pdf

Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati

Reply #344
what I do not understand is why doesn’t Meridian or a well-resourced independent body (eg the AES) fund and construct another proper double blind listening test to settle this once and for all?

Fear.
The most important audio cables are the ones in the brain

Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati

Reply #345
As I understand it, a lot of the (recent) discussion here spawns from the meta-analysis study, which may be flawed due to the inclusion of several studies which have been shown to lack methodological rigour or which have later been refuted.

The problem with the meta-analysis is far more basic than that. The meta-analysis itself involves 20-ish different attempts (since 1980, most after Y2K) to show an audible benefit to so-called high resolution audio. The vast majority of them failed to obtain results that were significantly positive for their thesis. The author took advantage of the well-known fact that you can combine the results of individually failed tests to create the appearance of significant results. However, combining tests is valid only if the individual tests are themselves valid, and also very similar to each other. Examination of the papers describing the tests shows that this is not the case. So the meta-analysis itself failed during the preliminary selection of tests, even before the detailed analysis was started.

A second problem is that the author seems to have based his claims about the significance and relevance of his results quite heavily on statistical significance. As pointed out above, impressive numbers for statistical significance can be fabricated simply by running a very large number of trials of an experiment whose results are largely random. An experiment with more than 150 trials can reach 95% statistical significance with fewer than 60% correct answers, which is rather close to the 50% correct you get if the listeners in an A/B test are guessing purely randomly. I don't think that many consumers are going to do more than 100 listening trials to obtain "evidence of their ears" that something sounds better.
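The arithmetic is easy to check in Octave (a minimal sketch, assuming the statistics package for binocdf):

Code:
pkg load statistics

n = 150;                                % Number of forced-choice A/B trials
p = 1 - binocdf ((0:n) - 1, n, 0.5);    % P(at least k correct | pure guessing)
k = find (p < 0.05, 1) - 1;             % Smallest k significant at the 5% level
printf ("%d of %d correct (%.1f%%) already reaches p < 0.05\n", k, n, 100*k/n);

For 150 trials this comes out at around 86 correct answers, i.e. roughly 57%.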


Quote
There is a lot of discussion about potential effects of different ADCs and DACs, resampling and so on.

General consensus: They sound so similar that they don't matter.

Quote
  I’m no scientist or audio engineer but what I do not understand is why doesn’t Meridian or a well-resourced independent body (eg the AES) fund and construct another proper double blind listening test to settle this once and for all?

Now, each of those 20-ish tests was no doubt a test that its proponents thought would be the be-all, end-all definitive test that would finally settle the question affirmatively and forever, but that is not how the data worked out. Therefore, we have something like 20 years of history and more than 20 tests whose results strongly suggest that the next test, or 5 tests, or 10 tests, will not finally settle the question either.

Quote
The reason I ask is that I thought the whole debate about hi res vs CD quality was resolved back in 2007 with the Meyer and Moran study in the link below, noting that this study was built upon several studies before it which established the transparency of the CD.

Where is the critique of the Meyer and Moran study which led to the arguably inferior meta-analysis study we are discussing here? I would have thought that a year-long, multi-trial study using trained musicians, recording producers, audiophiles as well as the average Joe Blow, some using their own music in their own homes on their own equipment, would be definitive.

Surely if Meridian or another body disagrees with the 2007 paper, they could spell out why and then design another similar year-long, multi-trial, multi-subject experiment to back their cause?

BTW, this is not a rhetorical question, it is something that I genuinely find puzzling.

http://drewdaniels.com/audible.pdf


I believe you can download an AES paper that contains a critique of Meyer and Moran's study here:

http://www.aes.org/e-lib/browse.cfm?elib=18296

The paper is claimed to be "open access" which means that you should be able to download it for free.

It is as follows:

"
2.2 Meyer 2007 Revisited

Meyer 2007 deserves special attention, since it is
well-known and has the most participants of any study, but
could only be included in some of the meta-analysis in Sec.
3 due to lack of data availability. This study reported that
listeners could not detect a difference between an SACD or
DVD-A recording and that same recording when converted
to CD quality. However, their results have been disputed,
both in online forums (www.avsforum.com,
www.sa-cd.net, www.hydrogenaud.io and secure.aes.org/forum/pubs/journal/)
and in research
publications [11, 76].
First, much of the high-resolution stimuli may not have
actually contained high-resolution content for three reasons;
the encoding scheme on SACD obscures frequency
components above 20 kHz and the SACD players typically
filter above 30 or 50 kHz, the mastering on both the
DVD-A and SACD content may have applied additional
low pass filters, and the source material may not all have
been originally recorded in high resolution. Second, their
experimental set-up was not well-described, so it is possible
that high resolution content was not presented to the
listener even when it was available. However, their experiment
was intended to be close to a typical listening experience
on a home entertainment system, and one could argue
that these same issues may be present in such conditions.
Third, their experiment was not controlled. Test subjects
performed variable numbers of trials, with varying equipment,
and usually (but not always) without training. Trials
were not randomized, in the sense that A was always the
DVD-A/SACD and B was always CD. And A was on the
left and B on the right, which introduces an additional issue
that if the content was panned slightly off-center, it might
bias the choice of A and B.

Meyer and Moran responded to such issues by stating
[76], “... there are issues with their statistical independence,
as well as other problems with the data. We did
not set out to do a rigorous statistical study, nor did we
claim to have done so. ...” But all of these conditions...
"

(please see linked article for the rest of this critique)

The biggest problem with the Meyer and Moran paper is that it presumed that the recording industry is honest and transparent and can be taken at its word when it advertises a product as being high resolution. It was later discovered that the record industry had misrepresented products with uncorrectably low resolution provenance as being high resolution. Since Meyer and Moran did not individually qualify the actual content of each recording by reliable means, it is probable that on the order of 50% of the recordings their study was based on were actually CD quality or worse.

Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati

Reply #346
what I do not understand is why doesn’t Meridian or a well-resourced independent body (eg the AES) fund and construct another proper double blind listening test to settle this once and for all?
Fear.


OK, so it was a "clever" single-word answer.  Whilst maybe, just maybe, those guys do not want to take the risk, I think it is more that they have totally bought into their delusions. They have no room for doubt. And if they fail a test, it was the test that was wrong.
The most important audio cables are the ones in the brain


Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati

Reply #348
what I do not understand is why doesn’t Meridian or a well-resourced independent body (eg the AES) fund and construct another proper double blind listening test to settle this once and for all?
Fear.


OK, so it was a "clever" single-word answer.  Whilst maybe, just maybe, those guys do not want to take the risk, I think it is more that they have totally bought into their delusions. They have no room for doubt. And if they fail a test, it was the test that was wrong.

I can understand that from Meridian's perspective, but it seems strange that the AES, a university or some other independent body has not done so. I would have thought that from a science perspective there would be a lot of keen researchers jumping to do something like it.

Of course, there is an argument that there is nothing in digital audio technicalities or the science which would suggest that the difference between 16/44 and hi res would be audible to humans (assuming all variables other than the bit depth and sample rates are controlled), along with the Meyer and Moran study and the studies preceding it having settled this issue for most reasonable-minded people.

I know it doesn't matter how well digital theory is understood and how many well-designed tests are conducted; it will still not convince those with strong beliefs or faith, or those with a commercial agenda to believe or spread misinformation. However, the (overstated) criticism of the Meyer and Moran paper is that most of the listening material was not from a hi res master. That is where the focus should be, i.e. replicating what was a well-designed study but ensuring all source material is actually from a hi res source.

That of course glosses over the more important, indirect shadow test: for many years the general public had been purchasing and playing those SACDs and DVD-As, and yet not one golden-eared reviewer or audiophile picked them out as not being hi res. In the end it was measurements, rather than listening tests, which confirmed they were not from hi res masters.

Re: Next page in the hi-rez media scam: A Meta-Analysis of High Resolution Audio Perceptual Evaluati

Reply #349
I can understand that from Meridian's perspective, but it seems strange that the AES, a university or some other independent body has not done so. I would have thought that from a science perspective there would be a lot of keen researchers jumping to do something like it.
There would have to be funding. Neither the AES nor other independent bodies have the funds and/or the motivation to make this happen. The AES could be expected to organize it, provided that the money comes from somewhere. The Meyer/Moran study is already exceptional in this regard; you can't expect this to happen easily.

Quote
I know it doesn't matter how well digital theory is understood and how many well designed tests are conducted, it will still not convince those with strong beliefs or faith and those with a commercial agenda to believe or spread misinformation.  However, the (overstated) criticism of the Myer and Moran paper is that most of the listening material was not from a hi res master.  That is where the focus should be, ie replicating what was a well designed study but ensuring all source material is actually from a hi res source.
The understanding of digital theory is one thing, but more important is the understanding of human hearing. Both are developed enough that there really shouldn't be a question about HRA being audible. This is also one of the reasons why there aren't more scientific studies. Amongst the more clued up scientists, there isn't much hope of finding anything that differs significantly from what we already know. The whole thing is wishful thinking on behalf of those who see a chance to make a buck.

The meta-analysis, and also the criticism of Meyer/Moran, only show that to be true. To criticise Meyer/Moran because they included material that was "not really" Hi-Res is quite hypocritical. There is still no clear definition of what Hi-Res means today, and that's 10 years after their study. On what grounds should they have drawn a line? They did the sensible thing: They took the material that was available as Hi-Res material, so they took what other people said was Hi-Res, thereby avoiding a decision that would have been controversial no matter how it went. Their result means that what the market presented to the consumer as being Hi-Res was indistinguishable from the same material converted down to 44.1/16. It shows convincingly that the Hi-Res market was a fraud 10 years ago. My opinion is that this is at least as true today as it was back then.