Topic: AES 2009 Audio Myths Workshop

AES 2009 Audio Myths Workshop

Reply #275
Ethan, how can you say that a $25 converter card that only publishes specs at 1kHz (presumably because the response at other frequencies is all over the map) is better quality than my $100,000 Studer that has very tight published (and verified) specs from 20 Hz to 20kHz?


Luckily, many people out there prefer acting on their own authority rather than that of a price tag. There are over 20,000 hits that might hint that the extra $99,975 just doesn't make that much of a difference when you don't know how to spend it wisely.


Reply #276
The bottom line is that simple definitions of accuracy may themselves be inaccurate.

That is an extremely interesting statement. But where does it leave us?


Going around in circles?



In the midst of an exciting, evolving technology.


Two very good answers - which brings us to my point.

It is my contention that the state of the art of audio measurement and the state of the science of human audio perception at this time are not accurate enough to really adequately quantify what we're discussing.

I'm not saying that the electronic measurement equipment isn't good enough - I'm sure it can measure the signal quite well. The problem is that we don't know how to interpret the measurements properly and in some cases we may not understand what needs measuring.

In terms of audio perception some aspects are fairly well understood, but other aspects - specifically how the brain handles perceptual information and how the perceptual systems encode information for transmission from the primary sensory organs to the brain are currently the subject of some very interesting research.

I think that in some ways we're attempting to do the equivalent of brain surgery with a dull pocketknife. When people using a conventional testing methodology such as ABX say that they can find no perceptual difference between an inexpensive consumer-quality converter and a $10,000 mastering converter, but an overwhelming majority of professional engineers can pick out work done with each (and unanimously prefer the professional unit), that tells me that there's something wrong with the testing methodology. (I'm not saying that ABX is an invalid or unuseful tool, I'm just saying that it has limitations.)

Science is supposed to be based on observation. We observe the world around us. We study our observations. We construct hypotheses explaining our observations and compare them to the behavior of the world; when they appear to fit they become standard (more or less) theory. We conduct tests of the theory and, as our technology progresses enough to provide sufficiently accurate tests, we prove the theory and it becomes law, or, unproven, it remains theory until a better explanation comes along.

We do not throw out observation simply because it doesn't agree with conventional wisdom, especially when conventional wisdom is to a large degree based on simplification. The Catholic Church tried that with Galileo.

A real scientist tries to find out those things that he doesn't know. He doesn't simply point to the established body of knowledge and treat it like scripture. That type of person is a pedant, not a scientist.

(Please note that I am not endorsing ghosties, faeries, wizards, or  expensive sculptures that make your dentist's stereo sound better......)


Reply #277
Ethan, how can you say that a $25 converter card that only publishes specs at 1kHz (presumably because the response at other frequencies is all over the map) is better quality than my $100,000 Studer that has very tight published (and verified) specs from 20 Hz to 20kHz?


Luckily, many people out there prefer acting on their own authority rather than that of a price tag. There are over 20,000 hits that might hint that the extra $99,975 just doesn't make that much of a difference when you don't know how to spend it wisely.


20,000 hits? Are you saying there have been 20,000 hit records mixed on a Soundblaster? 
Or that there have been 20,000 hits on the website publishing the spec? 

Actually I only paid 5 grand for my Studer, but it did cost 100 grand new.

What's interesting about those specs is what they DON'T say. There is no spec for HD except at 1 kHz. There are no distortion specs at all, for any kind of distortion, taken at levels lower than -3 dBFS. There are other problems, but that's enough for a start.

I will say that those specs do look very good on paper at first glance.


Reply #278
I think that in some ways we're attempting to do the equivalent of brain surgery with a dull pocketknife. When people using a conventional testing methodology such as ABX say that they can find no perceptual difference between an inexpensive consumer quality converter and a $10,000 mastering converter but an overwhelming majority of professional engineers can pick out work done with each (and unanimously prefer the professional unit) that tells me that there's something wrong with the testing methodology. (I'm not saying that ABX is an invalid or unuseful tool, I'm just saying that it has limitations.)
 
Why would you expect ABX testing to give analogous results to situations that don't control any number of a multitude of factors?
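For context on what an ABX result actually controls: the significance of n correct answers out of N trials is just a one-sided binomial test against guessing. A minimal illustrative sketch (the 12/16 figure is only the conventional borderline, not anything from this thread):

```python
from math import comb

def abx_p_value(correct, trials):
    """One-sided binomial p-value: the probability of scoring at
    least `correct` out of `trials` ABX trials by pure guessing."""
    return sum(comb(trials, k)
               for k in range(correct, trials + 1)) / 2 ** trials

# 12/16 is the oft-quoted borderline for p < 0.05
print(abx_p_value(12, 16))  # ~0.038
```

The point of the controlled design is exactly that this number means something; uncontrolled sighted comparisons have no comparable statistic.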
elevatorladylevitateme


Reply #279
...When people using a conventional testing methodology such as ABX say that they can find no perceptual difference between an inexpensive consumer quality converter and a $10,000 mastering converter but an overwhelming majority of professional engineers can pick out work done with each (and unanimously prefer the professional unit)

Those sound "engineers" are not real engineers. They are not likely to have engineering training; otherwise they would trust ABX instead of their gut feelings.

...that tells me that there's something wrong with the testing methodology. (I'm not saying that ABX is an invalid or unuseful tool, I'm just saying that it has limitations...

That is the same as saying the world is flat.


 


Reply #280
I think that in some ways we're attempting to do the equivalent of brain surgery with a dull pocketknife. When people using a conventional testing methodology such as ABX say that they can find no perceptual difference between an inexpensive consumer quality converter and a $10,000 mastering converter but an overwhelming majority of professional engineers can pick out work done with each (and unanimously prefer the professional unit) that tells me that there's something wrong with the testing methodology. (I'm not saying that ABX is an invalid or unuseful tool, I'm just saying that it has limitations.)

When they make such claims without an ABX test, to me it would seem the "engineers" are talking out of their arse. If the difference is so obvious, it would be no problem to do it double-blinded, no? The limitation of ABX seems to be just that it doesn't give "engineers" and their following their special feeling of being better than the rest.

Quote
We do not throw out observation simply because it doesn't agree with conventional wisdom, especially when conventional wisdom is to a large degree based on simplification.

We do this all the time; it's called the placebo effect. Not all observations are equal.

Quote
The Catholic Church tried that with Galileo.

That's funny coming from someone who falsely started to preach about the limitations of ABX. Make no mistake about it, you are the church here.

Quote
A real scientist tries to find out those things that he doesn't know. He doesn't simply point to the established body of knowledge and treat it like scripture. That type of person is a pedant, not a scientist.

I think you have shown here you don't know the first thing about scientific method.
"We cannot win against obsession. They care, we don't. They win."


Reply #281
Ethan, I am "glad of heart" to see you pulling back your statements into more tightly scoped contexts, where they belong.  Thank you!

Thank you Ethan for moderating your position. I didn't expect you would, but I'm happily pleased that you have.


dwoz, thank you for complimenting Ethan on something he did not do (nor need to do).  <-- See how a not-so-cleverly-disguised compliment works?


JJ,

Reading over David's posts, I just thought of something. I have been in the small backroom of a music store testing out a speaker system where, at 2 on the amp, I could get hearing damage. But when using that system outdoors as the PA for a band, 10 on the amp was nowhere near loud enough: simply not enough amp + speaker power for outdoors. However, would the distortions introduced by lossy codecs be more evident in this type of situation? Play a 128 kbps MP3 file through that system turned up to 10 outdoors; would you hear the lossy distortions at a reasonable distance from the speakers?


Reply #282
Quote
BTW, did you recap the power supply and do the bias adjustment on the Threshold? Because if you didn't you didn't give the amp a fair evaluation.
That would probably be a TOS 8 violation.
Perhaps not if the measured difference signal is > -120 dBFS?
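For concreteness, the "measured difference signal" idea is a null test, and it can be sketched in a few lines. This is an illustrative helper only; it assumes the two signals are already time-aligned and gain-matched, which is the hard part in practice:

```python
import numpy as np

def null_test_residual_dbfs(a, b):
    """RMS level of the difference (null) signal in dBFS.
    Assumes a and b are float arrays with full scale = 1.0,
    already time-aligned and gain-matched (the hard part)."""
    residual = np.asarray(a) - np.asarray(b)
    rms = np.sqrt(np.mean(residual ** 2))
    return -np.inf if rms == 0.0 else 20.0 * np.log10(rms)

# Toy check: a tone vs. itself plus noise about 120 dB down
t = np.arange(48000) / 48000.0
x = 0.5 * np.sin(2 * np.pi * 1000.0 * t)
y = x + 1e-6 * np.random.default_rng(0).standard_normal(len(x))
print(null_test_residual_dbfs(x, y))  # roughly -120 dBFS
```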


Reply #283
However, would the distortions introduced in lossy codecs be more evident in this type of situation? Play a 128kbps mp3 file through that system turned up to 10 outdoors, would you hear the lossy distortions at a reasonable distance from the speakers?
In the end it's the SPL at the ear that could make the difference, not necessarily the power of the PA system. You can also go pretty loud with headphones.
I expect more effect from the room acoustics (reverberation) on the masking properties of the codec. It should be possible to simulate that at home with a reverberation (e.g. convolution) plugin.
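That at-home simulation can be roughed out with plain convolution. The exponentially decaying noise below is a hypothetical stand-in for a measured room impulse response; in practice you would load a real IR from a file and convolve the decoded audio with it:

```python
import numpy as np

def add_room(signal, impulse_response):
    """Convolve a decoded signal with a room impulse response to
    hear how reverberation interacts with codec artifacts."""
    return np.convolve(signal, impulse_response)

sr = 48000
rng = np.random.default_rng(1)
# Hypothetical 0.5 s exponentially decaying noise as a stand-in
# IR; a measured room IR loaded from a WAV would be used instead.
t = np.arange(sr // 2) / sr
ir = rng.standard_normal(len(t)) * np.exp(-t / 0.1)
ir /= np.max(np.abs(ir))

dry = np.sin(2 * np.pi * 440.0 * np.arange(sr) / sr)
wet = add_room(dry, ir)
print(len(wet))  # len(dry) + len(ir) - 1 = 71999
```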


Reply #284
I've snipped the deeply embedded quotes - you'll have to read back to get context. Basically Arny claiming that lossy codecs pass all traditional measurements, me saying they don't, but it's a silly argument because Arny hasn't defined what "traditional measurements" he's talking about, then...

I have. Two words: Audio Rightmark.

Audio Rightmark certainly does include measurements that reveal the noise/distortion introduced by mp3 encoders.

See here for mp3 vs wav:

http://www.jensign.com/RMAA/ZenXtra/Comparison.htm

Scroll down to THD and IMD graphs - quite revealing.


Revealing of what?

I see nothing that worries me.

You frequently claim we can safely ignore anything that's more than 100dB down. Those graphs show junk that's a mere 50dB down.

Now, I know that junk 50dB down can be masked, but that's not the point. In your statement, either you are defining a new threshold - i.e. that we can safely ignore anything that's more than 50dB down - or what you're saying is "based on my knowledge of psychoacoustics, the junk I see 50dB down is probably inaudible". So you're not relying solely on Rightmark - you're relying on Rightmark plus the new patented Arnold Krueger psychoacoustic model.

tl;dr - your claim doesn't add up.

Cheers,
David.


Reply #285
How can you say there is "zero added noise" when the dithering process deliberately adds a specific amount of noise?
How much dithering is added once the signal is digitized?
1-bit RMS after most signal transforms - I'm sure you don't need to ask this.
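To illustrate the point about deliberately added noise, here is a minimal sketch of TPDF dither ahead of requantization. The signal, level, and bit depth are arbitrary illustrations, not anything specified in the thread:

```python
import numpy as np

def quantize(x, bits, rng, dither=True):
    """Quantize samples in [-1, 1) to `bits` bits, optionally
    adding TPDF dither spanning +/-1 LSB before rounding."""
    lsb = 2.0 ** (1 - bits)
    if dither:
        # Sum of two uniforms gives a triangular PDF on [-lsb, +lsb]
        x = x + (rng.uniform(-0.5, 0.5, len(x))
                 + rng.uniform(-0.5, 0.5, len(x))) * lsb
    return np.round(x / lsb) * lsb

rng = np.random.default_rng(0)
t = np.arange(48000) / 48000.0
sig = 0.25 * np.sin(2 * np.pi * 997.0 * t)
err_plain = quantize(sig, 16, rng, dither=False) - sig
err_dith = quantize(sig, 16, rng, dither=True) - sig
# Dither raises the error floor slightly but decorrelates it
# from the signal; both stay well under one LSB RMS.
print(np.sqrt(np.mean(err_plain ** 2)), np.sqrt(np.mean(err_dith ** 2)))
```

The trade visible here is the one being argued about: a small, measurable noise increase in exchange for quantization error that no longer tracks the signal.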

Quote
I didn't say A/D system, I said digital system. Yes there are types of discretionary processing that when done in the digital domain may require additional randomization of quantization errors, but for many very useful situations such as transmission and storage of data, no dither is added in the digital domain.
Transmission and storage? No digital volume control then. Or EQ. Or even mixing.

Really - you make some very silly arguments. All to avoid admitting you were wrong.

Though sometimes I feel sure you must know you're posting something wrong, or at least something technically correct but intentionally misleading or incomplete, and do it anyway - just to have a nice argument? I don't know - but it's not helpful.

Cheers,
David.


Reply #286
...snip stuff about valves/tubes...

The phenomenon you are referring to is known as tube microphonics and is caused by defective tubes...

Thanks John.

My point was that these measurements or characteristics are supposed to be defining audio equipment - or at least defining a threshold beyond which we can be sure it's transparent.

For the purposes of this discussion, I don't care whether it's because the tubes are broken, or they all do that - I care about whether the defined measurements catch this fault.

Since no one has been brave enough to properly define the measurements yet, we can't be sure.

Cheers,
David.


Reply #287
the metal in the valves sings along with the music ... tap the valves, you can hear the tapping through the speaker.

I suppose you could call that reverb, but I call it ringing because it has a single dominant frequency. Either way, it's one very good reason to avoid tubes in all audio gear.

Quote
I hope you don't think I'm being too harsh, but this renders the whole exercise a bit meaningless for me. It's turning from "this characterises any audio component" to "this characterises any audio component, except the ones it doesn't". There's a problem: who is to decide which ones it doesn't characterise?

It's not meaningless IMO. The whole point of the "four parameters" is to define what affects audio reproduction. This word is in the title of my AES Workshop, and it's also clear in the script which I uploaded the other day and linked to in an earlier post in this thread. The script is HERE, and the exact wording is:

Quote
The following four parameters define everything needed to assess high quality audio reproduction:

Defining what affects audio reproduction has always been the entire point of my four parameters. I go out of my way to explain in forums (again and again and again) that I don't include intentional "euphonic" distortion in the list because that's a creative tool. As is reverb.

This is all fine and good - I have no problem with this (I think maybe other people do).

You are looking at equipment which aims to be transparent.

Equipment which aims to change the signal is outside the scope of the discussion.

Fine.

My point is really simple:

For your measurements (and the associated pass/fail thresholds) to be believable and useful, they need to be able to raise a red flag which says "this is not transparent" if something isn't transparent. In this context, we should be able to measure anything - and always get that red flag if it's appropriate.

If that's not the case, your measurements don't define transparency in the way that you claim.

If there's a class of audio component - and I mean anything - which your measurements say is "transparent", but can be ABX'd, then your measurement suite is incomplete and/or you've got the wrong measurements. IMO!



Quote
This is why some people get so upset when I claim that a $25 SoundBlaster card has higher fidelity than the finest analog tape recorder. They immediately see red, and go on about how people prefer the sound of analog tape. And tubes. And hardware having transformers. And all the rest. But subjective preference was never my point or my intent.

Even without the possible subjective preference for distortion, I think it's a harder problem to take two sets of measurements and say definitively "X sounds better than Y" - unless you have the simple case where X has the same faults as Y, but at half the magnitude (for example). In the general case, where you have different measurements, it's a multi-dimensional problem, and predicting whether a certain amount of fault type A is more objectionable to human ears than a different amount of unrelated fault type B is a really hard problem.*

So let's not even try to solve that "which is better" problem just yet - let's take baby steps first: define a test which can be applied to any piece of audio equipment, whereby if it passes that test, it's transparent. If it fails the test, it may be non-transparent.**

I initially thought that's what you were doing - I now realise it's not. But I think it would be a great thing to do.

Cheers,
David.

P.S.

* - Put psychoacoustics into the equipment you're measuring, and it simply becomes a battle of which has the more human-like model: the equipment under test, or the measuring equipment. To get a trustworthy judgement of which is better, you have to resort to human ears.
** - I think (Arny disagrees) that anything psychoacoustic based will always be judged as "non transparent" by this hypothetical set of measurements - but I think that's OK for now - better to err on the side of caution for now, than to start with something that can make both false positives and false negatives.


Reply #288
You are looking at equipment which aims to be transparent.

Equipment which aims to change the signal is outside the scope of the discussion.

Fine.

My point is really simple:

For your measurements (and the associated pass/fail thresholds) to be believable and useful, they need to be able to raise a red flag which says "this is not transparent" if something isn't transparent. In this context, we should be able to measure anything - and always get that red flag if it's appropriate.

If that's not the case, your measurements don't define transparency in the way that you claim.


The key phrase is "in the way that you claim".  Ethan's claims were made in the context of a discussion of conventional audio production components - mic preamps, audio interfaces, etc. Taking claims outside of their stated domain is the *first* on Schopenhauer's list of 38-odd ways to win arguments by deceptive means: 38 ways to win arguments by cheating. This list is over 100 years old!

If you consider the recent results of Rightmark testing of MP3 coders, you will see how it works that way.

While I have no problems with the results after additional analysis, there were a few red flags contained therein. There were more than a few things that screamed "Not your father's McIntosh amplifier". ;-)

I gave the test results a "Perceptual coder provisional".

Now some may have not seen the red flags, but then most of you never tested your father's McIntosh amplifier!

The caveat, which the knowledgeable people who post here know, is that the only reliable way to test coders to this day is reliable subjective testing.

If you throw the Rightmark tests out for that reason, then you are missing the point. *Every* conventional component in a record/play chain is a proper target for Rightmark testing. Some want to say that Rightmark is no good because they can't figure out whether a component is conventional or not. I say that if you can't tell a conventional component (e.g. power amp or mic preamp) from an unconventional component (coder or decoder for a perceptual data stream), then you don't belong in the conversation. If you want to take your ball and bat and go home, then do so. Otherwise, man up!

As far as pass/fail conditions go, it appears that the initial interpretation of the Rightmark results for the coders was "fail".  This seems appropriate. Had it been a McIntosh amplifier or a computer audio interface, it would have been a fail. And the initial domain of Ethan's presentation was conventional audio components.

I think the point is that some people seem to be looking for no-brainer tests. No-brainer tests are for people with no brains, right? ;-)




Reply #289
I think the point is that some people seem to be looking for no-brainer tests. No-brainer tests are for people with no brains, right? ;-)
No, the "no-brainer" test will, if passed, guarantee that something is transparent.

The RMAA stuff is a very good starting point.

Now, is anyone going to actually write the tests out properly?

i.e. For each one, list the test stimulus, method of analysis, and pass/fail criteria.

I assume they're all there in the program itself, but they're not in the manual - are they somewhere which could be easily copied/pasted into this discussion?
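As a sketch of what "written out properly" might look like, here is one hypothetical test in that stimulus/analysis/pass-fail form. The function name, the Hann window choice, and the -100 dB criterion are illustrative assumptions, not RMAA's actual internals:

```python
import numpy as np

def thd_test(device, sr=48000, f0=1000.0, level_dbfs=-3.0,
             threshold_db=-100.0):
    """One fully specified measurement, written out.

    Stimulus: 1 s sine at f0 Hz, level_dbfs re digital full scale.
    Analysis: Hann-windowed FFT; THD = total harmonic amplitude
              (2nd-5th) relative to the fundamental, in dB.
    Pass:     THD below threshold_db.
    `device` is any callable mapping input samples to output samples.
    """
    n = sr
    t = np.arange(n) / sr
    x = 10.0 ** (level_dbfs / 20.0) * np.sin(2 * np.pi * f0 * t)
    spec = np.abs(np.fft.rfft(device(x) * np.hanning(n)))
    bin0 = int(round(f0 * n / sr))
    fund = spec[bin0 - 2:bin0 + 3].max()
    harm = [spec[k * bin0 - 2:k * bin0 + 3].max()
            for k in range(2, 6) if k * bin0 < n // 2]
    thd_db = 20.0 * np.log10(np.sqrt(sum(h * h for h in harm)) / fund)
    return thd_db, thd_db < threshold_db

# A mildly clipping "device" should fail the -100 dB criterion...
print(thd_test(lambda x: np.tanh(1.5 * x)))
# ...while a bit-exact wire passes it.
print(thd_test(lambda x: x))
```

A real suite would repeat this pattern for noise, IMD, frequency response, crosstalk, and so on, each with its stimulus, analysis, and criterion spelled out just as explicitly.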

Cheers,
David.


Reply #290
How can you say there is "zero added noise" when the dithering process deliberately adds a specific amount of noise?
How much dithering is added once the signal is digitized?
1-bit RMS after most signal transforms - I'm sure you don't need to ask this.

Quote
I didn't say A/D system, I said digital system. Yes there are types of discretionary processing that when done in the digital domain may require additional randomization of quantization errors, but for many very useful situations such as transmission and storage of data, no dither is added in the digital domain.


Transmission and storage? No digital volume control then. Or EQ. Or even mixing.


So what?

You can make digital eq and mixing as good as you want by using longer data words. You could never do that with analog.

Furthermore, digital EQ and mixing were not practical and generally available until a decade or two *after* there was a CD player on just about every block in the US. We did digital audio on a day-to-day basis and enjoyed great advantages because of it for decades, without either digital mixers or EQ. We still did those things in the analog domain and were intensely content.

So you may want to make totems out of digital EQ and consoles, but to me they are just frosting on the digital cake. The important sound quality advantages came out of digital transmission and storage. The irony is that advances in digital transmission and storage make perceptual coding almost moot.

Quote
Really - you make some very silly arguments. All to avoid admitting you were wrong.


You are claiming to be able to read my mind.

In the face of a person who is as skilled at reliable mind reading as you seem to want to pretend to be, I wonder why I am so stupid as to even post on HA just once!

The real truth is that I'm biting my tongue red and bloody in the face of some incredibly obtuse talk.

There's an old saying that to a sufficiently uneducated mind, modern technology appears to be magic. A corollary seems to be that to a sufficiently uneducated mind, modern technology appears to be stupid.



Quote
Though sometimes I feel sure you must know you're posting something wrong, or at least something technically correct but intentionally misleading or incomplete, and do it anyway - just to have a nice argument? I don't know - but it's not helpful.


This from someone who posts that every audio component must pass a -120 dB difference test to be sonically transparent?  The Stereophile forum is over there!

LOL!


Reply #291
Well, it's pretty easy to do a null test and listen to the leftover difference products. To my ear it sounds like a good part of the residue is transient information, which would jibe with the subjective reports that lossy files tend to lack depth/dimensionality or sound somehow "flatter".


Looks to me like a potential TOS 8 infraction.


Reply #292
Ah, but in the case of a pipe organ you're looking at the wrong thing.


Says who?

I was up in some pipe organ chambers last night, and what I saw belied just about everything that you say.


Quote
In the case of a pipe organ the pipes do not function as single units, they are part of an array inside a tone cabinet.


Incorrect. Some pipes are in the open, and some are in cabinets. Furthermore, the cabinets are generally at least partially open in actual use.

Furthermore, my discussion was of bass tones and subwoofers, and the corresponding pipes in a pipe organ are always out in the open.

Finally, the purpose of the cabinets is to be a sort of acoustic EFX box, IOW they are there to intentionally and discretionarily distort the sound. That puts your discussion of them in the same category as someone who complains about the poor frequency response of tone controls when placed off-center.




Reply #293
The phenomenon you are referring to is known as tube microphonics and is caused by defective tubes


This is incorrect. Virtually all tubes, such as those commonly used in legacy audio and for EFX, are far, far more microphonic than their SS equivalents. Tubes need not be defective to be microphonic. There's a reason why shock mounts have been commonly used with tubes in critical applications all along.

Take just about any piece of tubed audio equipment, subject it to some vibration, and a good FFT analyzer will show measurable amounts of both AM and FM distortion. A corresponding piece of SS gear will perform several orders of magnitude better.




Reply #294
This from someone who posts that every audio component must pass a -120 dB difference test to be sonically transparent?  The Stereophile forum is over there!

LOL!


Sorry, I tried to be polite, but this is getting pathetic. Either you are just getting stubborn in your old age, or simply lacking even basic reading comprehension or logic skills. Maybe it is true that one becomes what he fights after just enough time.

2bdecided has made it very clear that, since it is hard to define a test suite with fine-grained perceptual thresholds, it could make sense to define a set of safe thresholds. That would possibly include a hefty safety margin compared to common listening environments, but, for example, even -120 dB is realizable with commodity technology today. The benefit of a suite like this would be the ability to declare a component transparent once and for all. As long as nobody was able to prove that the given test suite was inappropriate for declaring 100% transparency, the transparency claim could be upheld for a whole class of components without having to battle over each one separately. Of course, before you try to play word games again: such a test suite would not prove that a component which doesn't meet its criteria is not transparent. But that wasn't the point of the proposal.


Reply #295
I think the point is that some people seem to be looking for no-brainer tests. No-brainer tests are for people with no brains, right? ;-)
That's me! And, to make things worse, I don't even have golden ears.
So please help us define a set of objective, repeatable measurements to verify transparency.
(NB: this is not to say that devices that fail the test can't be transparent.)


Reply #296
It seems to me that for Ethan's claims to be valid, the device under test must be a black box with respect to the test.  Hence, problems in perceptual coding boxes must be captured if we are to accept his claims.

The irony for this discussion is that given the design aims and criteria for coders, for any given level of measurement shortcoming (like the IMD of the MP3 linked above), it should be the LEAST perceptually noticeable effect one could achieve for that given result. I.e., the 0.2% IMD in the example should be the least-bad 0.2% IMD as judged by a wide population, if the coders are finding success.

Of course the codecs are multidimensional so actually my logic only applies to the whole suite of measurements, not a single one taken on its own.

Now that said, if the reproduction system used in the listening tests (and surely this is true for public tests) is deficient in some aspect, then that will mask the audibility of errors in that aspect. For instance, the linked results show what is visually pretty massive smearing of a 1 kHz transient. If the loudspeakers used for the test compress dynamics (and almost all do), then the perceptual test tells us more about the limitations of the loudspeakers than the limitations of human perception.

Very anecdotally, I have had some success in blind testing of codecs by focusing on transients and how "free" they sound to me.


Reply #297
Ethan, how can you say that a $25 converter card that only publishes specs at 1kHz (presumably because the response at other frequencies is all over the map) is better quality than my $100,000 Studer that has very tight published (and verified) specs from 20 Hz to 20kHz?


I see a number of false presumptions above:

(1) The false presumption that published specs somehow always limit the performance of a piece of equipment to be worse than the specified performance under all other conditions.

(2) The false presumption that price/performance is always the same, regardless of technological developments; that if something costs more, it *has* to be better.

(3) The false presumption that specifications that are more tightly specified are always better than specifications that are loosely specified or not publicly specified.

(4) The false presumption that performance which is specified from 20 Hz to 20 kHz is always better than performance that is specified over slightly narrower ranges.

The reality is that the core technology of a $25 SB card is sigma-delta converters, whose performance is inherently incredibly tightly controlled. They operate almost totally in the digital domain, so even their noise floors are tightly controlled. Typically, they either beat spec or don't work at all.

In contrast, an analog recorder's performance is highly dependent on routine maintenance, media, and several large random variables. Its performance often changes measurably and even audibly while it is running. While it is predictable that its performance will change during a recording session, it is not predictable to what degree it will change or in which direction. No analog recorder has been found to be sonically transparent in a sensitive ABX test.

Furthermore, due to wavelength effects, the response of an analog recorder below 80 Hz is rarely anything like flat. The operative phrase is "Head bumps".  In contrast, other than a low end roll-off due to coupling capacitors in the analog domain, the LF performance of a sigma-delta converter is nearly perfect. Finding that a sigma-delta converter is sonically transparent at reasonable quality levels, bit depths and sample rates is commonplace.


Reply #298
This from someone who posts that every audio component must pass a -120 dB difference test to be sonically transparent?  The Stereophile forum is over there!

LOL!



2bdecided has made it very clear that, since it is hard to define a test suite with fine-grained perceptual thresholds,


The means by which this may have been accomplished for certain naive readers is that the size of the grains was made infinitesimal, and the suite was required to work for everything from a subatomic IC to an audio system the complexity of a Boulder Dam-sized collection of the most miniaturized microelectronics designed by Lex Luthor. ;-)

If you define an impossible problem, I can pretty well guarantee that it won't be solved this week.

IMO, everybody who can't tell the difference between a mic preamp and a perceptual coder needs to first learn how to do that.

Some people around here would seem to need to read up on Schopenhauer's 38 stratagems, so that they can at least up the complexity of their pointless rhetoric! ;-)


Reply #299
You frequently claim we can safely ignore anything that's more than 100dB down. Those graphs show junk that's a mere 50dB down.


The obvious flaw in the statement above is that saying we can safely ignore anything that's more than 100 dB down does not necessarily preclude saying that in many, or at least some, cases we can safely ignore things that are as little as 50 dB down.

That doesn't even require masking; often applying Fletcher-Munson is enough.
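To illustrate the Fletcher-Munson point, the standard A-weighting curve (a crude, standardized codification of equal-loudness corrections) shows how heavily the ear discounts low-frequency "junk"; the formula below follows the published IEC 61672 definition:

```python
import math

def a_weight_db(f):
    """A-weighting (IEC 61672) in dB at frequency f in Hz: a crude,
    standardized stand-in for equal-loudness corrections."""
    f2 = f * f
    ra = (12194.0 ** 2 * f2 ** 2) / (
        (f2 + 20.6 ** 2)
        * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2))
    return 20.0 * math.log10(ra) + 2.00

# ~0 dB at 1 kHz by construction; low frequencies are heavily
# discounted, which is part of why "50 dB down" junk can matter
# far less than the raw number suggests.
for f in (50, 100, 1000, 4000, 16000):
    print(f, round(a_weight_db(f), 1))
```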

If I could just get some people to use good logic and rhetoric!

If I could just find a few good people who knew what a masking curve or an audibility curve was and how to apply it to a technical test report!