Topic: [TOS #2, #5, #8, cluelessness and their responses]

[TOS #2, #5, #8, cluelessness and their responses]

Reply #50

This is amazing.  I'm laughing, but annoyed at myself for spending so much time on this.  This is a cult-like religion for you guys, and even simple words get new meanings to suit your whims.  I am stunned.  Lunatics running the asylum, indeed.

[TOS #2, #5, #8, cluelessness and their responses]

Reply #51
My previous answer to Notat's post was deleted in the lawnmower-fest that went through this thread this morning... But I think that's a breathtaking assertion to make.  I don't agree at all.
Before further forking this thread to discuss mathematical precision, please do us the favor of searching the forums for previous discussions. It is a topic most of us are quite familiar with. Here's one of the more recent discussions.

The implication you're making is that analog console sonics are non-differentiable (for modern kit). I think that's a bold statement to make that will get you in trouble later.


Arnold said that analog consoles are linear. Linearity doesn't say a whole lot about how a console sounds, just that it doesn't generate certain types of distortion.


Exactly.  Systems can have only 4 general kinds of signal response faults: linear distortion (frequency and/or phase response errors), nonlinear distortion, random noise, and coherent interfering signals. There are no other known kinds of system signal response faults. The list is constrained by the 2-dimensional nature of electrical signals. Any of them can cause a system to be readily differentiated by means of listening if they are severe enough. It is all about quantification.

I only mentioned nonlinear distortion, so it is hard to understand how one might logically progress from my statement to a statement that system (in this case analog console) sonics are non-differentiable. Straw man, anyone? ;-)

I'm willing to stipulate that analog console sonics are often readily differentiated based on noise, interfering signals, and linear distortion.  IME one of the most common causes of differentiable sonics may be frequency response variations caused by improperly centered, misdesigned, or otherwise poorly implemented tone controls.  Another problem that I often observe is that it is virtually impossible to readjust the controls of an analog console so that you recreate the same mix within, say, +/- 0.1 dB. If you manually move the controls during the mix, then that is nearly impossible to recreate precisely as well.  Also, it is not unusual to find mic preamps (a standard component of most consoles) that relate to audible differences because they load some microphones differently in ways that affect the microphone's frequency response. It is not uncommon to find mic preamps with built-in fixed (but not always well-documented) roll-offs on the order of -3 dB at 50 or 80 Hz, which can be easy to hear as well.

Doing a proper listening experiment to compare analog console sonics seems like a probable waste of time now that good digital consoles are so readily available.


Pull the other one - it has got bells on! Please!

Digital consoles don't come anywhere close to the audio quality of good analog consoles. Perhaps their "specs" might appear to be better to the uninitiated, but specs do not tell the whole story, as any audio engineer who actually spends a significant amount of time using them will tell you. They have advantages (for some people, anyway), but sound quality ain't one of them. (It's not really that they sound "bad" - in most cases - it's that they don't really sound GOOD.) Their primary advantages are in recallability and automation of all functions over very large mixes and onboard integration of advanced signal processing. They also generally have a much smaller physical "footprint" for a given number of channels due to stacking of functions (a major drawback in my personal opinion) and are much cheaper in terms of channel count and number of functions per dollar, which makes them very attractive to bean counters. They are also much cheaper to build due to reduced parts count, maximizing profits for the manufacturers.

Tell me, Arnold, do you actually use any consoles on a regular basis and if so, what are they?

"It is not uncommon to find mic preamps with built-in fixed (but not always well-documented) roll-offs on the order of -3 dB at 50 or 80 Hz, which can be easy to hear as well."

Let me guess - you actually own a console and it was made by Mackie? Or was it Behringer?

[TOS #2, #5, #8, cluelessness and their responses]

Reply #52
I'm a lot into vintage pinball. I imagine a Hydrogenaudio pinball edition would have a

KRUEGER
SUPER
POSTING
FRENZY

MULTIBALL!!!

mode, that would just have been lit.

[TOS #2, #5, #8, cluelessness and their responses]

Reply #53
My previous answer to Notat's post was deleted in the lawnmower-fest that went through this thread this morning... But I think that's a breathtaking assertion to make.  I don't agree at all.
Before further forking this thread to discuss mathematical precision, please do us the favor of searching the forums for previous discussions. It is a topic most of us are quite familiar with. Here's one of the more recent discussions.

The implication you're making is that analog console sonics are non-differentiable (for modern kit). I think that's a bold statement to make that will get you in trouble later.


Arnold said that analog consoles are linear. Linearity doesn't say a whole lot about how a console sounds, just that it doesn't generate certain types of distortion.


Exactly.  Systems can have only 4 general kinds of signal response faults: linear distortion (frequency and/or phase response errors), nonlinear distortion, random noise, and coherent interfering signals. There are no other known kinds of system signal response faults. The list is constrained by the 2-dimensional nature of electrical signals. Any of them can cause a system to be readily differentiated by means of listening if they are severe enough. It is all about quantification.

I only mentioned nonlinear distortion, so it is hard to understand how one might logically progress from my statement to a statement that system (in this case analog console) sonics are non-differentiable. Straw man, anyone? ;-)

I'm willing to stipulate that analog console sonics are often readily differentiated based on noise, interfering signals, and linear distortion.  IME one of the most common causes of differentiable sonics may be frequency response variations caused by improperly centered, misdesigned, or otherwise poorly implemented tone controls.  Another problem that I often observe is that it is virtually impossible to readjust the controls of an analog console so that you recreate the same mix within, say, +/- 0.1 dB. If you manually move the controls during the mix, then that is nearly impossible to recreate precisely as well.  Also, it is not unusual to find mic preamps (a standard component of most consoles) that relate to audible differences because they load some microphones differently in ways that affect the microphone's frequency response. It is not uncommon to find mic preamps with built-in fixed (but not always well-documented) roll-offs on the order of -3 dB at 50 or 80 Hz, which can be easy to hear as well.

Doing a proper listening experiment to compare analog console sonics seems like a probable waste of time now that good digital consoles are so readily available.

Digital consoles don't come anywhere close to the audio quality of good analog consoles. Perhaps their "specs" might appear to be better to the uninitiated, but specs do not tell the whole story, as any audio engineer who actually spends a significant amount of time using them will tell you. They have advantages (for some people, anyway), but sound quality ain't one of them. (It's not really that they sound "bad" - in most cases - it's that they don't really sound GOOD.) Their primary advantages are in recallability and automation of all functions over very large mixes and onboard integration of advanced signal processing. They also generally have a much smaller physical "footprint" for a given number of channels due to stacking of functions (a major drawback in my personal opinion) and are much cheaper in terms of channel count and number of functions per dollar, which makes them very attractive to bean counters. They are also much cheaper to build due to reduced parts count, maximizing profits for the manufacturers.

It should be noted that the sonics of a device used for audio production are not always chosen purely on the basis of "specs", for several reasons. The first is that "specs", as printed on the typical factory-produced spec sheet, do not tell the whole story.
This is because:
(1) To provide a proper analysis of as complex a device as a mixing console, it would be necessary to provide a spec sheet the size of a small novel, which no company is likely to do and very few customers would be qualified to interpret, even if they could somehow be enticed to read the document in its entirety. As it happens, I used to run the electronics shop for Bill Graham's sound reinforcement company, and it was my job to provide analyses of our consoles. The typical data sheet FOR EACH CHANNEL that we generated was as complex as the data sheet that many console manufacturers provide for an entire console - and guess what? There is, in fact, significant deviation between individual channels, even individual mic preamps. And I'm talking top of the line gear here from companies like API, Soundcraft, Midas, and Chaos Audio.
(2) Testing methodologies are not standardized. Neither are the formats for representing the test results. There is sufficient variation between report formats used by different companies so as to make cross-company comparisons of spec sheets pretty much meaningless. About all you can do is look at the sheet and say, HMmmm, that looks pretty good. Yeah, that looks pretty good, too!

But even that is misleading, because the test environments used by different companies vary quite a bit. For example, company A, a maker of top of the line professional recording consoles and components, tests their gear under worst case conditions - with low line voltage, in an environment full of RFI, after it's been running for several hours under a stack of newspapers so that it's nice and hot, with a wideband input signal and an output load that places maximum strain on the line amps. The gear survives these testing conditions and produces a respectable spec, which they then provide to the customer as the performance one can reasonably expect of the equipment operating in actual field conditions, which are seldom if ever ideal.

Company B, a manufacturer of inexpensive prosumer equipment, tests their gear at optimum line voltage, inside a Faraday cage so there is NO outside interference, with a band limited signal at nominal levels, into an ideal output load, and functioning at an ambient temperature of 75 degrees Fahrenheit. Unsurprisingly this produces a paper spec that also looks great - in fact it looks better than the published spec of Company A's gear, which costs over 10 times as much.

Does this mean that Company B's gear is better than Company A's?

No, it does not.

Does this mean that Company B's gear is even close to as good as Company A's?

No, it does not.

What does it really mean?

It means that Company B optimizes their testing methodology for advertising purposes, while Company A produces test results intended to reflect real world, even worst case, operating conditions (which would most likely destroy Company B's products within 15 minutes or less).

How do I know this? Well, because back when I was working for Graham's sound company I destroyed an awful lot of gear that was submitted to us for testing, that's how I know.

There is a great deal of the same sort of specsmanship prevarication that goes on today in the rating of digital consoles.

But this isn't all.

There is another reason that high quality analog consoles are superior to digital consoles, and that is the fact that an audio console is NOT a scientific instrument, it is an artistic tool. In modern multitrack music production the concept of "fidelity" is largely a red herring, as there is, in fact, no "original performance" to be faithful to - the result is an illusion. No microphone detects sound the same way that the ear does, and no listener ever listens to a band by putting his ear up one inch from every speaker and drum head. Even the sound of acoustic instruments varies greatly with mic placement and the choice of microphone. It is the job of the recording engineer to craft these individual elements into an artistic whole, and to do this he uses tools which function in a euphonious manner; these are not always the tools that show the best paper specs. You can't tell how something SOUNDS by looking at a piece of paper. You just can't. You have to actually LISTEN to it.

An aside concerning a couple of other points you raised:

The type of low frequency rolloff in mic preamps is in fact common in preamps of less expensive, prosumer level, consoles, partly because these consoles often lack a high pass filter switch - so the high pass is built in always on, saving the cost of the switch. Professional level consoles, however, have a switchable high pass filter, sometimes one that has not only a switch but also a variable rolloff control.

Concerning your statements about resetting controls and resulting errors - If you're talking about measurement errors due to misadjustment of EQs, this is easily dealt with when measuring professional consoles by switching the EQ out of circuit, a function found on all professional quality consoles. If you're talking about recall of mix settings, this may be dealt with in a variety of ways, depending on the particular console. Some consoles, such as the SSL or the Euphonix digitally controlled analog desks, have automated recall of all settings. Some non-automated consoles, such as the API, use rotary switches instead of pots for precisely repeatable settings. On consoles lacking either, the precise settings may be marked directly on the channel faceplate with a sharp grease pencil, available in a variety of colors for different cues. Low tech, but it works just fine.

(This post is based on one that got binned and has been edited and amended to better conform to the "culture" of this site.)

[TOS #2, #5, #8, cluelessness and their responses]

Reply #54
So, basically, when Arnold says "linear distortion" then says "non-linear distortion", he means the same thing?

Are you being purposely obtuse or are you really not that smart?

This isn't exactly rocket science you know!

Linear distortion is any change in the signal that is not level dependent.

Non-linear distortion IS level dependent.


Wait a minute.

Distortion is, by definition, non-linearity.

So you've got "linear non-linearity" and "non-linear non-linearity"?

If the non-linearity does not change regardless of level it's linear? Isn't that an oxymoron?

Methinks that what Arnold said was probably not exactly what Arnold meant to say.

[TOS #2, #5, #8, cluelessness and their responses]

Reply #55
I must confess...I've never heard of "linear distortion".  What is it?  Isn't ALL distortion by definition non-linear?

Systems can have only 4 general kinds of signal response faults: linear distortion (frequency and/or phase response errors), nonlinear distortion, random noise, and coherent interfering signals.

Maybe you wouldn't classify frequency response errors as "distortion". Not everyone does. It's a just a terminology thing. Nothing to get hung up on.



So, basically, when Arnold says "linear distortion" then says "non-linear distortion", he means the same thing?


Not at all.

I believe that the following reference has just lately been cited, but the question above shows that people must not be taking the citation very seriously:

link to Wikipedia article about distortion

So, to repeat something that is pretty fundamental in audio:

1. Linear distortion - signal processing errors that are commonly quantified by frequency response and phase response estimates.

2. Nonlinear distortion - processing errors that are commonly quantified by THD, IM, jitter, and wow and flutter.

3. Noise - a random signal source that in nature usually has a thermal origin. Pseudorandom noise is usually the result of trying to approximate random noise by digital means.

4. Interfering signals - things like power line hum, communication signals, harmonics from switchmode power supplies, etc.
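A quick numerical sketch of the difference between items 1 and 2 (my illustration, not from the thread): a linear process only changes the level/phase of an input sine, while a nonlinear one - here, hard clipping - is level dependent and creates new frequencies.

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 1000 * t)  # 1 kHz test tone, exactly 1000 cycles

# Linear "distortion": gain/frequency shaping (here, simple attenuation).
# Only the level changes; no new spectral components appear.
linear = 0.5 * x

# Nonlinear distortion: hard clipping. This generates new frequencies
# (odd harmonics of the 1 kHz input).
nonlinear = np.clip(x, -0.7, 0.7)

def active_bins(sig, thresh=1e-6):
    # Indices of FFT bins holding non-negligible energy
    spec = np.abs(np.fft.rfft(sig)) / len(sig)
    return np.flatnonzero(spec > thresh)

print(len(active_bins(linear)))     # only the 1 kHz bin
print(len(active_bins(nonlinear)))  # 1 kHz plus harmonic bins
```

The clip level (0.7) and threshold are arbitrary choices for the demo; the point is only that the linear case keeps a single spectral line while the clipped case sprouts harmonics.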


While I also use Wikipedia for non-critical references, they are by no means a technical authority and on a number of occasions have posted information which is in fact wrong. Wiki entries are posted and edited by what is essentially a committee of laymen - for technical definitions a higher standard is in order.

[TOS #2, #5, #8, cluelessness and their responses]

Reply #56
0.002% is -94 dB. It is impressive but, as compared to state of the art, not IMPRESSIVE.
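For reference, the percent-to-dB conversion used above is 20·log10 of the fraction; 0.002% works out to about -94 dB:

```python
import math

thd_percent = 0.002                        # distortion spec as a percentage
thd_db = 20 * math.log10(thd_percent / 100)  # convert fraction to dB
print(round(thd_db))  # -94
```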

Using a 1 kHz stimulus is an accepted means of doing this sort of measurement. The fact that they went to the trouble to describe how they did the measurement puts them above average in this department.

If you want to get a better idea of how equipment will sound with more realistic stimulus, you can do swept versions of the test - plotting distortion vs. frequency and/or amplitude. (Using DSP it is now possible to do these tests with normal program material as the test stimulus. The live sound guys routinely do this to assess acoustic performance of their systems during shows - cool stuff.)

For working equipment, the resulting graphs consume printed space, are boring, and not many people know how to read them.

Accepted by you, perhaps. In professional circles the accepted means is over a frequency range of 20 Hz to 20 kHz, minimum.

The fact that they mention - in very tiny print - that it's a 1 kHz measurement simply means that they wish to adhere to the letter of the US law as pertains to false advertising, which, I suppose, is somewhat commendable given that some companies don't even do that and federal enforcement of truth in advertising laws is currently at an all time low.


[TOS #2, #5, #8, cluelessness and their responses]

Reply #57
Arnold, do you really assert that the "niche" aspect of the market is relevant?  Is this not about objectivity?


Common sense says that the presence of a few exceptions does not necessarily invalidate the rule.


"Common Sense" and scientific testing are two very different things.

This is one of the primary reasons that the results of ABX testing are so frequently misinterpreted and alleged to prove things that, in fact, they do not.

[TOS #2, #5, #8, cluelessness and their responses]

Reply #58
Not all analog mixers are op-amp based. Many of the better ones use discrete circuitry. In fact, one of the primary reasons for the popularity of much "vintage" gear is that it does NOT contain opamps. Any gear that operates in Class A does not, by definition, use opamps because there ain't no such thing as a "Class A opamp".


An op-amp is a design topology and does not in any way imply an IC.  There are discrete op-amps as well as IC ones.  Solid-state feedback power amplifiers use the op-amp topology.  Nobody makes a class A IC op-amp because of power dissipation considerations. For the discrete case, many class A op-amps exist.  And by putting a DC constant-current load at an IC op-amp output, one can easily make it class A up to a current limit.

No, nobody makes a Class A op-amp because op-amp topology precludes class A operation.  Class A is a particular topology and op-amp design doesn't use it. Please refer to your basic texts on circuit design. Op-amps use a bi-polar circuit configuration (that's how they get the + and - inputs), whereas Class A is by definition uni-polar. (I'm not certain if my terminology is exactly correct by modern standards here, please don't play the semantics card.)

And yes, I'm well aware of non-chip op-amp designs. I even used to own a few REALLY ANCIENT military surplus op-amp modules that employed 12AX7 tubes as the active elements. API gear is full of discrete op-amps. You notice they don't claim class A operation for any of their stuff...... I could go on, but you get the idea, I'm sure.........

[TOS #2, #5, #8, cluelessness and their responses]

Reply #59
You tried to do something very good and commendable.  I applaud that.  But when you open the door of the steel rebar shark cage and go swimming with the big fishes, you have to make sure your kit is in order.


Two points.

One is that believe it or not, the womb does not contain any big fishies of the kind that Ethan has not already dealt with.

The second is that there was at least one fishie on stage with Ethan, and had Ethan said anything unorthodox, that fishie would have eaten Ethan alive on the spot - the fishie being James Johnson (JJ).

Quote
My only concern, is that in 5 years this stuff starts coming back and turning into "settled fact", when in reality it is not quite settled.


The presentation was at an AES meeting, which has additional large fishies in it that would have been happy to race JJ for pieces of Ethan to gnaw on.

Quote
Your video has lots of GREAT stuff in it, and yet just enough stuff that's problematic, to cause a "fail".


Like Ethan I'm looking for the stuff that is problematical. Where's the beef?


Quote
Here's what I suggest:  Consider this whole process to be a form of peer review.  Socratic method, all that kind of jazz.  Take this challenge, and apply it to what you've done in that video, and come back with something that leaves me with no other option but to shut up and say "yup."


I see no challenge.

Quote
By the way...I didn't say that different dithers were audible.  What I said was that different dithers are chosen based on the downstream processing that will occur.  Some kinds of dither are "fragile" with subsequent processing.


New Science, anybody?  If everything downstream is doing its job, there is no such thing as fragile or durable dither. If you have concerns about downstream processing, you just use more dither than the bare minimum.
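A minimal sketch of the point about dither doing its job (my illustration, with arbitrary signal and bit-depth choices): rounding a low-level tone without dither leaves level-dependent distortion concentrated in harmonic spikes, while ~2 LSB peak-to-peak TPDF dither spreads the error out as a benign, signal-independent noise floor.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 48000
t = np.arange(fs) / fs
x = 0.01 * np.sin(2 * np.pi * 1000 * t)  # low-level 1 kHz tone
step = 2.0 / 2**16                        # one LSB of a 16-bit quantizer

def quantize(sig, tpdf=False):
    # Optional TPDF dither: difference of two uniforms, +/- 1 LSB peak
    d = step * (rng.random(len(sig)) - rng.random(len(sig))) if tpdf else 0.0
    return np.round((sig + d) / step) * step

def spectral_crest(err):
    # Peak-to-mean ratio of the error spectrum: truncation distortion is
    # concentrated in spikes, dithered error is spread out flat.
    spec = np.abs(np.fft.rfft(err))
    return spec.max() / spec.mean()

undithered = spectral_crest(quantize(x) - x)
dithered = spectral_crest(quantize(x, tpdf=True) - x)
print(undithered > dithered)  # distortion spikes vs. flat noise floor
```

The dithered error carries a slightly higher total noise power, which is the trade being discussed: a little more (but well-behaved) noise in exchange for no program-correlated distortion products.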

Also, perhaps you missed the discussion about self-dithered program material? It is for real. I first encountered it in a digital transcription of a 1/2" 15 ips stereo tape. The person who did the transcription provided digital files of the same tape transcribed using various kinds of dither. In most cases there was no discernible change to the noise floor because of the relatively large amounts of analog tape noise and other environmental noise that was already there.


"In most cases"....... HMmmmm....... That would imply that in at least some cases there was, in fact, a discernible change.

And it's interesting that you choose an example that contains a large amount of pre-existing masking noise.........

[TOS #2, #5, #8, cluelessness and their responses]

Reply #60
A definition doesn't become circular just because two sentences contain the same term.


No, but it runs the risk of being circular when each of those sentences uses a word to define itself.

Quote
The definition was indeed very precise.


Yes, precisely circular.


Quote
Deriving a to be defined term from an already well defined one, as linear equations are, is not circular.


I've never actually seen a text that stopped after defining linear as being not nonlinear, and nonlinear as being not linear.

Mostly they use words like transitive and intransitive, right?

The definition of nonlinear as being something that creates new frequencies is actually pretty orthodox in audio.  I could cover this forum with references, but that would seem to be a poor use of my time, given the ongoing treatment of the 2 that I provided.

[TOS #2, #5, #8, cluelessness and their responses]

Reply #61
if your tubes produce ringing in the speakers when you tap on them they should be replaced with ones that don't, or at least wrapped with rubber bands ...



where there's smoke, there's fire, and where there's rubber smoke, there's a horrible stench.  But hey, we don't hear with our noses, do we?

[TOS #2, #5, #8, cluelessness and their responses]

Reply #62
The next question is which would they label high-fidelity? For me "high-fidelity" brings to mind my grandfather's open reel tube rig. That's the sound and era I associate with "high-fidelity".

Why not be more explicit and ask which has more "accurate reproduction"?
Because high fidelity means accurate reproduction.



Soundblaster is therefore the "tiger woods" of the audio chain.

[TOS #2, #5, #8, cluelessness and their responses]

Reply #63
and Ethan, I am MOST ASSUREDLY ALSO not talking about intentional distortion and effects introduced into the system.  I'm talking about the fidelity of the system.  I'm talking about how well sound was captured.

Okay, just to be perfectly clear, you still refuse to acknowledge that I distinguished between "what some people think sounds better" and "what is most accurate" even after showing you links to my posts from 3 months ago? And it's still your position that I have only recently "started making a new distinction between reproduction systems and production systems?" And MM says that I'm the one who needs to post a retraction!

Quote
As far as your invocation of the "generations" argument vis a vis the soundblaster verses the studer...I think it is a false strawman.

Really? Why? If a medium sounds fairly clean after one generation, but you need to assess the degradation anyway using only ears, why is a multi-generation test not suitable for both the goose and for the gander?

Quote
But the "soundblaster" problem is a two-fold problem.  The data storage and retrieval aspects are great, but it's the conversion process itself that's damaging.

You have said that countless times. Yet now, three months later, Ethan is the only person who has ever shown data. And lots of data at that! Where is the data from dwoz showing the "damage" done by one generation through a SoundBlaster card? Where are dwoz's audio example files proving "The smokestack is sheared off and damaged the first time through?" If you'd spent 1/100th as much time doing some tests as you've spent posting about my AES video in all the forums, you'd be a lot more credible.

--Ethan


Let's see - a few posts back you said the Delta card had noise 90 dB down below 2 kHz. So let's take 20 iterations of that.
90,87,84,81,78
75,72,69,66,63 (oops, worse than the Studer already)
60,57,54,51,48
45,42,39,36,33
30,27,24,21,18

That's right, after 20 iterations the noise floor of your Delta card is 18 dB down below 2 kHz. I'd say that's pretty significant, wouldn't you? In fact, I'd say that performance is pretty poor - far from "transparent".
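The table's arithmetic can be reproduced in a couple of lines; note that it lists 25 values, i.e. the starting floor plus 24 further generations, and that the 3 dB rise per pass is the poster's premise, not a measured figure:

```python
start_db = 90       # claimed noise floor, dB below signal, under 2 kHz
rise_per_pass = 3   # assumed worsening per generation

# Starting value plus each successive generation, as in the table above
floors = [start_db - rise_per_pass * n for n in range(25)]
print(floors[0], floors[9], floors[-1])  # 90 63 18
```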

[TOS #2, #5, #8, cluelessness and their responses]

Reply #64
It is my contention that the state of the art of audio measurement and the state of the science of human audio perception at this time are not accurate enough to really adequately quantify what we're discussing.


John, it is easy to show that your ideas about the state of the art of audio measurement and the state of the science of human audio perception are from the stone age.


Quote
I'm not saying that the electronic measurement equipment isn't good enough - I'm sure it can measure the signal quite well.


And in ways that it is quite clear you have no working knowledge of. At least you've done a good job of hiding it and ignoring it when pointed out to you.

Quote
The problem is that we don't know how to interpret the measurements properly and in some cases we may not understand what needs measuring.


Again John, at your apparently highly limited level of understanding, that might be true.

There seem to be a lot of misapprehensions about what we don't know about these things - apparently even from people who should know better, as in they earned PhDs in related areas.



Hmmm... Interesting. As I've mentioned before, I was the service and test technician for the audio department of Bill Graham's FM Productions. My test bench included a Sound Technology distortion analyzer and scopes, signal generators, and various other test equipment from Tektronix, Hewlett-Packard (now Agilent), Fluke, and others, which I used on a daily basis in carrying out my duties analyzing, evaluating, repairing, and modifying professional audio equipment. This was only one of a number of jobs I've held servicing audio equipment. So I'd say that I have at least a modicum of working knowledge of the use of audio test gear.

[TOS #2, #5, #8, cluelessness and their responses]

Reply #65
What are the results of applying what you know about masking and the variable sensitivity of the ear with frequency to the rightmark curves you made? Presume that FS = 90 dB.
I didn't make them. I wouldn't assess audio codecs in this way. And now you're making me part of the measuring equipment! How on earth can that work?!


Anyway, I won't duck your question: Looking at the curves, my experience tells me (even without the labels) that it's gone through an MPEG-like filterbank and/or psychoacoustic model based noise addition process.


That's a truism, so you have thus far ducked the question.

Quote
Hence my experience tells me that sine wave tests are pretty much irrelevant in this context (they won't reveal the faults of the codec), therefore these graphs are pretty much irrelevant, and whether the codec under test is any good will have to be determined using another method altogether.


That would be among other things:

An appeal to your own personal authority on multiple points.
i.e., Ducking the question.

Quote
Anyway, googlebot keeps explaining the point very patiently and compactly, so I don't think I need to repeat it.


That would be among other things:

An appeal to an authority with totally unknown qualifications who happens to agree with you.
i.e., Ducking the question.

Quote
Some numbers have emerged in this thread. That's a good start.


They were wrong, as I have shown by means of something that seems to be very rare around here - patient calculation based on actual orthodox knowledge. Actually, about a minute with an Excel spreadsheet.

[TOS #2, #5, #8, cluelessness and their responses]

Reply #66
Hmmm... Interesting. As I've mentioned before I was the service and test technician for the audio department of Bill Graham's FM Productions.


God help Bill Graham!

I'm serious.

Quote
My test bench included a Sound Technology distortion analyzer and scopes, signal generators, and various other test equipment from Tektronix, Hewlett-Packard (now Agilent), Fluke, and others, which I used on a daily basis in carrying out my duties analyzing, evaluating, repairing, and modifying professional audio equipment. This was only one of a number of jobs I've held servicing audio equipment. So I'd say that I have at least a modicum of working knowledge of the use of audio test gear.


Fancy toys and an impressive-sounding title don't prove anything at all.

I own (that is, I legally own it - it's not my boss's) an Audio Precision test set that I take great pleasure in not using because I can get faster, more sensitive results out of a PC with an M-Audio interface and some freeware.

I also personally own any number of pieces from HP and Fluke which I do actually use. ;-)

I've been professionally engaged to design, install, operate and maintain everything from consumer audio gear, to military radar systems that spread over a quarter mile square, to computer systems that filled very large computer centers powered from multiple substations, to SR systems involving dozens of mics, to automotive systems, to I guess you name it. Then there is my lifelong audio hobby. I truly was designing and building my own audio amps in 1960.

Your comments about recapping and biasing pretty well tell a story - someone who misunderstands stuff he reads, and apparently did not use all that fancy gear to a reasonable extent.

Ever put the caps you pull on a bridge?

Ever do a thorough set of bench tests before and after? 

Seems like not so much.

[TOS #2, #5, #8, cluelessness and their responses]

Reply #67
Is it worth my time unpicking this post?

No, but it's lunch, and it's raining, so here goes...

What are the results of applying what you know about masking and the variable sensitivity of the ear with frequency to the rightmark curves you made? Presume that FS = 90 dB.
I didn't make them. I wouldn't assess audio codecs in this way. And now you're making me part of the measuring equipment! How on earth can that work?!

Anyway, I won't duck your question: Looking at the curves, my experience tells me (even without the labels) that it's gone through an MPEG-like filterbank and/or a psychoacoustic-model-based noise addition process.

That's a truism, so you have thus far ducked the question.

"A truism is a claim that is so obvious or self-evident as to be hardly worth mentioning" (wikipedia).

The fact that the spectral shape of the noise matches that introduced by well-known lossy encoders is a truism?

I'd have thought it was rather fundamental to the matter at hand. And it needs checking. IMO!

It's not some weird FM-related process that might look a bit similar (have you seen either side of 19kHz in an FM-stereo radio broadcast?).

It's not the shape of actual real measured human ear masking curves (they don't fall off to nothing).

It is the shape of a specific lossy codec's implementation of them.


Quote
Quote
Hence my experience tells me that sine wave tests are pretty much irrelevant in this context (they won't reveal the faults of the codec), therefore these graphs are pretty much irrelevant, and whether the codec under test is any good will have to be determined using another method altogether.

That would be among other things:

An appeal to your own personal authority on multiple points.
i.e., Ducking the question.

Hang on - you asked me "What are the results of applying what you know..." - and now you're upset because I did?

I didn't apply what I know "about masking and the variable sensitivity of the ear with frequency" to the curves, because the answer to that is obvious: the noise shown is probably inaudible, but you'd have to check some psychoacoustic model to be sure, and even then you'd be applying some assumptions (and hoping the model was correct - especially if the results were borderline).
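That "check some psychoacoustic model to be sure" step can at least be roughed out numerically. A common first-pass screen is to compare the noise spectrum against the threshold in quiet, for which Terhardt's well-known approximation is often used. A sketch, assuming the FS = 90 dB SPL calibration from the question above; the example noise levels are invented for illustration:

```python
import numpy as np

def threshold_in_quiet_db_spl(f_hz):
    """Terhardt's approximation of the absolute threshold of hearing (dB SPL)."""
    f = np.asarray(f_hz, dtype=float) / 1000.0   # frequency in kHz
    return (3.64 * f ** -0.8
            - 6.5 * np.exp(-0.6 * (f - 3.3) ** 2)
            + 1e-3 * f ** 4)

# Hypothetical codec noise floor, given as dBFS at a few spot frequencies,
# calibrated so that 0 dBFS corresponds to 90 dB SPL (per the question above).
FS_SPL = 90.0
freqs = np.array([100.0, 1000.0, 4000.0, 16000.0])
noise_dbfs = np.array([-88.0, -95.0, -100.0, -20.0])

noise_spl = noise_dbfs + FS_SPL
audible = noise_spl > threshold_in_quiet_db_spl(freqs)
for f, n, a in zip(freqs, noise_spl, audible):
    verdict = "may be audible" if a else "below threshold"
    print(f"{f:7.0f} Hz: noise {n:6.1f} dB SPL -> {verdict}")
```

Note that this only checks audibility in silence; deciding whether the noise is masked by the program material needs a full masking model, which is exactly why the result can only be "probably inaudible" without one.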

I did better than answer your narrow question - the results of applying what I know are this: it's a lossy codec, so this is the wrong test to determine whether it's transparent.

Quote
Quote
Anyway, googlebot keeps explaining the point very patiently and compactly, so I don't think I need to repeat it.

That would be among other things:

An appeal to an authority with totally unknown qualifications that happens to agree with you.
i.e., Ducking the question.

I'm not appealing to authority - I'm telling you I can't be bothered to re-write what he's already written well enough. Whether you believe what he says is entirely up to you - though I'm not sure either of us is claiming anything - merely exploring a possible way forward to answer the question (what are the objective measurements that guarantee transparency?) that may (or may not!) be at the heart of this thread.


Quote
Quote
Some numbers have emerged in this thread. That's a good start.

They were wrong, as I have shown by means of something that seems to be very rare around here - patient calculation based on actual orthodox knowledge. Actually, about a minute with an Excel spreadsheet.

Not those numbers (assuming you mean the noise numbers).


Really - you argue like someone who only wants to argue.

You seriously expect us to believe you're not aiming at number 8 on Schopenhauer's list?

Either you don't have the abilities to debate this properly, or you act like you don't (amongst other tactics) to stop the discussion ever really getting anywhere.

If it's senility, then I genuinely apologise - but I don't think it is, and I don't think you're stupid, so I'm not insulting you - I'm accusing you of intentionally crippling useful discussions by using the techniques you've honed in your battles with the audiofool brigade. Unless you can get over this and approach HA in a different manner, any discussion you start or join is at least painful, and at worst pointless, IMO.

I admit we get some audiofools in here too - but maybe for everyone's sanity you should leave those to the mods to deal with?

Cheers,
David.

[TOS #2, #5, #8, cluelessness and their responses]

Reply #68
My sympathy, 2bdecided.

I don't know whether the mods are tired of reading this thread completely and just delete the extremes, but the games Arnold plays here are really annoying. There have been numerous attempts at interesting, technical debate, all blown away by whole pages(!) of senseless nit-picking within just hours. Are bigger names excluded from TOS 2 enforcement? I have followed Hydrogenaudio anonymously for many years and I never had that impression. But maybe there just weren't that many precedents of (once?) bigger names completely plummeting into constant cock play.

On the other hand, Arnold is maybe just adhering to TOS 5 (or exploiting a loophole). It says:

Quote
If posting to an already existing thread, they must continue in the vein of discussion that the thread has already manifested;


So once he had conquered the thread by flooding it with large amounts of cock play (some pages in this thread are almost exclusively written by him), the thread was at some point required by TOS 5 to continue in the vein of a pissing match! 

I would try once more to catch up on the black box issue, but for this thread, I see less and less sense...

[TOS #2, #5, #8, cluelessness and their responses]

Reply #69
If it's senility, then I genuinely apologise - but I don't think it is, and I don't think you're stupid, so I'm not insulting you -


38. Become personal, insulting and rude as soon as you perceive that your opponent has the upper hand.

Quote
I'm accusing you of intentionally crippling useful discussions by using the techniques you've honed in your battles with the audiofool brigade.


The primary means by which I have done whatever I did to the audiophools was:

(1) Invent ABX

(2) Develop an intimate understanding of the audiophool problem on various Usenet forums.

(3) Address digital and PC paranoia with www.pcavtech.com

(4) Make ABX available to everyman with www.pcabx.com

Quote
Unless you can get over this, and approach HA in a different manner, any discussion you start or join is at least painful, and at worst pointless IMO.


The secondary audiophool problem is exactly with people who should know better, but are uncomfortable with the relevant facts behind very reasonable presentations such as Ethan's. 

[TOS #2, #5, #8, cluelessness and their responses]

Reply #70
@Arny,

God bless you for ABX.

But if you think 80,035 usenet postings (according to Google Groups) haven't turned you into a different person, then I think you're fooling yourself (but no one else).


Quote
The secondary audiophool problem is exactly with people who should know better, but are uncomfortable with the relevant facts behind very reasonable presentations such as Ethan's.

I'd be comfortable if we could nail these black box measurements + thresholds which guarantee transparency!

Cheers,
David.

[TOS #2, #5, #8, cluelessness and their responses]

Reply #71
@Arny,

God bless you for ABX.

But if you think 80,035 usenet postings (according to Google Groups) haven't turned you into a different person, then I think you're fooling yourself (but no one else).


It seems like I can pretty well count on you to:

(1) Understate the true facts by not doing enough research. I believe that the actual number of Usenet posts of mine is closer to half a million or more. I have posted under more than one name, just as a consequence of changing ISPs when they went out of business, etc.

(2) Turn said clumsy understatement based on shoddy research into an insult.

Of course I know that I was changed by and during all of that posting. Perhaps you think that imposing your personal limitations, namely lack of self-awareness, on other people in public is not insulting?

For one thing, I have been posting on Usenet for over 20 years, and that is a third of my life. If you don't intend to be changed over the passing of a third of your life, you are in for some harsh lessons! ;-)

Quote
Quote
The secondary audiophool problem is exactly with people who should know better, but are uncomfortable with the relevant facts behind very reasonable presentations such as Ethan's.

I'd be comfortable if we could nail these black box measurements + thresholds which guarantee transparency!


The 100 dB + 0.1 dB criterion is very conservative. The 100 dB number is even in an ITU document - BS.1116.
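As a sanity check on what those two numbers mean in linear terms: 100 dB below full scale is an amplitude ratio of 10^(-100/20) = 1e-5 (0.001% of full scale), and ±0.1 dB of response ripple is a gain deviation of about ±1.2%. A quick sketch of the arithmetic (the pass/fail framing here is my own illustration, not the BS.1116 test procedure):

```python
# Convert a level in dB to a linear amplitude ratio: ratio = 10^(dB/20).
def db_to_ratio(db):
    return 10 ** (db / 20.0)

# 100 dB down from full scale, as an amplitude ratio:
noise_floor = db_to_ratio(-100.0)       # 1e-5, i.e. 0.001% of full scale
# +0.1 dB of response ripple, as a gain ratio:
ripple_hi = db_to_ratio(0.1)            # about 1.0116, i.e. +1.16%

def meets_criteria(noise_db, ripple_db):
    """Conservative transparency screen: noise/distortion at least
    100 dB below full scale, response flat within +/-0.1 dB."""
    return noise_db <= -100.0 and abs(ripple_db) <= 0.1

print(meets_criteria(-104.0, 0.05))     # True  (hypothetical modern DAC)
print(meets_criteria(-63.0, 0.3))       # False (hypothetical lossy chain)
```

The point of framing it this way is how strict the criterion is: a device passing it keeps every linear and nonlinear error more than a factor of 10^5 below the signal's full-scale amplitude.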


[TOS #2, #5, #8, cluelessness and their responses]

Reply #73
I've split off some of the ad hominem back-and-forth into the Recycle Bin.

I'm not going to tolerate any of it in this thread from here on. Any post that even makes me think it could be taken personally will be binned with no consideration to its technical value.

C'mon guys, we can do better than this.

Thank you

[TOS #2, #5, #8, cluelessness and their responses]

Reply #74
And no, it's not reverb, which consists of acoustical reflections in a medium. This is resonance. Ethan, of all people I would expect you to understand the difference.

Are you being intentionally dense, or just intentionally insulting? What part of "I suppose you could call that reverb, but I call it ringing because it has a single dominant frequency" do you not understand?

--Ethan
I believe in Truth, Justice, and the Scientific Method