Topic: We will only ever get to hear half of the music? (Read 9475 times)

We will only ever get to hear half of the music?

I found some interesting materials from http://www.davidgriesinger.com/intermod.ppt

Quote
Human hearing is inherently non-linear


Hair cells fire when the ion channel controlled by the hair opens. This causes a burst of neural activity at the zero-crossings of the pressure waveform. This process is similar to a half-wave rectifier followed by a differentiator.

All the sounds we hear pass through this asymmetric non-linear system. We perceive the signals as undistorted only through the action of the filters in the basilar membrane. These filters are not particularly effective at low frequencies!

Hair cell firing




Hair cells act as a half-wave rectifier. We are unaware of the (negative) half of the waveform.

Result of the half-wave rectification

- The pitch of low frequencies is determined not through the basilar membrane filters, but through the time intervals between nerve firings.
- Consequently we cannot distinguish between real frequencies and subharmonics generated through the half-wave rectification process.

- This leads to the well-known phenomenon of “false bass”
Listening to two tones that are harmonically related will often produce the perception of the fundamental. For example, a tone at 50Hz will be heard when 100Hz and 150Hz are played together.

- Complex low frequency signals, such as a minor triad, are heard as an un-interpretable mix of fundamentals and harmonics.
Composers – outside of grunge rock – tend to avoid them!
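The "false bass" effect described in the quote is easy to reproduce numerically. The sketch below (plain NumPy; the sample rate and tone frequencies are just illustrative choices) half-wave rectifies a 100 Hz + 150 Hz pair and shows that a distortion product appears at the 50 Hz difference frequency, where the original signal has no energy at all:

```python
import numpy as np

fs = 48000                      # sample rate in Hz
t = np.arange(fs) / fs          # one second of time
# Two harmonically related tones -- no energy at 50 Hz
x = np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 150 * t)
rectified = np.maximum(x, 0.0)  # half-wave rectification, as the hair cells do

# 1 s of signal -> 1 Hz per FFT bin, so bin 50 is exactly 50 Hz
spec_in = np.abs(np.fft.rfft(x))
spec_out = np.abs(np.fft.rfft(rectified))

print("50 Hz before rectification:", spec_in[50])   # essentially zero
print("50 Hz after rectification: ", spec_out[50])  # a clear component
```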

I find that mind-boggling. Does that mean that we will only ever get to hear half of the music (so to speak)?

Can someone explain how the "basilar membrane filters" work?

What is even more mind-boggling is the way the nerve cells fire at the positive zero crossings of the waveform - that is a quantum event similar to taking a digital sample, more specifically one sample per cycle. Does that mean that human hearing is digital in nature? How do the nerve impulses get assembled into a waveform again?


Reply #1
Can someone explain how the "basilar membrane filters" work?

http://en.wikipedia.org/wiki/Critical_band Also take a look at ERB.
Some animation at the beginning of this talk: http://mitworld.mit.edu/video/426

Quote
What is even more mind-boggling is the way the nerve cells fire at the positive zero crossings of the waveform - that is a quantum event similar to taking a digital sample, more specifically one sample per cycle. Does that mean that human hearing is digital in nature? How do the nerve impulses get assembled into a waveform again?

Sort of, but human hearing does this "A/D conversion" at every place along the basilar membrane. So it kind of acts as a digital spectrum analyzer.
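For a rough feel for those filters, the Glasberg and Moore ERB formula (the "ERB" mentioned above) gives the approximate bandwidth of the auditory filter at each centre frequency. A minimal sketch - note that the filters are proportionally much wider at low frequencies, consistent with the original quote's claim that they are "not particularly effective" there:

```python
def erb_bandwidth(f_hz: float) -> float:
    """Equivalent rectangular bandwidth (Hz) of the auditory filter
    centred at f_hz, per the Glasberg & Moore (1990) fit."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

for f in (100, 500, 1000, 4000):
    bw = erb_bandwidth(f)
    print(f"{f:5d} Hz: ERB = {bw:6.1f} Hz ({100 * bw / f:4.1f}% of centre)")
```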

Chris
If I don't reply to your reply, it means I agree with you.


Reply #2
Thanks for the links. Intriguing stuff, a far cry from the audiophile bullshit. What got me interested in this subject was that I bought some sound equipment some months ago, and in the process I was bombarded with so much bullshit that I found I had to educate myself out of it.

It does look like the nerve firing is the last thing that happens in the ear, after which the nerve pulses are sent to the brain to be re-interpreted as the sounds we hear. Is the nerve cell firing a truly quantum event? How is the amplitude represented? Or does the nerve signal somehow trace the shape of the signal?




Reply #4
Is the nerve cell firing a truly quantum event? How is the amplitude represented? Or does the nerve signal somehow trace the shape of the signal?

It's a stochastic thing, IIRC. The higher the amplitude, the more of the thousands of nerve cells fire, i.e. the higher the likelihood of firing. At the threshold of hearing, the firing rate and pattern don't differ much from the "background" random/spontaneous firing (that's the ear's internal noise, if you will).

More in Brian Moore's book, reference 10 of the above Wiki page.
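As a toy illustration of that rate-coding idea (not a physiological model - the fibre count, spontaneous rate, and saturation curve are all made-up numbers), one can simulate a population of fibres whose firing probability rises with stimulus amplitude on top of a spontaneous background:

```python
import numpy as np

rng = np.random.default_rng(0)

def spike_count(amplitude, n_fibres=1000, p_spontaneous=0.05):
    """Toy model: each fibre fires spontaneously with a fixed probability,
    plus a stimulus-driven probability that grows (and saturates) with
    amplitude. Louder sound -> more fibres firing per cycle."""
    p_fire = p_spontaneous + (1 - p_spontaneous) * (1 - np.exp(-amplitude))
    return int(rng.binomial(n_fibres, min(p_fire, 1.0)))

for amp in (0.0, 0.1, 1.0, 10.0):
    print(f"amplitude {amp:5.1f}: {spike_count(amp)} spikes")
```

At zero amplitude the count is just the spontaneous "internal noise" floor; amplitude is carried by how far the count rises above it.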

Chris
If I don't reply to your reply, it means I agree with you.


Reply #5
Hair cells act as a half-wave rectifier. We are unaware of the (negative) half of the waveform.
Be very careful - while this is true of the actual motion of the basilar membrane, it's not true of the airborne sound pressure wave. If the actual sound wave were asymmetric, it would have a different harmonic structure, i.e. would contain different frequencies, which would give rise to different basilar membrane motion (i.e. oscillations at other points on the basilar membrane), which would be transduced by the hair cells at those points.

So if you cut off the bottom of a sine wave, you will hear the difference!
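A quick numerical check of this point: clipping the bottom off a pure sine changes its spectrum, so energy appears at harmonics where the original tone had none (the frequencies here are arbitrary illustrative choices):

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 100 * t)   # pure 100 Hz tone
clipped = np.maximum(x, -0.5)     # cut off the bottom of the wave

# 1 s of signal -> 1 Hz per bin; harmonics of 100 Hz now carry energy
spec = np.abs(np.fft.rfft(clipped))
for f in (100, 200, 300):
    print(f"{f} Hz component:", spec[f])
```

Those new harmonics excite other places on the basilar membrane, which is exactly why the asymmetry is audible.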

Cheers,
David.


Reply #6
Huh. And somebody says the ear isn't phase sensitive?


First, let's consider the meanings of the relevant words. In this discussion the two words are phase and polarity.

The half-wave rectification property of the ear would seem to be primarily about polarity, not phase.

Nobody who is well informed says that the ear isn't phase sensitive. As usual, it all depends. What sort of phase situation are you speaking of? The ear is very sensitive to certain situations involving phase and insensitive to others.

Our appreciation for the half-wave nature of the detection of vibrations along the basilar membrane needs to be informed by the fact that the basilar membrane tends to work like a filter bank. If we are trying to understand this situation, we need to consider what a filter bank does to asymmetrical waves.

Filter banks tend to turn asymmetrical waves into symmetrical waves. So, by the time we get around to stimulating these half-wave detectors, the wave doing the stimulation may look considerably different from what we sent down the ear canal.

If you want to try to predict how the ear responds, you have to consider the whole ear, not just one part of it.
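This symmetrization can also be checked numerically. In the sketch below an idealized brick-wall band filter stands in for a real auditory filter (which it is not - real auditory filters have gradual skirts), and skewness is used as a simple measure of waveform asymmetry:

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
# A strongly asymmetric wave: a half-wave rectified 100 Hz sine
x = np.maximum(np.sin(2 * np.pi * 100 * t), 0.0)

def skewness(y):
    """Third standardized moment: ~0 for a symmetric waveform."""
    y = y - y.mean()
    return (y ** 3).mean() / (y ** 2).mean() ** 1.5

# Crude "auditory filter": keep only the band around 100 Hz (1 Hz bins)
spec = np.fft.rfft(x)
band = np.zeros_like(spec)
band[80:121] = spec[80:121]
filtered = np.fft.irfft(band)

print("skewness in: ", skewness(x))         # clearly positive
print("skewness out:", skewness(filtered))  # essentially zero
```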


Reply #7
Huh. And somebody says the ear isn't phase sensitive?


First, let's consider the meanings of the relevant words. In this discussion the two words are phase and polarity.

The half-wave rectification property of the ear would seem to be primarily about polarity, not phase.

Nobody who is well informed says that the ear isn't phase sensitive. As usual, it all depends. What sort of phase situation are you speaking of? The ear is very sensitive to certain situations involving phase and insensitive to others.

Our appreciation for the half-wave nature of the detection of vibrations along the basilar membrane needs to be informed by the fact that the basilar membrane tends to work like a filter bank. If we are trying to understand this situation, we need to consider what a filter bank does to asymmetrical waves.

Filter banks tend to turn asymmetrical waves into symmetrical waves. So, by the time we get around to stimulating these half-wave detectors, the wave doing the stimulation may look considerably different from what we sent down the ear canal.

If you want to try to predict how the ear responds, you have to consider the whole ear, not just one part of it.


Indeed.

And since the filter responses change with level, it's even worse: you'd have to include each individual's level of hearing impairment, the absolute level, and a lot of other stuff.

But the point that Arny makes remains: the filters DO tend to reshape the waveform at any point on the basilar membrane. The place this would probably matter is for pitchy signals (i.e. signals consisting of a pulse train that is not heavily filtered, which look like vertical lines on a cochlear response plot); it would change the detection point for different frequencies differently.

I think the term "co-articulation" has to do with this, but that's a supposition consonant with the data, lacking precise evidence.
-----
J. D. (jj) Johnston


Reply #8
Huh. And somebody says the ear isn't phase sensitive?


I don't know anyone who says that at all. In fact the ear is phase sensitive and uses relative phase to determine direction, and does it very well. Just try changing the relative phase of a signal in only one speaker and see what happens. It just doesn't use phase to detect tonal quality; it uses relative frequency amplitude to do that.

Oh, and at the end of this terribly non-linear system lies the most powerful computer in all the known universe, equipped with extremely sophisticated sound-processing routines the like of which have not yet been equaled by our technology. In fact this computer uses these non-linearities for its own purposes, namely the survival of the organism of which it is a subsystem.
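The use of relative timing for direction finding is usually quantified as the interaural time difference (ITD). A common back-of-the-envelope estimate is Woodworth's spherical-head formula; the head radius below is a typical assumed value, not a measurement:

```python
import math

def itd_woodworth(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Woodworth's spherical-head estimate of the interaural time
    difference (seconds) for a distant source at the given azimuth
    (0 deg = straight ahead, 90 deg = directly to one side)."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

for az in (0, 30, 60, 90):
    print(f"{az:2d} deg: ITD = {itd_woodworth(az) * 1e6:5.0f} microseconds")
```

The maximum of roughly 650-700 microseconds at 90 degrees matches the commonly quoted largest ITD for a human head.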


Ed Seedhouse
VA7SDH


Reply #9
How does the basilar membrane respond to complex waveforms?

For a while I thought that the basilar membrane is a resonant structure that acts like a spectrum analyzer - but apparently it is not.

"Von Bekesy convincingly demonstrated that sounds set up a traveling wave motion along the basilar membrane and this traveling wave motion is the basis for the frequency selectivity and not resonance of the basilar membrane as proposed by ...."

So how much of the spectrum is lost in the process?


Reply #10
Huh. And somebody says the ear isn't phase sensitive?


First, lets consider the meanings of the relevant words. In this discussion the two words are phase and polarity.

I think 'phase' is not a useful term for non-periodic signals (e.g. music). 'Phase inversion' is also a misnomer. For periodic waveforms we can talk about a 10-degree phase shift, a 45-degree phase shift, etc., but 'phase inversion' is a meaningless term.

It is polarity inversion that concerns us here.


Reply #11
How does the basilar membrane respond to complex waveforms?
...

See the animation I linked to above. From Wikipedia: "Because of the structure of the cochlea and the basilar membrane, different frequencies of sound cause the maximum amplitudes of the waves to occur at different places on the basilar membrane along the coil of the cochlea.[1]".
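That place coding can be made quantitative with Greenwood's place-frequency function; the constants below are the commonly quoted human-cochlea values from Greenwood (1990):

```python
def greenwood_frequency(x):
    """Characteristic frequency (Hz) at relative position x along the
    basilar membrane (x = 0 at the apex, x = 1 at the base), using
    Greenwood's (1990) human constants."""
    A, a, k = 165.4, 2.1, 0.88
    return A * (10 ** (a * x) - k)

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"x = {x:.2f}: {greenwood_frequency(x):8.1f} Hz")
```

The endpoints land near 20 Hz and 20 kHz, spanning the usual quoted range of human hearing, with low frequencies mapped to the apex.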

Chris
If I don't reply to your reply, it means I agree with you.


Reply #12
How does the basilar membrane respond to complex waveforms?

For a while I thought that the basilar membrane is a resonant structure that acts like a spectrum analyzer - but apparently it is not.

"Von Bekesy convincingly demonstrated that sounds set up a traveling wave motion along the basilar membrane and this traveling wave motion is the basis for the frequency selectivity and not resonance of the basilar membrane as proposed by ...."

So how much of the spectrum is lost in the process?


You're seeing an age-old argument.

The ear and basilar membrane are a spectrum analyzer, no matter how you look at it. vonBekesy's "solution" is simply another way of making the ear into a spectrum analyzer.

The reality? vonBekesy and Zwislocki are both right, if you want my opinion.
-----
J. D. (jj) Johnston


Reply #13
You're seeing an age-old argument.

The ear and basilar membrane are a spectrum analyzer, no matter how you look at it. vonBekesy's "solution" is simply another way of making the ear into a spectrum analyzer.

The reality? vonBekesy and Zwislocki are both right, if you want my opinion.

Thanks. That is very clarifying. It could be that the sound waves travel across the basilar membrane (vonBekesy) and set off multi-mode vibrations along the membrane (Zwislocki).

It would be easy to simulate this using finite element analysis. Are you aware of any that has been done?

 


Reply #14
You're seeing an age-old argument.

The ear and basilar membrane are a spectrum analyzer, no matter how you look at it. vonBekesy's "solution" is simply another way of making the ear into a spectrum analyzer.

The reality? vonBekesy and Zwislocki are both right, if you want my opinion.

Thanks. That is very clarifying. It could be that the sound waves travel across the basilar membrane (vonBekesy) and set off multi-mode vibrations along the membrane (Zwislocki).

It would be easy to simulate this using finite element analysis. Are you aware of any that has been done?



Oh my, yes, but let's say that agreement is hard to come by.
-----
J. D. (jj) Johnston