## Topic: filtering, dither, and noiseshaping

• pdq
filtering, dither, and noiseshaping
##### Reply #75 – 25 March, 2008, 09:04:50 AM

I do not understand your use of the phrase 'by definition'.

I think I can answer this one. A series of samples at 44.1 kHz cannot be used to represent any frequency above 22.05 kHz because there will be a perfectly valid frequency below 22.05 kHz which also goes through those exact same samples. Therefore 'by definition' all of the resulting frequencies are below the Nyquist limit.
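pdq's argument can be checked numerically. In the sketch below (Python with NumPy assumed; the 30 kHz tone is an arbitrary choice), a cosine above the Nyquist frequency passes through exactly the same 44.1 kHz sample points as its alias below Nyquist:

```python
import numpy as np

fs = 44100.0
n = np.arange(64)          # 64 sample instants
f_hi = 30000.0             # above Nyquist (22050 Hz)
f_lo = fs - f_hi           # its alias at 14100 Hz, below Nyquist

hi = np.cos(2 * np.pi * f_hi * n / fs)
lo = np.cos(2 * np.pi * f_lo * n / fs)

# The two tones go through exactly the same sample points.
print(np.allclose(hi, lo))   # True
```

Given only the samples, there is no way to tell the 30 kHz tone from the 14.1 kHz one, which is why the reconstruction is taken 'by definition' to be the one below Nyquist.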
Thanks, that's a lot clearer than what I wrote.

Thank you cabbagerat. That is high praise indeed coming from you.

• MLXXX
filtering, dither, and noiseshaping
##### Reply #76 – 25 March, 2008, 10:25:15 AM
A series of samples at 44.1 kHz cannot be used to represent any frequency above 22.05 kHz because there will be a perfectly valid frequency below 22.05 kHz which also goes through those exact same samples. Therefore 'by definition' all of the resulting frequencies are below the Nyquist limit.

Yes, this is true of a continuous sine wave.  And it is all part of sampling theory since Nyquist.  It is why filters are used immediately prior to an analogue to digital conversion process.

I am not sure though that this is the analysis that is relevant to white noise (or its variants), as white noise consists of random events rather than natural waveforms such as a vibrating string, or the sound of a pipe organ. The only way to asynchronously sample white noise created in the 44.1KHz format would be to sample it at over 88.2KHz.  Alternatively it can be captured by phase locking the sample rate, i.e. sample at 44.1KHz in phase with the 44.1KHz creation.  This may sound like double Dutch.  I am saying that transferring a digital stream undisturbed is a special case of sampling it.  Some people may understand what I am trying to say here, particularly after reading the next two paragraphs.

In video terms, this is like aligning a 1920x1080 pixel video camera in front of a 1920x1080 test pattern such that the test pattern pixels line up in a perfect one to one correspondence with the pixels in the camera. [Actually practical high performance video cameras have optical filters to avoid optical aliasing, but if you removed the optical filter you could get a perfect 1920x1080 result.]  This is the exception to Nyquist: if a signal varies at the sampling rate, and the sampling coincides with that variation, a perfect sampling can be done:  synchronous sampling.

Putting this in more familiar terms, if you created a square wave at 44.1KHz, and you modulated the height of the top step and independently the bottom step of each of the square wave cycles to convey data, you could perfectly recover that data by sampling the square wave at 44.1KHz locked in phase to the middle of each step: synchronous sampling.  If you could not do synchronous sampling you would need to sample at over 88.2KHz to recover the encoded data [actually for an encoded wave of such complexity and precision you might need quite a bit more than 88.2KHz of conventional asynchronous sampling not optimised to the characteristics of the waveform].

The frequency analysis algorithms built into cool edit pro etc are not designed to display frequencies above fs/2.  The concept of frequency of a wave created by a random number generator is a difficult concept.  A random wave really has no frequency.  Any readout of frequency is as a result of chance.  For short periods (of sufficient duration to be measured) the random wave behaves in a similar manner to a continuous wave of a particular frequency.  An extremely quickly changing waveform cannot be recognized by the frequency analysis algorithm.  So the example in my post above of +1, -1, +1, -1 would be ignored by the analysis algorithm, as it has no normal meaning in a 44.1KHz asynchronous sampling environment, even though if listened to by a bat would be at 44.1KHz!

Anyway I'll follow up on the  software cabbagerat has referred to and see what happens when I record the dither produced by the software.  Cheers.

• cabbagerat
filtering, dither, and noiseshaping
##### Reply #77 – 25 March, 2008, 11:12:29 AM

A series of samples at 44.1 kHz cannot be used to represent any frequency above 22.05 kHz because there will be a perfectly valid frequency below 22.05 kHz which also goes through those exact same samples. Therefore 'by definition' all of the resulting frequencies are below the Nyquist limit.

Yes, this is true of a continuous sine wave.  And it is all part of sampling theory since Nyquist.  It is why filters are used immediately prior to an analogue to digital conversion process.
And true of all signals (ok, provided they meet a variety of conditions, none of which are important here).
I am not sure though that this is the analysis that is relevant to white noise (or its variants), as white noise consists of random events rather than natural waveforms created by vibrating objects. The only way to asynchronously sample white noise created in the 44.1KHz format would be to sample it at over 88.2KHz.  Alternatively it can be captured by phase locking the sample rate, i.e. sample at 44.1KHz in phase with the 44.1KHz creation.  This may sound like double Dutch.  I am saying that transferring a digital stream undisturbed is a special case of sampling it.  Some people may understand what I am trying to say here, particularly after reading the next two paragraphs.
Let me address this before the next two paragraphs. The sampling theorem, as originated by Shannon, Kotelnikov, etc., refers very specifically to a particular interpolation process. The theorem states that samples can be taken of a signal and interpolated with a specific process to produce the original signal if and only if the original signal only contained frequencies below fs/2 Hz. That particular interpolation process is widely called sinc interpolation.

You are not seeking to produce a noise signal with frequencies in (0, 44100), you are seeking to produce samples of a noise process bandlimited to (0, 22050Hz). That noise process is bandlimited, so sampling at 44100Hz is just fine.
In video terms, this is like aligning a 1920x1080 pixel video camera in front of a 1920x1080 test pattern such that the test pattern pixels line up in a perfect one to one correspondence with the pixels in the camera. [Actually practical high performance video cameras have optical filters to avoid optical aliasing, but if you removed the optical filter you could get a perfect 1920x1080 result.]  This is the exception to Nyquist: if a signal varies at the sampling rate, and the sampling coincides with that variation, a perfect sampling can be done:  synchronous sampling.
Yes, and no. If you do this process, you will certainly get a perfect photo of the original card. There are some important things to remember here:

1) The sampling process you are doing (using an imaging sensor) averages out the signal (image) over the sample period (pixel). In audio, the signal is sampled at a single instant - the signal between these instants is discarded.

2) The interpolation process is different. Viewing the image on a screen does a sort of zeroth-order hold on the signal - the value is held over the output sample period. In audio (and printers) the signal is interpolated between sampling instants. In audio, this is done with Sinc interpolation (or a low-pass filter, which is mathematically equivalent).

So this depends very strongly on your definition of sampling. The one most DSP uses, and the one the DFT depends on, requires that the samples are related to the original signal by sinc interpolation (or an ideal low pass filter).
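The interpolation process described above can be sketched directly from the Whittaker-Shannon formula. This is a minimal, finite-length illustration in Python/NumPy (the function name and parameters are mine, not from any library):

```python
import numpy as np

def sinc_interp(samples, t, fs):
    """Whittaker-Shannon (sinc) reconstruction of `samples`, taken at rate
    `fs`, evaluated at arbitrary times `t` (in seconds)."""
    n = np.arange(len(samples))
    # x(t) = sum_n x[n] * sinc(fs*t - n); np.sinc(x) is sin(pi*x)/(pi*x)
    return np.dot(np.sinc(fs * np.asarray(t)[:, None] - n[None, :]), samples)

fs = 8000.0
f = 1000.0                                  # a tone well below fs/2
n = np.arange(400)
samples = np.sin(2 * np.pi * f * n / fs)

# At the sampling instants the formula returns the samples exactly,
# because sinc(k) is 1 at k = 0 and 0 at every other integer.
at_instants = sinc_interp(samples, n / fs, fs)

# Between the instants it fills in the original sine, up to truncation
# error from using a finite number of samples.
t_mid = (np.arange(100, 300) + 0.5) / fs
between = sinc_interp(samples, t_mid, fs)
```

At the sample instants the sum collapses to the samples themselves; between them it produces the unique bandlimited signal through those samples, which is the sense in which samples "mean" a continuous waveform.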

Putting this in more familiar terms, if you created a square wave at 44.1KHz, and you modulated the height of the top step and independently the bottom step of each of the square wave cycles to convey data, you could perfectly recover that data by sampling the square wave at 44.1KHz locked in phase to the middle of each step: synchronous sampling.  If you could not do synchronous sampling you would need to sample at over 88.2KHz to recover the encoded data [actually for an encoded wave of such complexity and precision you might need quite a bit more than 88.2KHz of conventional asynchronous sampling].
Yes, you can do this. No, the DFT won't do anything sensible with the signal so produced - neither would conventional upsampling procedures, conventional digital filters, or conventional DACs.

The frequency analysis algorithms built into cool edit pro etc are not designed to display frequencies above fs/2.
Because the sets of samples that cool edit deals with by definition contain no frequencies above fs/2. Cool edit makes the assumption that the samples were produced from a lowpass signal - not a bandpass signal.

The concept of frequency of a wave created by a random number generator is a difficult concept.
Yes, but it is extremely well defined for digital signals: the Wiener-Khinchin theorem (or Einstein-Wiener-Khinchin, depending on the book) relates the power spectrum to the autocorrelation function - a simple function of the original samples. It's difficult conceptually, but certainly not hazy mathematically.
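The discrete, circular form of the Wiener-Khinchin relation is easy to verify numerically: the DFT of a signal's circular autocorrelation equals its power spectrum |X[k]|^2. A sketch in Python/NumPy (the signal length and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(256)
N = len(x)

# Circular autocorrelation r[k] = sum_n x[n] * x[(n+k) mod N]
r = np.array([np.dot(x, np.roll(x, -k)) for k in range(N)])

# Wiener-Khinchin (discrete, circular form): the DFT of the
# autocorrelation equals the power spectrum |X[k]|^2.
psd_from_autocorr = np.fft.fft(r).real
psd_direct = np.abs(np.fft.fft(x)) ** 2

print(np.allclose(psd_from_autocorr, psd_direct))   # True
```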

I know it can be a difficult concept to grasp - but time varying signals, random signals (provided they are time limited), and all sorts of other non-sinusoidal signals fit just perfectly into this scheme.
A random wave really has no frequency.  Any readout of frequency is as a result of chance.

Random waves have very well defined power spectral densities (PSDs). Talking about their frequency isn't any more interesting than talking about the frequency of the Motorhead song "Ace of Spades". Talking about the power spectral density of both of these things is interesting, however.
For short periods (of sufficient duration to be measured) the random wave behaves in a similar manner to a continuous wave of a particular frequency.
Random discrete-time waves (with N samples) behave like the sum of N sine waves equally spaced in frequency from  -fs/2 to fs/2 Hz, with randomly scrambled phase, weighted by the power spectrum of the chosen noise signal. This much we know from the definition of discrete time signals and the discrete Fourier transform. Sure, you might get lucky and find ten consecutive points that you can fit a single sine to - but that doesn't tell you much about the underlying signal.

An extremely quickly changing waveform cannot be recognized by the frequency analysis algorithm.  So my example above of +1, -1, +1, -1 would be ignored by the analysis algorithm, as it has no normal meaning in a 44.1KHz asynchronous sampling environment, even though if listened to by a bat would be at 44.1KHz!
By frequency analysis algorithm, do you mean the discrete Fourier transform? Or do you mean the short-time Fourier transform (like a sonogram)? In both cases this signal is an edge case. Its discrete Fourier transform (in the commonly used orthonormal form) will yield [0, 0, 2, 0]. If you fed this signal (or a longer extension of the pattern) to an ideal DAC, you would get a 22050Hz sine wave at the output. This is simply because these samples correspond to the samples of a 22050Hz sine wave with a particular phase. But please don't get fixated on the critically sampled case.
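For the concrete sequence under discussion, NumPy can confirm the numbers (Python with NumPy assumed). Note that [0, 0, 2, 0] corresponds to the orthonormal (1/sqrt(N)) DFT scaling; the unnormalised transform gives [0, 0, 4, 0]:

```python
import numpy as np

x = np.array([1.0, -1.0, 1.0, -1.0])

# Unnormalised DFT: all the energy lands in bin 2, i.e. at fs/2.
X = np.fft.fft(x)                       # approximately [0, 0, 4, 0]

# Orthonormal (1/sqrt(N)) scaling gives the [0, 0, 2, 0] quoted above.
X_ortho = np.fft.fft(x, norm='ortho')   # approximately [0, 0, 2, 0]

print(X.real, X_ortho.real)
```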

• 2Bdecided
• Developer
filtering, dither, and noiseshaping
##### Reply #78 – 25 March, 2008, 11:20:21 AM
MLXXX,

Click "FAQ" (top right)

Click "SACD, DVD-A, Vinyl, and Cassette"

Come back when you've digested them.

Cheers,
David.

• MLXXX
filtering, dither, and noiseshaping
##### Reply #79 – 25 March, 2008, 12:16:22 PM
David,
I'll take that to mean you have tired of my posts. That is fine.  You are free to ignore them or skim them quickly and pass on.  I think though that you may be interpreting my queries as challenges to conventional theory.  They are not.  I am simply trying to understand dither.  If dither is as good as it appears to be, there seems little reason to use 24 bits in a released version of audio unless the audio source is at a very high SNR or the final mix is at significantly less than 0dB.  This is an important issue in my home forum (DTV Australia) and home cinema forums such as AVS, as many people are clamouring for 24-bit Blu-ray audio whereas it appears all they need is well dithered 16-bit audio.

An extremely quickly changing waveform cannot be recognized by the frequency analysis algorithm.  So my example above of +1, -1, +1, -1 would be ignored by the analysis algorithm, as it has no normal meaning in a 44.1KHz asynchronous sampling environment, even though if listened to by a bat would be at 44.1KHz!
By frequency analysis algorithm, do you mean the discrete Fourier transform? Or do you mean the short-time Fourier transform (like a sonogram)? In both cases this signal is an edge case. Its discrete Fourier transform (in the commonly used orthonormal form) will yield [0, 0, 2, 0]. If you fed this signal (or a longer extension of the pattern) to an ideal DAC, you would get a 22050Hz sine wave at the output. This is simply because these samples correspond to the samples of a 22050Hz sine wave with a particular phase. But please don't get fixated on the critically sampled case.

Cabbagerat, many thanks for your various explanations in your immediately preceding post.  I think I understand them, at least broadly.

Concerning the last topic you covered, which I have reproduced above, an ideal DAC will filter the output to create an interpolated wave.  If fed a +1,-1,+1,-1, ...  digital signal, the interpolation will I presume yield zero or close to zero.  The 44.1 kHz signal would at the very least be muffled.

I note you say 'Random waves have very well defined power spectral densities (PSDs).'.

My understanding is that digital dither is not constrained before being added to the signal to be dithered.

I am not sure how this is accounted for in a power spectral density graph of a digitally encoded waveform.  Presumably a short burst of +1, -1, +1, -1 does not appear on the graph but is ignored.  The graph presumably ceases at fs/2.

There is a certain limitation by definition here.  If we define digital sampling to represent waveforms from 0Hz to fs/2 then by definition that is all we have.  We cannot within that scheme have a meaning for a rapidly changing stream encoded as +1, -1, +1, -1.  Yet that is a possible output of a white noise generator over a short period of successive samples; unless we take steps to filter it out.

Anyway I'll do some of the reading 2Bdecided suggests (though I suspect doing so will not  throw much light on specific questions I have raised in my last few posts).  Cheers.

• cabbagerat
filtering, dither, and noiseshaping
##### Reply #80 – 25 March, 2008, 12:41:27 PM
I am simply trying to understand dither.  If dither is as good as it appears to be, there seems little reason to use 24 bits in a released version of audio unless the audio source is at a very high SNR or recorded at significantly less than 0dB.  This is an important issue in my home forum (DTV Australia) and home cinema forums such as AVS, as many people are clamouring for 24-bit Blu-ray audio whereas it appears all they need is well dithered 16-bit audio.
I think it's really good that you are trying to understand dither. People are often too quick to jump on the "16bit sucks" bandwagon. Unfortunately, at the moment, your understanding of dither seems to be blocked by a misunderstanding of some of the concepts of discrete-time signal processing.

Cabbagerat, many thanks for your various explanations in your immediately preceeding post.  I think I understand them, at least broadly.
That's good - but don't take my word for all of this stuff. There is good information on the topics I have discussed available freely on the internet, and in books. I would recommend looking at some of the free resources available.

Concerning the last topic you covered, which I have reproduced above, an ideal DAC will filter the output to create an interpolated wave.  If fed a +1,-1,+1,-1, ...  digital signal, the interpolation will I presume yield zero or close to zero.  The 44.1 kHz signal would at the very least be muffled.

No, an ideal DAC will reproduce a 22.05kHz sine wave with a pi/2 phase offset. Seriously, though - this is an extremely borderline case, and isn't closely related to the problem of the spectra of noise signals.

I note you say 'Random waves have very well defined power spectral densities (PSDs).'.

My understanding is that digital dither is not constrained before being added to the signal to be dithered.

I am not sure how this is accounted for in a power spectral density graph of a digitally encoded waveform.  Presumably a short burst of +!, -1,+!-1 does not appear on the graph but is ignored.  The graph presumably ceases at fs/2.
Of course it isn't ignored. The power spectral density (defined for all finite-length signals that satisfy the Dirichlet criteria) is the Fourier transform of the autocorrelation function. For white noise, the autocorrelation function approaches Delta[n, 0] (where Delta is the Kronecker delta) as the signal length approaches infinity. The PSD therefore approaches F[w] = 1 as the signal length approaches infinity (this is known as the localization property of the discrete Fourier transform).
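The claim that white noise's autocorrelation approaches the Kronecker delta can be illustrated with a quick estimate (Python/NumPy; the length and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1 << 16
x = rng.standard_normal(N)

# Normalised autocorrelation estimate at lags 0..4
r = np.array([np.dot(x[:N - k], x[k:]) / N for k in range(5)])
r /= r[0]

# Lag 0 is exactly 1; the other lags shrink toward 0 (roughly as
# 1/sqrt(N)), approaching the Kronecker delta as N grows.
print(r)
```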

When you say "digital dither is not constrained" you are missing the fact that it is sampled - therefore it is bandlimited by definition. Apply a digital low-pass filter with a cutoff of fs/2 if you want, but don't be surprised when you get back exactly the same samples you put into the filter.

There is a certain limitation by definition here.  If we define digital sampling to represent waveforms from 0Hz to fs/2 then by definition that is all we have.  We cannot within that scheme have a meaning for a rapidly changing stream encoded as +1,-1,+1,_1.  Yet that is a possible output of a white noise generator over a short period of successive samples; unless we take steps to filter it out.
Yes, we have a meaning for those samples - just as we have a meaning for any set of samples. The meaning is defined by the interpolation formula. By definition, you can find a bandlimited function which, when sampled, would produce *any* given (finite) sample values.

Anyway I'll do some of the reading 2Bdecided suggests (though I suspect doing so will not  throw much light on specific questions I have raised in my last few posts).  Cheers.
I suspect it would throw a lot of light on your questions. What you are asking seems to come from a fundamental misunderstanding of the principles of digital signal processing. I am happy to answer the questions that you do have - but as 2Bdecided suggested some reading might get you to the answer quicker than I can.

• MLXXX
filtering, dither, and noiseshaping
##### Reply #81 – 25 March, 2008, 12:58:43 PM
When you say "digital dither is not constrained" you are missing the fact that it is sampled - therefore it is bandlimited by definition. Apply a digital low-pass filter with a cutoff of fs/2 if you want, but don't be surprised when you get back exactly the same samples you put into the filter.

That presumably would be because the digital filter would not recognise the +1,-1,+1,-1 encoding as representing a 44.1 kHz signal.  You have suggested that +1,-1,+1,-1 encoding would be interpreted as a 22.05 kHz signal.  I note that a steady low amplitude 22.05 kHz signal might be encoded as +1,0,-1,0 if in phase and -1,0,+1,0 if 180 degrees out of phase.  It still seems to me we have a limitation by definition.  Anyway I must log off and get some sleep. - Cheers, MLXXX

• greynol
• Global Moderator
filtering, dither, and noiseshaping
##### Reply #82 – 25 March, 2008, 01:01:56 PM
You have suggested that +1,-1,+1,-1 encoding would be interpreted as a 22.05 kHz signal.

Because with a sample rate of 44.1 kHz, alternating +1, -1 is a 22.05 kHz signal!
Is 24-bit/192kHz good enough for your lo-fi vinyl, or do you need 32/384?

• MLXXX
filtering, dither, and noiseshaping
##### Reply #83 – 25 March, 2008, 01:09:46 PM
Because with a sample rate of 44.1 kHz, alternating +1, -1 is a 22.05 kHz signal!

Is it?  I'd have thought that a 44.1 kHz signal would give samples of +1,-1 etc if synchronously sampled in phase at 44.1 kHz; and I'd have thought a 22.05 kHz signal would give samples of +1,0,-1,0 etc (if sampling was kept in phase with peaks and zero crossings of the waveform).

• greynol
• Global Moderator
filtering, dither, and noiseshaping
##### Reply #84 – 25 March, 2008, 01:12:02 PM
That would be an 11.025 kHz signal!

Sounds like you need to do a little more research into discrete-time sampling.
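Both sides of this exchange can be settled numerically (Python with NumPy assumed): sampling a 22.05 kHz cosine at its peaks gives +1, -1, +1, -1, while an 11.025 kHz cosine sampled the same way gives +1, 0, -1, 0.

```python
import numpy as np

fs = 44100.0
n = np.arange(8)

# A 22.05 kHz cosine sampled at 44.1 kHz, phase-aligned to its peaks:
x_2205 = np.cos(2 * np.pi * 22050.0 * n / fs)   # +1, -1, +1, -1, ...

# An 11.025 kHz cosine sampled the same way:
x_1102 = np.cos(2 * np.pi * 11025.0 * n / fs)   # +1, 0, -1, 0, ...

print(np.round(x_2205, 12))
print(np.round(x_1102, 12))
```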

• MLXXX
filtering, dither, and noiseshaping
##### Reply #85 – 25 March, 2008, 01:33:08 PM
Yes greynol, it appears I typed far too late into the night and overlooked a very basic issue. Thanks.

• 2Bdecided
• Developer
filtering, dither, and noiseshaping
##### Reply #86 – 26 March, 2008, 09:17:40 AM
David,
I'll take that to mean you have tired of my posts.
No, I'm just watching you grope around in the dark, and I'm trying to hand you a torch. Trust me, it'll be more use to you long term than having five HA members who know the way lead you around in the dark!

Cheers,
David.

• MLXXX
filtering, dither, and noiseshaping
##### Reply #87 – 26 March, 2008, 11:00:27 AM
... as far as I can see you should be able to do something like this in GNU Octave (freely available on Windows, Linux and Mac) or MATLAB to get what you want:
```
seconds = 1;
rate = 44100;
sz = seconds*rate;
x = (rand(1,sz) + rand(1,sz) - 1)/32768;
wavwrite('out.wav', x, rate, 24);
```

I downloaded Octave Forge Windows and that version of Octave would only support a maximum of 16 bit encoding for the wavwrite command.  So I had to modify the code a little.  The following gave me a 1 second sample at 44.1KHz 16 bits:

```
seconds = 1;
rate = 44100;
sz = seconds*rate;
x = (rand(sz,1) + rand(sz,1) - 1)/128;
wavwrite('out.wav', x, rate, 16)
```
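For anyone without Octave, here is a rough Python/NumPy equivalent of the same TPDF-dither generator, using only the standard-library wave module for output. The +/- 1 LSB scaling at 16 bits is my reading of the /32768 factor, so treat it as an assumption:

```python
import numpy as np
import wave

seconds, rate = 1, 44100
sz = seconds * rate

# The sum of two independent uniforms on [0, 1) minus 1 is triangular
# (TPDF) noise on (-1, 1); dividing by 32768 scales it to about
# +/- 1 LSB of a 16-bit full-scale signal.
rng = np.random.default_rng()
x = (rng.random(sz) + rng.random(sz) - 1.0) / 32768.0

# Write a 16-bit mono WAV with the standard-library wave module.
pcm = np.round(x * 32767.0).astype('<i2')
with wave.open('out.wav', 'wb') as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(rate)
    w.writeframes(pcm.tobytes())
```

At this scale the quantised samples only ever take the values -1, 0, or +1, which is the TPDF dither signal itself at the 16-bit LSB level.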

This is fascinating stuff for me, and I'll have to look into it some more. It's years since I've played around with this type of high level programming language.

No, I'm just watching you grope around in the dark, and I'm trying to hand you a torch. Trust me, it'll be more use to you long term, than having five HA members who know the way, lead you around in the dark!

Nicely put.

I am normally more a 'work it out for myself' individual.  However internet forums can be quite tempting for someone with a specific query.  [I am getting close to the point where I will have to report back in a fairly bold manner that 24-bit distribution media have very little practical advantage over well dithered 16-bit distribution media, even when listened to with a high quality home cinema setup.]

• 2Bdecided
• Developer
filtering, dither, and noiseshaping
##### Reply #88 – 26 March, 2008, 11:29:08 AM
Contrary to a lot of what you'll read on HA, I think more than 16-bits could be useful in a home cinema environment.

If you really want to maintain the transient peaks of the waveforms (if only the music industry did!), and you want a huge amount of subsonic bass (ignoring the dedicated channel available for that for one moment), and you want dialogue at a reasonable level, and you want the noise floor below that of a dedicated listening room at all frequencies, and you don't want to have too much noise shaping in there because there are several stages of digital processing (lossy coding, level matching, delay, speaker EQ, room EQ etc), and you don't really trust all the equipment to be bit perfect, and you want the option of applying DRC to the output or not as you choose, and you want to match source levels in the digital domain without compromising headroom, then you probably want to start with more than 16-bits, and keep more than 16-bits throughout. (You also need a pretty amazing amp and speakers, not to mention very distant neighbours!)

So 16-bits are enough, but it's conceivable that you could throw together a situation where they're not.

Given the cost, which with today's disc media and hardware is so small as to be irrelevant, there's no reason to use "only" 16-bits on new disc media, even though in most situations 16-bits is more than enough.

• pdq
filtering, dither, and noiseshaping
##### Reply #89 – 26 March, 2008, 11:37:57 AM
When applying dither to 16-bit signals, is there an advantage to using 48 kHz vs. 44.1 kHz because it is easier to keep the added noise inaudible?

• krabapple
filtering, dither, and noiseshaping
##### Reply #90 – 26 March, 2008, 12:27:58 PM
Contrary to a lot of what you'll read on HA, I think more than 16-bits could be useful in a home cinema environment.

If you really want to maintain the transient peaks of the waveforms (if only the music industry did!), and you want a huge amount of subsonic bass (ignoring the dedicated channel available for that for one moment), and you want dialogue at a reasonable level, and you want the noise floor below that of a dedicated listening room at all frequencies, and you don't want to have too much noise shaping in there because there are several stages of digital processing (lossy coding, level matching, delay, speaker EQ, room EQ etc), and you don't really trust all the equipment to be bit perfect, and you want the option of applying DRC to the output or not as you choose, and you want to match source levels in the digital domain without compromising headroom, then you probably want to start with more than 16-bits, and keep more than 16-bits throughout. (You also need a pretty amazing amp and speakers, not to mention very distant neighbours!)

My understanding is that most modern AVRs operate in the 24-bit domain anyway, in anything but 'pure direct' modes. Could be wrong about that, and that's leaving aside that it's not 'full' 24-bit in practice.

But I have to wonder how many listening rooms in practice have a noise floor lower than that offered by dithered, noise-shaped Redbook audio.  Not to mention the noise from the recording itself (if from an analog source).

• 2Bdecided
• Developer
filtering, dither, and noiseshaping
##### Reply #91 – 26 March, 2008, 01:33:33 PM
But I have to wonder how many listening rooms in practice have a noise floor lower than that offered by dithered, noise-shaped Redbook audio.
Almost none, which is why it took me so many "and"s to try to justify it.

Cheers,
David.

When applying dither to 16-bit signals, is there an advantage to using 48 kHz vs. 44.1 kHz because it is easier to keep the added noise inaudible?
There's a double advantage: the bandwidth is slightly wider which means the dither noise/Hz is fractionally lower - and, far more significantly, as you suggest a far greater chunk of the available spectrum is basically inaudible, so a great place to push noise into.

Cheers,
David.

• MLXXX
filtering, dither, and noiseshaping
##### Reply #92 – 26 March, 2008, 06:35:55 PM
Contrary to a lot of what you'll read on HA, I think more than 16-bits could be useful in a home cinema environment.

If you really want to maintain the transient peaks of the waveforms (if only the music industry did!), and you want a huge amount of subsonic bass (ignoring the dedicated channel available for that for one moment), and you want dialogue at a reasonable level, and you want the noise floor below that of a dedicated listening room at all frequencies, and you don't want to have too much noise shaping in there because there are several stages of digital processing (lossy coding, level matching, delay, speaker EQ, room EQ etc), and you don't really trust all the equipment to be bit perfect, and you want the option of applying DRC to the output or not as you choose, and you want to match source levels in the digital domain without compromising headroom, then you probably want to start with more than 16-bits, and keep more than 16-bits throughout. (You also need a pretty amazing amp and speakers, not to mention very distant neighbours!)

So 16-bits are enough, but it's conceivable that you could throw together a situation where they're not.

Given the cost, which with today's disc media and hardware is so small as to be irrelevant, there's no reason to use "only" 16-bits on new disc media, even though in most situations 16-bits is more than enough.

An extremely useful response for my particular purposes.

Re the last para, with High Definition Media (Blu-ray, and the no longer continuing HD-DVD format) it has been common to include audio tracks in several languages.  Particularly if a lossless audio codec is used (as it is sometimes for the main audio track), space on the HDM disc can become a critical issue.  A decision could be made in compiling the source material to use a 16 bit mix in preference to 24 (assuming a 24 bit mix is actually available for the transfer to HDM) in order to conserve space.

Here is a link to audio formats of a number of released Blu-ray discs: AVS: Unofficial Blu-ray Audio and Video Specifications Thread .  A large number of the discs use "LPCM (uncompressed) 16-bit/48kHz".

• Woodinville
filtering, dither, and noiseshaping
##### Reply #93 – 27 March, 2008, 01:20:08 AM
But I have to wonder how many listening rooms in practice have a noise floor lower than that offered by dithered, noise-shaped Redbook audio.
Almost none, which is why it took me so many "and"s to try to justify it.

Very true, and let's not forget that 6 dB SPL with a flat white spectrum, 20 Hz to 20 kHz, is what the atmosphere, by being made of individual molecules, actually creates at your eardrum (yes, there are 'shot noise'-like effects from the individual molecules; yes, it's that energetic).

Getting below that kind of noise floor in any one critical band or ERB really kinda-sorta defines 'not useful in the real world'.
-----
J. D. (jj) Johnston

• hellokeith
filtering, dither, and noiseshaping
##### Reply #94 – 27 March, 2008, 03:14:55 PM
Woodinville,

In regards to filtering/dithering/noise shaping, how does Vista handle various operations like volume control, eq (in WMP), SRC, delivery to sound card, etc?

• Woodinville
filtering, dither, and noiseshaping
##### Reply #95 – 28 March, 2008, 04:26:58 PM
Woodinville,

In regards to filtering/dithering/noise shaping, how does Vista handle various operations like volume control, eq (in WMP), SRC, delivery to sound card, etc?

Volume control and SRC are both float. WMP will use float for some EQ and fixed point for others (sorry, legacy systems are fun, fun, fun).

Dither is applied, always, after the float pipeline.  Once.  Not sure what you mean by filtering, no filtering is done except as needs be done for SRC.

• DualIP
filtering, dither, and noiseshaping
##### Reply #96 – 29 March, 2008, 06:35:04 AM
Not sure what you mean by filtering, no filtering is done except as needs be done for SRC.

EQ is obviously a filter! Even amplification is mathematically a filter and, when used on integers without dither, can seriously degrade signal quality.

• MLXXX
filtering, dither, and noiseshaping
##### Reply #97 – 29 March, 2008, 07:07:21 AM
Even amplification is mathematically a filter and, when used on integers without dither, can seriously degrade signal quality.

That is certainly true if the result of the processing is limited to 16 bits.

But how serious a problem is it if the result of the processing is stored as a 24-bit integer after a one step operation, e.g. an operation consisting of (a) one step of equalisation, or (b) one step of amplification?  I had assumed the impact would be negligible.
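The assumption of negligible impact can be sanity-checked with a quick estimate (Python/NumPy; the gain value is an arbitrary choice). Rounding the result of one gain step to 24-bit integers leaves a quantisation error power of roughly q^2/12 with q = 2^-23, i.e. around -149 dB relative to full scale:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(-1.0, 1.0, 1 << 15)   # a full-scale test signal

gain = 0.5012                          # one amplification step (about -6 dB)
y = x * gain

# Store the result as a 24-bit integer: round to the nearest step of 2^-23
q = 1 << 23
y24 = np.round(y * q) / q

err = y24 - y
err_db = 10 * np.log10(np.mean(err ** 2))
print(err_db)   # around -149 dB relative to full scale
```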

• Woodinville
filtering, dither, and noiseshaping
##### Reply #98 – 29 March, 2008, 06:34:24 PM
EQ is obviously a filter! Even amplification is mathematically a filter and, when used on integers without dither, can seriously degrade signal quality.

Yes, oh bright one, but EQ is applied in the PLAYER. You said "after the player".

• hellokeith
filtering, dither, and noiseshaping
##### Reply #99 – 30 March, 2008, 11:34:31 PM

Woodinville,

In regards to filtering/dithering/noise shaping, how does Vista handle various operations like volume control, eq (in WMP), SRC, delivery to sound card, etc?

Volume control, src are both float. WMP will use float for some EQ and fix for others (sorry, legacy systems are fun, fun, fun).

Dither is applied, always, after the float pipeline.  Once.  Not sure what you mean by filtering, no filtering is done except as needs be done for SRC.

Thanx!

Lastly, what is the purpose or reasoning of the Advanced > Default Format ?

"Select the sample rate and bit depth to be used when running in Shared Mode"

Why does this need to be set at all?