This leaves us with degradation of fidelity as a result of processing unnecessary ultrasonics. Unfortunately the very same tests (Meyer and Moran) we used to make our case previously now militate against us. If a 16/44k1 bottleneck is inaudible in a system avowedly processing ultrasonics, then the ultrasonics cannot reasonably be said to have (audibly) degraded the sound.
Not to be nitpicky, but IMHO you only dither if you resample digitally?
I'm happy to see a truly coherent argument against the necessity for higher bit depths and sample rates. Wasted storage space is only a minor disadvantage, however. The advocates of 24/96 and higher were happy to pay the cost of increased storage space to achieve a perceived advantage, and storage space (or transmission bandwidth) is an increasingly cheap commodity.
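For scale, the storage cost being discussed is easy to quantify. A quick sketch of raw PCM data rates (stereo, uncompressed, before any lossless packing):

```python
def pcm_bytes_per_sec(sample_rate, bits, channels=2):
    """Raw (uncompressed) PCM data rate in bytes per second."""
    return sample_rate * bits // 8 * channels

cd = pcm_bytes_per_sec(44100, 16)   # CD audio: 176,400 B/s
hi = pcm_bytes_per_sec(96000, 24)   # 24/96:    576,000 B/s

print(f"24/96 takes {hi / cd:.2f}x the space of 16/44.1")
```

Roughly 3.3x the space, which supports the point: noticeable, but hardly prohibitive as storage gets cheaper.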
Quote from: soulsearchingsun on 06 March, 2012, 05:39:19 PM
Not to be nitpicky, but IMHO you only dither if you resample digitally?

You only dither if you go from a higher bit depth to a lower one.
You dither whenever resolution is reduced. This includes analogue-to-digital conversion as a limiting case. The analogue input of the ADC ideally contains a noise source at the required level. In practice the front-end electronics and signal chain often provide this noise implicitly, although not necessarily with the optimal spectral distribution.
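The "dither whenever resolution is reduced" rule can be shown with a minimal sketch (the bit depth and signal level here are arbitrary choices for demonstration): a signal sitting below one step of the target bit depth is erased by plain rounding, while TPDF dither added before rounding preserves its average level.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(x, bits, dither=True):
    """Reduce x (floats in [-1, 1)) to the given bit depth.
    TPDF dither (sum of two uniform sources, 2 LSB total span)
    is added before rounding to decorrelate quantization error."""
    lsb = 2.0 / (2 ** bits)
    if dither:
        x = x + (rng.uniform(-0.5, 0.5, x.shape) +
                 rng.uniform(-0.5, 0.5, x.shape)) * lsb
    return np.round(x / lsb) * lsb

lsb = 2.0 / 2 ** 8
signal = np.full(48000, 0.3 * lsb)    # DC level below one 8-bit step

plain = quantize(signal, 8, dither=False)  # rounds to all zeros: signal gone
tpdf = quantize(signal, 8, dither=True)    # noisy, but its mean tracks the input
```

Averaging `tpdf` recovers roughly 0.3 LSB, which is why a dithered truncation behaves like the original signal plus benign noise rather than like distortion.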
I also spent 4 years studying for an EE degree, and although it was not especially focused on signal processing, I now work for a large pro audio company.

Some of the issues pointed to in this and other posts regarding oversampling and AA filters are not really relevant to the subject at hand, given the technology currently in use. A statement like 'oversampling at 192 kHz' shows a lack of knowledge regarding the kinds of audio converters that have been in use for a good while now. A delta-sigma ADC running with an Fs of 48 kHz might often be oversampling at 3.072 MHz or 6.144 MHz. The anti-aliasing filters that many people have mentioned are implemented digitally inside the converter (no need for external analog filters, which may well exhibit many of the problems mentioned), and actually have extremely good pass band ripple.

Look at datasheets for converters from manufacturers such as TI (Burr-Brown) [ti.com], Cirrus [cirrus.com] [page 36 here has detailed plots of 48, 96, and 192 kHz pass band characteristics for the device, highlighting the fact that increasing the sampling rate does not improve pass band ripple for this device (also note the scale is 0.02 dB/div)], AKM [asahi-kasei.co.jp], and Wolfson Micro [wolfsonmicro.com]. You will find pass band responses that are flat to within +/- 0.05 dB over the audible range, and stop band attenuation in excess of 100 dB, whether sampling at 48 kHz or 192 kHz. If you can find anything in actual converter datasheets that points to better converter performance from selecting a higher sampling rate, I would be interested to see it.

All in all, the basics of sampling theory don't really help people to understand the real-world issues in designing a modern high-end audio device. And in the end, surely the proof of the pudding is in the blind tests, which never seem to show that anybody can tell any difference when moving to higher rates?
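The oversampling figures quoted above are easy to sanity-check, and the principle behind a delta-sigma converter can be sketched in a few lines. This first-order 1-bit modulator is a toy model (real converters use higher-order loops and multibit quantizers), but it shows how a fast 1-bit stream encodes the input, leaving the noise for the converter's internal digital filter to remove:

```python
import numpy as np

# A 48 kHz converter oversampling 64x runs its modulator at 3.072 MHz:
fs, osr = 48_000, 64
modulator_rate = fs * osr            # 3,072,000 Hz

def delta_sigma_1bit(x):
    """Toy first-order delta-sigma modulator producing a +/-1 stream.
    The integrator accumulates the error between input and output,
    forcing the output's local average to track the input."""
    y = np.empty(len(x))
    integ, prev = 0.0, 0.0
    for i, s in enumerate(x):
        integ += s - prev            # accumulate tracking error
        prev = 1.0 if integ >= 0 else -1.0
        y[i] = prev
    return y

# Feed a constant 0.25 level; the 1-bit stream's average reproduces it:
stream = delta_sigma_1bit(np.full(osr * 100, 0.25))
```

The decimation filter that averages this stream back down to Fs is exactly the digital anti-aliasing filter described above, which is why its pass band behaviour is a design choice rather than an analog-component limitation.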
Even if there were a few people who could hear this difference in some perfect listening environment, would it really make sense for everyone else to go out and buy 192 kHz equipment?
I thought he made good points, but he quoted Meyer and Moran. I just think it's better that we're all informed as to the counter-arguments rather than ending up with egg on our faces.
The point has also been made that [in the article] first I argue "ultrasonics hurt fidelity" and then cite M&M, which supposedly undermines the argument because no one could hear a difference. In no way does M&M rebut the assertion that ultrasonics _can_ cause audible distortion. They were using high end setups designed at expense for audiophile-grade frequency extension, and the results show they obviously weren't affected by audible IMD. Am I missing something else?
If some but not all hi-fis are robust enough not to intermodulate when fed ultrasonics, while other fairly common equipment would be overloaded, degrade the sound or even be damaged, then it isn't ill-justified to dub an 88.2 kHz or higher format 'harmful'. It is a matter of degree, severity and even of opinion, but the mere fact that some equipment is off the hook does not invalidate the labeling.(Remember, tobacco smoking probably kills only a [forty-something percent] minority of its users.)
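The intermodulation mechanism under discussion is easy to demonstrate numerically. In this sketch (the tone frequencies and the second-order coefficient are illustrative assumptions, not measurements of any real device), two tones that are both above the audible range pass through a mildly nonlinear "playback chain", and energy appears at their difference frequency, squarely in the audible band:

```python
import numpy as np

fs = 96_000
t = np.arange(fs) / fs                 # one second of samples
f1, f2 = 24_000, 26_000                # both above the ~20 kHz hearing limit
x = 0.5 * np.sin(2 * np.pi * f1 * t) + 0.5 * np.sin(2 * np.pi * f2 * t)

y = x + 0.1 * x ** 2                   # hypothetical 2nd-order nonlinearity

def level_at(sig, freq):
    """Normalized spectral magnitude at the given frequency."""
    spectrum = np.abs(np.fft.rfft(sig)) / len(sig)
    freqs = np.fft.rfftfreq(len(sig), 1 / fs)
    return spectrum[np.argmin(np.abs(freqs - freq))]

clean = level_at(x, f2 - f1)   # linear chain: nothing at 2 kHz
dirty = level_at(y, f2 - f1)   # nonlinear chain: a 2 kHz difference tone appears
```

The difference tone at f2 - f1 = 2 kHz exists only after the nonlinearity; a chain that never sees the ultrasonics (e.g. one fed a 44.1 kHz source) cannot produce it, which is the sense in which feeding ultrasonics to lesser equipment can actively degrade the audible band.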
Oooh, ow. Guess it wasn't that good then, huh?
how does it do damage? (ultrasonics, not smoking...)
Quote from: icstm on 09 March, 2012, 10:16:39 AM
how does it do damage? (ultrasonics, not smoking...)

I think he means to the sound, not the equipment. Otherwise you did read the presentation, right?
The article mentions that SACDs often sound better because they are 'mastered better'. This is something I've been wondering about. Is it worth collecting DVD-As and SACDs because they may be 'mastered better'? Does that mean they compare favorably with original master recording labels like Mobile Fidelity Sounds Labs, DCC and Audio Fidelity, or are they usually the same old victims of the loudness war just shoved into a bigger 24-bit envelope?
I'd think if they were higher quality (re)masters they'd advertise that on the packaging to grab the audiophile market.
Quote from: wakibaki on 06 March, 2012, 07:54:24 PM
I thought he made good points, but he quoted Meyer and Moran. I just think it's better that we're all informed as to the counter-arguments rather than ending up with egg on our faces.

I'm actually curious as to your specific objection/concern. I've read the various critiques written by detractors of the BAS tests over the years, but too many of those arguments relied on willful obtuseness and eye rolling. I'd like to hear the methodology/implementation critiques from those who nevertheless agreed with the conclusions.
That said, M&M absolutely *should* have described their methods in far more detail in the original article (that information dribbled out later). And they *could* publish subset analysis of data to see if considering only the 'pure' DSD recordings made any difference in their findings. Or release their raw data and let others work it over. Or, preferably, someone could do another test and gather more data.
Excellent, thanks. I'll ask about it at this month's meeting.