
Topic: AES 143rd NY 2017

AES 143rd NY 2017

Anyone attend?
Seems there were some rather interesting exchanges at a certain "Hi Re$" session; I think it was this one.
Bruno Putzeys' take: https://www.facebook.com/permalink.php?story_fbid=2044350049128525&id=100006606498666
Quote
This isn't a prelude to suddenly becoming active on FB but I felt I had to share this.
Yesterday there was an AES session on mastering for high resolution (whatever that is) whose highlight was a talk about the state of the loudness war, why we're still fighting it and what the final arrival of on-by-default loudness normalisation on streaming services means for mastering. It also contained a two-pronged campaign piece for MQA. During it, every classical misconception and canard about digital audio was trotted out in an amazingly short time. Interaural timing resolution, check. Pictures showing staircase waveforms, check. That old chestnut about the ear beating the Fourier uncertainty (the acoustical equivalent of saying that human observers are able to beat Heisenberg's uncertainty principle), right there.
At the end of the talk I got up to ask a scathing question and spectacularly fumbled my attack*. So for those who were wondering what I was on about, here goes. A filtering operation is a convolution of two waveforms. One is the impulse response of the filter (aka the "kernel"), the other is the signal.
A word that high res proponents of any stripe love is "blurring". The convolution point of view shows that as the "kernel" blurs the signal, so the signal blurs the kernel. As Stuart's spectral plots showed, an audio signal is a much smoother waveform than the kernel so in reality guess who's really blurring whom. And if there's no spectral energy left above the noise floor at the frequency where the filter has ring tails, the ring tails are below the noise floor too.
A second question, which I didn't even get to ask, was about the impulse response of MQA's decimation and upsampling chain as it is shown in the slide presentation. MQA's take on those filters famously allows for aliasing, so how does one even define "the" impulse response of that signal chain when its actual shape depends on when exactly it happens relative to the sampling clock (it's not time invariant). I mentioned this to my friend Bob Katz who countered "but what if there isn't any aliasing" (meaning what if no signal is present in the region that folds down). Well yes, that's the saving grace. The signal filters the kernel rather than vice versa and the shape of the transition band doesn't matter if it is in a region where there is no signal.
These folk are trying to have their cake and eat it. Either aliasing doesn't matter because there is no signal in the transition band and then the precise shape of the transition band doesn't matter either (ie the ring tails have no conceivable manifestation) or the absence of ring tails is critical because there is signal in that region and then the aliasing will result in audible components that fly in the face of MQA's transparency claims.
Doesn't that just sound like the arguments DSD folks used to make? The requirement for 100kHz bandwidth was made based on the assumption that content above 20k had an audible impact whereas the supersonic noise was excused on the grounds that it wasn't audible. What gives?
Meanwhile I'm happy to do speakers. You wouldn't believe how much impact speakers have on replay fidelity.
________
* Oh hang on, actually I started by asking if besides speculations about neuroscience and physics they had actual controlled listening trials to back their story up. Bob Stuart replied that all listening tests so far were working experiences with engineers in their studios but that no scientific listening tests have been done so far. That doesn't surprise any of us cynics but it is an astonishing admission from the man himself. Mhm, I can just see the headlines. "No Scientific Tests Were Done, Says MQA Founder".
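An aside on the convolution point above: filtering really is convolution, and convolution is commutative, so "the kernel blurs the signal" and "the signal blurs the kernel" are the same operation viewed two ways. A minimal NumPy sketch (toy kernel and toy signal, chosen arbitrarily for illustration, nothing to do with MQA's actual filters):

```python
import numpy as np

# A windowed-sinc lowpass kernel with visible "ring tails"
# (cutoff and length are arbitrary illustration values).
n = np.arange(-64, 65)
kernel = np.sinc(0.4 * n) * np.hamming(len(n))
kernel /= kernel.sum()

# A smooth, band-limited "audio-like" signal: two low-frequency sines.
t = np.arange(1024)
signal = np.sin(2 * np.pi * 0.01 * t) + 0.5 * np.sin(2 * np.pi * 0.03 * t)

# Convolution commutes: who is blurring whom is a matter of viewpoint.
a = np.convolve(signal, kernel)
b = np.convolve(kernel, signal)
print(np.allclose(a, b))  # True
```

Since the smooth signal has no energy where the kernel rings, the ring tails leave no trace above the noise floor in the output, which is exactly Putzeys' point.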
Very interesting comments by Paul Frindle as well (too bad if you're not on FB).
Finally a bit of pushback on MQA and other assorted nonsense?
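Putzeys' second point, that a decimation chain which admits aliasing has no single well-defined impulse response because it is not time invariant, can also be shown in a few lines. A toy sketch (frequencies picked only so that one tone folds exactly onto the other; no anti-alias filter at all, the extreme case):

```python
import numpy as np

# Two tones that both land at the same post-decimation frequency:
# 0.15 maps to 0.30 directly, while 0.35 folds (aliases) to 0.30.
t = np.arange(512)
x = np.sin(2 * np.pi * 0.15 * t) + np.sin(2 * np.pi * 0.35 * t)

even = x[0::2]  # decimate by 2 starting at sample 0
odd = x[1::2]   # same decimator, input delayed by one sample

# For a time-invariant system these outputs would carry (nearly) equal
# energy; with aliasing, the result depends on where the sampling grid
# falls: the two components cancel on one grid and add on the other.
print(np.sum(even**2) < 1e-6, np.sum(odd**2) > 100)  # True True
```

Shift the input one sample and the output changes shape, not just timing, so "the" impulse response of the chain depends on the sampling phase.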

cheers,

AJ
Loudspeaker manufacturer





Re: AES 143rd NY 2017

Reply #5
From the fb posts it seems several people are not happy with the state of AES.
Lately it seems the AES is seen as a buy-in venue for big players to do marketing in the name of science.
The Meridian paper about filter audibility, with many unanswered questions yet dressed up as an AES paper, comes to mind. It fits MQA absolutely.
Is troll-adiposity coming from feederism?
With 24-bit music you can listen to silence much louder!

Re: AES 143rd NY 2017

Reply #6
From the fb posts it seems several people are not happy with the state of AES.
Yes, Paul Frindle's comments in particular.
If "Hi Re$" is largely a charade, how can one justify this? google cache link
Loudspeaker manufacturer