
Lower-latency realtime CQT/VQT

Do post-processing with CQT kernels similar to the Brown-Puckette algorithm, except that the kernels are specified directly in the frequency domain (using a Nuttall window), and the transform is not actually constant-Q at low frequencies (hence VQT).
Actually, I've already implemented something like this in one of my own spectrum analyzer projects, but it has huge latency once the FFT length gets large enough (more than 8192 samples) and delay compensation is not used (delay compensation actually delays the audio output to match the spectrum analyzer, unlike in foobar2000).

So the question is: is it possible to have a Brown-Puckette CQT/VQT (with kernels specified directly in the frequency domain) where the shorter kernels (wider in the frequency domain) respond before the corresponding longer ones (narrower in the frequency domain), much like an IIR filter bank (which naturally has a longer reaction delay for narrower bandwidths) and a sliding DFT? A rough sketch of the frequency-domain-kernel approach follows below.
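
For reference, here is a minimal sketch (Python/NumPy) of what I mean by "kernels specified directly in the frequency domain": one FFT per frame, then each CQT/VQT bin is a dot product of the spectrum with a Nuttall-shaped bump centred on the bin frequency. The names (cqt_frame, visual parameters, the bandwidth clamp) and the normalization are just placeholders for illustration, not my actual implementation.

```python
import numpy as np

def nuttall(x):
    """Continuous Nuttall window on [0, 1], zero outside that range."""
    a0, a1, a2, a3 = 0.355768, 0.487396, 0.144232, 0.012604
    w = (a0
         - a1 * np.cos(2 * np.pi * x)
         + a2 * np.cos(4 * np.pi * x)
         - a3 * np.cos(6 * np.pi * x))
    return np.where((x >= 0) & (x <= 1), w, 0.0)

def cqt_frame(block, sample_rate, center_freqs, bandwidths):
    """One analysis frame: FFT once, then weight the spectrum with a
    frequency-domain Nuttall kernel per bin (Brown-Puckette style, but the
    kernel is built directly on FFT bins instead of from a windowed sinusoid)."""
    n = len(block)
    spectrum = np.fft.rfft(block)
    fft_freqs = np.fft.rfftfreq(n, d=1.0 / sample_rate)

    out = np.empty(len(center_freqs), dtype=complex)
    for i, (fc, bw) in enumerate(zip(center_freqs, bandwidths)):
        # Map FFT-bin frequencies onto the kernel's [0, 1] support.
        x = (fft_freqs - (fc - bw / 2.0)) / bw
        kernel = nuttall(x)
        s = kernel.sum()
        out[i] = (kernel @ spectrum) / s if s > 0 else 0.0
    return out

# Hypothetical usage: 1/12-octave bins, constant-Q except that the bandwidth
# is clamped to a minimum (so it is a VQT, not strictly constant-Q at low
# frequencies).
fs, n_fft = 48000, 16384
freqs = 55.0 * 2.0 ** (np.arange(96) / 12.0)
q_bw = freqs * (2 ** (1 / 24.0) - 2 ** (-1 / 24.0))   # constant-Q bandwidths
bandwidths = np.maximum(q_bw, 4.0 * fs / n_fft)        # clamp -> variable Q
mags = np.abs(cqt_frame(np.random.randn(n_fft), fs, freqs, bandwidths))
```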

Re: Lower-latency realtime CQT/VQT

Reply #1
Yes, it's certainly possible to get a very small delay/latency by using a CWT and convolutions, but that gives high resolution in both time and frequency, and that is an extremely large amount of data per second to handle for such simple scopes.

Re: Lower-latency realtime CQT/VQT

Reply #2
Quote: Yes, it's certainly possible to get a very small delay/latency by using a CWT and convolutions, but that gives high resolution in both time and frequency, and that is an extremely large amount of data per second to handle for such simple scopes.
Isn't that incorrect? If so, why would using CWT and convolution techniques "magically" improve the latency of an FFT-based CQT (with kernels defined directly in the frequency domain) at larger FFT sizes like 32768 samples? And by latency, do you mean the delay between the actual audio and the visual response to it (in the same sense that a linear-phase EQ with a long impulse response doesn't work well in live sound and recording sessions, even though it can be compensated by plugin delay compensation in DAWs)?

I'm sure that the time alignment of the CQT kernels (which can be illustrated in this Desmos graph; a reaction alignment of -1 means left alignment, 0 means center, and +1 means right) also affects the latency of a realtime CQT (assuming delay compensation is either disabled or doesn't apply to live input signals): center alignment has latency = inputSize/2, whereas left alignment (longer kernels react sooner) has latency = inputSize, and right alignment (shorter kernels react sooner than longer ones) doesn't need any delay compensation at all to sync the CQT/VQT visualization to the audio.
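
Putting that relationship into one line (a sketch using my own naming; inputSize is the FFT/analysis length and alignment is the same -1..+1 parameter as in the graph, not anything from an actual API):

```python
def visual_latency(input_size, alignment):
    """Delay (in samples) between the audio and the analyzer's reaction,
    as a function of how the kernels are aligned inside the analysis buffer:
      alignment = -1 (left)   -> latency = input_size
      alignment =  0 (center) -> latency = input_size / 2
      alignment = +1 (right)  -> latency = 0 (no delay compensation needed)
    """
    return input_size * (1.0 - alignment) / 2.0
```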