Hello,
How is the chunk size determined for the audio_chunk object? Is it possible to know the chunk size before processing (before on_chunk is called)? For instance, via some declaration in the object's constructor?
Thanx!
You must be prepared to handle any size. The DSP in front of you can output chunks in whatever size it pleases.
Refreshing this old topic to get more info.
Now, from the perspective of the developer of that first DSP: should they care about the size of the chunks they insert, either as a courtesy to the next DSP or for overall performance?
In my case I may buffer something like 1 minute of audio. In some cases I want to release it all at once, and I wonder whether I should bother splitting that minute into smaller chunks of, say, 1024 samples, or just insert the whole thing in one go.
I would expect that insert_chunk could fail when allocating such a large buffer on the fly, but I prefer to check.
Cheers
Everything is expected to handle large chunks, so there should be no need to split them. That is also how every first-party component releases buffered content when requested (which happens on a DSP setting change).
I don't think worrying about allocation here is necessary. For example, even quite extreme 192 kHz 6-channel audio would only require about 264 MB of memory per minute.
"only require 264 MB" ! I have to get used to writing code for PCs where there is just too much memory !
Thanks for the answer.
There are also people who like to resample everything to 705600/768000 Hz. Higher is better, right?