
AMR-WB Updating states of filters

The codec description (3GPP TS 26.190, section 5.10) states: 'An update of the states of the synthesis and weighting filters is needed in order to compute the target signal in the next subframe.'

So, if we obtain new LP coefficients for each frame, why do we interpolate the ISPs at each subframe to get different LP coefficients per subframe? Maybe I am missing something, but I don't understand why we need both the memory update and the interpolation; one of the two makes no sense to me...
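Just to make clear what I mean by the interpolation, here is a rough sketch (my own names and illustrative weights, not the reference code; the exact interpolation factors and the ISP-to-LP conversion are in the spec):

```c
#include <stdio.h>

#define M        16   /* LP order in AMR-WB */
#define NB_SUBFR  4   /* subframes per 20 ms frame */

/* Blend the previous frame's ISPs with the current frame's ISPs;
 * frac_new is the weight given to the new ISPs for this subframe. */
void interpolate_isp(const float isp_old[M], const float isp_new[M],
                     float frac_new, float isp_sub[M])
{
    for (int i = 0; i < M; i++)
        isp_sub[i] = (1.0f - frac_new) * isp_old[i] + frac_new * isp_new[i];
}

int main(void)
{
    float isp_old[M] = {0}, isp_new[M] = {0}, isp_sub[M];

    /* Illustrative weights only: the last subframe uses the new analysis
     * directly, earlier subframes keep more of the previous frame. */
    const float frac_new[NB_SUBFR] = {0.45f, 0.80f, 0.96f, 1.00f};

    for (int sf = 0; sf < NB_SUBFR; sf++) {
        interpolate_isp(isp_old, isp_new, frac_new[sf], isp_sub);
        /* isp_sub would then be converted to the LP coefficients A(z)
         * used for this subframe's filtering and searches */
        printf("subframe %d: %.0f%% new ISPs\n", sf + 1, 100.0f * frac_new[sf]);
    }
    return 0;
}
```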
Any clue?

Many thanks.

AMR-WB Updating states of filters

Reply #1
If you look at the asymmetric window used, it peaks on a particular subframe (subframe 4, I think). They do the LP analysis there, and then interpolate to get the LP coefficients for the other subframes (1, 2, 3). They use excitation frames smaller than the LPC frames because this has been found to give better synthesis quality.

They talk a bit about this in the following paper (there are probably better references but this one is good enough):

B. S. Atal, R. V. Cox, and P. Kroon, "Spectral quantization and interpolation for CELP coders," Proc. ICASSP'89, pp. 69 - 72, May 1989.
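In case it helps, the LP analysis step itself (the part run once per frame with that window) is the usual autocorrelation followed by the Levinson-Durbin recursion. The sketch below is just the textbook core with my own names; the spec adds a lag window, white-noise correction and the conversion to ISPs on top of this:

```c
#define M 16   /* LP order in AMR-WB */

/* Autocorrelations r[0..M] of the windowed speech x[0..n-1]. */
void autocorr(const float x[], int n, double r[M + 1])
{
    for (int k = 0; k <= M; k++) {
        r[k] = 0.0;
        for (int i = k; i < n; i++)
            r[k] += (double)x[i] * x[i - k];
    }
}

/* Levinson-Durbin: solve for A(z) = 1 + a[1]z^-1 + ... + a[M]z^-M
 * from the autocorrelations.  Returns -1 on a degenerate frame. */
int levinson(const double r[M + 1], double a[M + 1])
{
    double err = r[0];
    a[0] = 1.0;
    for (int i = 1; i <= M; i++) {
        if (err <= 0.0)
            return -1;
        double k = -r[i];
        for (int j = 1; j < i; j++)
            k -= a[j] * r[i - j];
        k /= err;                          /* i-th reflection coefficient */
        a[i] = k;
        for (int j = 1; j <= i / 2; j++) { /* update a[1..i-1] in place */
            double tmp = a[j] + k * a[i - j];
            a[i - j] += k * a[j];
            a[j] = tmp;
        }
        err *= 1.0 - k * k;                /* remaining prediction error */
    }
    return 0;
}
```

The point is simply that this analysis is only run once per frame, so the per-subframe coefficients have to come from interpolation.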

AMR-WB Updating states of filters

Reply #2
Thank you for your reply and for the paper.
But I still have a doubt... The interpolation is performed to obtain the LP filter coefficients for each subframe (you are right that the window is centered on the fourth subframe). With the interpolation, the change between frames is smoother, and the subframe size is also better suited to the pitch and codebook searches.

But the thing is, if we already have the LP synthesis filter for the four subframes, why do we need to perform a memory update of the filters' states? Doing so, we end up with two versions of the LP filters for each subframe, so which one should we use?
As you can see in section 5.10 of 3GPP 26.190, this memory update of the filters is performed with the perceptual error between the original speech and the synthesized one. I don't understand why this is needed, since we already have the LP filters for each subframe after the interpolation...
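To show what I mean by the filters' states, here is how I picture the per-subframe synthesis filtering (my own sketch and names, not the reference code):

```c
#define L_SUBFR 64   /* subframe length at 12.8 kHz */
#define M       16   /* LP order */

/* Synthesis filter 1/A(z), with A(z) = 1 + a[1]z^-1 + ... + a[M]z^-M.
 * mem[] holds the last M output samples and is the "state" that is
 * carried from one subframe into the next (assumes n >= M). */
void syn_filt(const float a[M + 1], const float x[], float y[],
              int n, float mem[M])
{
    for (int i = 0; i < n; i++) {
        float s = x[i];
        for (int j = 1; j <= M; j++)
            s -= a[j] * (i - j >= 0 ? y[i - j] : mem[M + i - j]);
        y[i] = s;
    }
    for (int j = 0; j < M; j++)    /* update the state for the next subframe */
        mem[j] = y[n - M + j];
}

/* As I read section 5.10, once the gains are quantized the excitation is
 * rebuilt as
 *     u[i] = gp * v[i] + gc * c[i];   // adaptive + fixed codebook parts
 * and the filter states are updated from it (equivalently, from the error
 * signal), so that mem[] follows the quantized signal rather than the
 * original speech before the next subframe's target is computed. */
```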

Thank you again.