
LP coefficient computation

I developed LPCQR, which may be of interest to developers of lossless audio codecs that use forward-adaptive linear prediction.

The whole point of LPCQR is to compute the best possible set of quantized linear prediction coefficients. Visit the page I linked above for more details.

Note: The source code is released under the same license as FLAC's source code.


Have fun,
SG

LP coefficient computation

Reply #1
Hi Sebastian,

thank you!

I have downloaded it and will give it a try when I have time.

  Thomas

LP coefficient computation

Reply #2
Hi Sebastian,

can you elaborate a little more on the quantization part?
If I understood you correctly, you incorporate quantization into the QR decomposition,
but as far as I know you must do a complete decomposition in order to retrieve the coefficients.

LP coefficient computation

Reply #3
For each sample to be predicted you have one linear equation. Usually these systems are overdetermined (e.g. 1024 equations for 10 unknown prediction coefficients). To compute a least-squares solution x of Ax=b, one can factor the system matrix A into A=QR, where Q^T*Q=I and R is an upper triangular matrix. x is then obtained by solving Rx = Q^T*b.

The good news is that one needs to store neither Q nor the full matrix A. The code I supplied does all the work on the fly. The last step is solving the triangular system. This is done by computing the last unknown coefficient first, then the second-to-last using that result, then the third-to-last, and so on.
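
In code, that back-substitution step might look roughly like this (an illustrative sketch, not the actual LPCQR routine; the function name and the row-major storage of R are just assumptions):
Code: [Select]
/* Solve the upper triangular system R x = y (with y = Q^T * b).
   R is assumed to be stored row-major as an n-by-n matrix with a
   nonzero diagonal; the solution is written into x. */
static void solve_upper_triangular(const double *R, const double *y,
                                   double *x, int n)
{
    for (int i = n - 1; i >= 0; --i) {   /* last coefficient first */
        double s = y[i];
        for (int j = i + 1; j < n; ++j)  /* reuse already computed x[j] */
            s -= R[i * n + j] * x[j];
        x[i] = s / R[i * n + i];
    }
}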

Quantization: Instead of quantizing the coefficients after the complete solution of the linear system has been computed, quantization is part of the routine that solves the triangular system. Once a coefficient has been computed, it is quantized, and the quantized coefficient is then reused to compute the remaining coefficients. This way one can account for quantization errors and keep the sum of squared prediction errors small.
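
Folding the quantization into that same loop could then look like this (again just a sketch; the fixed-point shift is an assumption and precision clipping is left out):
Code: [Select]
#include <math.h>   /* lround */

/* Back-substitution with in-loop quantization (illustrative sketch).
   Each coefficient is rounded to a fixed-point integer as soon as it
   is known, and the rounded value is what the remaining coefficients
   are computed against, so they absorb part of the rounding error. */
static void solve_and_quantize(const double *R, const double *y,
                               int *qcoef, double *x, int n, int shift)
{
    const double scale = (double)(1 << shift);
    for (int i = n - 1; i >= 0; --i) {
        double s = y[i];
        for (int j = i + 1; j < n; ++j)
            s -= R[i * n + j] * x[j];        /* x[j] is already quantized */
        double c = s / R[i * n + i];         /* exact solution of row i */
        qcoef[i] = (int)lround(c * scale);
        x[i] = (double)qcoef[i] / scale;     /* reuse the quantized value */
    }
}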

Note: I didn't test the code thoroughly. I compared the solutions of some randomly generated problem instances with the solutions Octave computed, and they matched. So far, so good. :-)

Cheers!
SG

LP coefficient computation

Reply #4
Quote
The good news is that one needs to store neither Q nor the full matrix A. The code I supplied does all the work on the fly. The last step is solving the triangular system. This is done by computing the last unknown coefficient first, then the second-to-last using that result, then the third-to-last, and so on.


This is really new to me. I currently have no idea how you do this, but I'm really stressed,
so hopefully in two weeks I'll be able to spend some time on this subject.

Quote
Quantization: Instead of quantizing the coefficients after the complete solution of the linear system has been computed, quantization is part of the routine that solves the triangular system. Once a coefficient has been computed, it is quantized, and the quantized coefficient is then reused to compute the remaining coefficients. This way one can account for quantization errors and keep the sum of squared prediction errors small.


After my exams I'll take a look at the code. Thanks for your work.

LP coefficient computation

Reply #5
Copyright Josh Coalson?

LP coefficient computation

Reply #6
Quote
This is really new to me. I currently have no idea how you do this, but I'm really stressed,
so hopefully in two weeks I'll be able to spend some time on this subject.


The key is to do the QR decomposition as the "equations arrive". I simply maintain a triangular system at all times. Every time a new equation in the form of a row vector is added, the triangular structure is recovered via appropriate Givens rotations:
Code: [Select]
     Rx = b      new row added        qrStep              R'x = b'

X X X X | X       X X X X | X       X X X X | X       X X X X | X
0 X X X | X       0 X X X | X       0 X X X | X       0 X X X | X
0 0 X X | X  -->  0 0 X X | X  -->  0 0 X X | X  -->  0 0 X X | X
0 0 0 X | X       0 0 0 X | X       0 0 0 X | X       0 0 0 X | X
                  X X X X | X       0 0 0 0 |[X]

"0"  = zero
"X"  = any
"[X]" = projection error

To keep track of the prediction error, the squared [X] values are accumulated (variable: sse = "sum of squared errors").
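
A sketch of such a qrStep in C (illustrative only; the storage layout and the function signature are assumptions, not the actual LPCQR code):
Code: [Select]
#include <math.h>   /* hypot */

/* Fold one new equation (coefficients a[0..n-1], target rhs) into the
   maintained triangular system R|b using Givens rotations.  R is an
   n-by-n upper triangular matrix stored row-major, b its right-hand
   side.  Returns the leftover projection error [X]; the caller adds
   its square to sse. */
static double qr_step(double *R, double *b, double *a, double rhs, int n)
{
    for (int k = 0; k < n; ++k) {
        if (a[k] == 0.0)
            continue;                        /* nothing to annihilate */
        double rkk = R[k * n + k];
        double r = hypot(rkk, a[k]);
        double c = rkk / r, s = a[k] / r;    /* rotation that zeroes a[k] */
        R[k * n + k] = r;
        a[k] = 0.0;
        for (int j = k + 1; j < n; ++j) {    /* rotate the rest of row k */
            double t1 = R[k * n + j], t2 = a[j];
            R[k * n + j] =  c * t1 + s * t2;
            a[j]         = -s * t1 + c * t2;
        }
        double t1 = b[k], t2 = rhs;          /* and the augmented column */
        b[k] =  c * t1 + s * t2;
        rhs  = -s * t1 + c * t2;
    }
    return rhs;                              /* the [X] of the last row */
}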

Quote
Copyright Josh Coalson?

Well, not really. I simply used some FLAC files as a template. The plan was to keep the references to the libFLAC license; I just didn't bother to change the copyright line.

Cheers!
SG

LP coefficient computation

Reply #7
Quote
The key is to do the QR decomposition as the "equations arrive". I simply maintain a triangular system at all times. Every time a new equation in the form of a row vector is added, the triangular structure is recovered via appropriate Givens rotations:
Code: [Select]
     Rx = b      new row added        qrStep              R'x = b'

X X X X | X       X X X X | X       X X X X | X       X X X X | X
0 X X X | X       0 X X X | X       0 X X X | X       0 X X X | X
0 0 X X | X  -->  0 0 X X | X  -->  0 0 X X | X  -->  0 0 X X | X
0 0 0 X | X       0 0 0 X | X       0 0 0 X | X       0 0 0 X | X
                  X X X X | X       0 0 0 0 |[X]

"0"  = zero
"X"  = any
"[X]" = projection error


Thanks for the explanation. Now it's clear, even for me! ;-)
How about avoiding the Givens rotations and using a Householder transformation?

LP coefficient computation

Reply #8
Quote
How about avoiding the Givens rotations and using a Householder transformation?

In this special case a Householder transform has no advantage. A Householder transform is only slightly faster if you need to zero more than one coefficient in a column. In my case only the matrix's last row has to be eliminated, so a Givens rotation is preferable.

However, one could collect more than n rows and perform a QR step only when the matrix buffer is full, reducing the system to n equations again. Then a Householder transform would make sense, because you can save some square roots that way.
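
For what it's worth, a buffered update along those lines might look like this (purely illustrative, not part of LPCQR; the buffer layout with n coefficients plus the target per row is an assumption):
Code: [Select]
#include <math.h>   /* sqrt */

/* Fold m buffered equations (each: n coefficients plus the target,
   stored row-major with stride n+1 in buf) into the triangular system
   R|b, using one Householder reflection per column instead of one
   Givens rotation per buffered row -- i.e. n square roots instead of
   m*n.  Rows k+1..n-1 of column k are already zero, so only row k of
   R and the m buffered rows take part in each reflection. */
static void fold_block(double *R, double *b, double *buf,
                       int m, int n, double *sse)
{
    for (int k = 0; k < n; ++k) {
        double rkk = R[k * n + k], colsq = 0.0;
        for (int i = 0; i < m; ++i) {
            double t = buf[i * (n + 1) + k];
            colsq += t * t;
        }
        if (colsq == 0.0)
            continue;                    /* column already eliminated */
        double alpha = (rkk >= 0.0) ? -sqrt(rkk * rkk + colsq)
                                    :  sqrt(rkk * rkk + colsq);
        double v0 = rkk - alpha;         /* Householder vector: (v0, column k of buf) */
        double vtv = v0 * v0 + colsq;
        for (int j = k + 1; j <= n; ++j) {   /* j == n is the rhs column */
            double top = (j < n) ? R[k * n + j] : b[k];
            double dot = v0 * top;
            for (int i = 0; i < m; ++i)
                dot += buf[i * (n + 1) + k] * buf[i * (n + 1) + j];
            double f = 2.0 * dot / vtv;
            if (j < n) R[k * n + j] = top - f * v0;
            else       b[k]        = top - f * v0;
            for (int i = 0; i < m; ++i)
                buf[i * (n + 1) + j] -= f * buf[i * (n + 1) + k];
        }
        R[k * n + k] = alpha;            /* column k of buf is now implicitly zero */
    }
    for (int i = 0; i < m; ++i) {        /* leftover targets are the projection errors */
        double e = buf[i * (n + 1) + n];
        *sse += e * e;
    }
}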

Cheers!
SG

 

LP coefficient computation

Reply #9
Here's a new incarnation. It goes way beyond what the old version did. Don't know if it's worth the trouble, though.

Still, if you'd like to test it or incorporate it into some lossless audio encoder, just download the ZIP. The C code is released under the two-clause BSD license.

Cheers,
SG