
Topic: "Optimal" Predictor

  • pest
"Optimal" Predictor
As some of you may have noticed, I'm also working on a lossless audio codec.
The core prediction uses a joint-channel Cholesky decomposition every k samples.
The decomposition is applied to a covariance matrix which is updated every sample in the following way:

Code:
  for (int i = 0; i <= maxorder; i++)
  {
    for (int j = i; j <= maxorder; j++)
      covar[i][j] = decay * covar[i][j] + history[i] * history[j];
  }


You can see there's a learning factor 'decay' involved. The problem is
that for some samples old data seems to be more important than recent data,
leading to a large encoding cost if you use a relatively high factor that works well
on most samples. I've tried to exploit some statistical properties such as prediction gain
or the variance of the input to estimate an optimal learning factor, but this seems
to be more complicated than I first thought. Does anybody know of a good
way to estimate this property?

any help is appreciated

pest

edit: typo in the codebox
  • Last Edit: 16 November, 2006, 08:13:20 AM by pest

  • SebastianG
  • Developer
"Optimal" Predictor
Reply #1
Looks like you want to do backward adaptive prediction. Are you sure you want a decoder to perform the same calculations (Cholesky every k samples)? This is pretty time consuming, isn't it? Also, if you want your compressed files to be machine-independent you need to make sure that the calculations you perform give the exact same results on every machine. (This is not the case if you rely on an FPU for these calculations.)

Backward adaptive prediction is not my specialty, sorry. Perhaps a sliding window approach helps.
  • Last Edit: 15 November, 2006, 04:25:15 PM by SebastianG

  • pest
"Optimal" Predictor
Reply #2
Quote
Looks like you want to do backward adaptive prediction. Are you sure you want a decoder to perform the same calculations (Cholesky every k samples)? This is pretty time consuming, isn't it?


I'm using a relatively low order (8 intra-channel / 4 inter-channel). On top of that predictor sits a
cascaded LMS structure with a maximum order of 1280. This is running at 1x on my Athlon 1200,
so yes, very slow indeed.

Quote
Also, if you want your compressed files to be machine-independent you need to make sure that the calculations you perform give the exact same results on every machine. (This is not the case if you rely on an FPU for these calculations.)


The current version works flawlessly, but it's not machine-independent, you're right. It's mainly a proof
of concept and already surpasses LA. Do you know a way to make the FPU code machine-independent,
or is this simply not possible?

Quote
Perhaps a sliding window approach helps.


I think the current approach already acts like a window with exponentially decaying weights
(the sample from k steps back is weighted by decay^k). Or do you have something different in mind?
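For comparison, here is a scalar sketch of both kinds of window (function names and the window length `N` are mine, purely illustrative). With the decayed update the term from k samples back is weighted by decay^k, so the window never truly closes; a genuine sliding window subtracts the term that drops out, at the cost of keeping the last N terms in a buffer. In the covariance case, x would be the product history[i] * history[j].

```c
#define N 64  /* hypothetical window length */

/* One-pole (exponential) window, as in the original update:
   the term from k samples back ends up weighted by decay^k. */
static double exp_window_update(double acc, double x, double decay)
{
    return decay * acc + x;
}

/* True sliding window over the last N terms: add the newest term and
   subtract the one that falls out.  buf is a circular buffer holding
   the last N terms, *pos the current write index. */
static double sliding_window_update(double acc, double x,
                                    double buf[N], int *pos)
{
    acc += x - buf[*pos];
    buf[*pos] = x;
    *pos = (*pos + 1) % N;
    return acc;
}
```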

"Optimal" Predictor
Reply #3
Quote
The current version works flawlessly, but it's not machine-independent, you're right. It's mainly a proof
of concept and already surpasses LA. Do you know a way to make the FPU code machine-independent,
or is this simply not possible?


Well, you could try to detect various FPU problems at compile time. I mean things like a non-IEEE floating point unit, excess precision, testing the various trig functions you're using, and such.
You could also strengthen this with an install-time losslessness test.
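One cheap compile-time check, as a sketch: C99 exposes `FLT_EVAL_METHOD` in `<float.h>`, which tells you whether intermediate results may be held in a wider format than their declared type (the classic x87 80-bit excess-precision problem). It doesn't catch everything (library trig functions, compiler reordering), but it flags the most common source of cross-machine drift. On gcc, `-ffloat-store` or `-mfpmath=sse` are the usual workarounds. The function name here is mine.

```c
#include <float.h>

/* Returns 1 if the compiler may evaluate float/double expressions in a
   wider format than their declared type (e.g. x87 80-bit registers),
   which would break bit-exact reproducibility across machines.
   FLT_EVAL_METHOD: 0 = evaluate in the operand's own type,
   1 = promote float to double, 2 = promote everything to long double,
   -1 = indeterminable. */
static int may_use_excess_precision(void)
{
#ifdef FLT_EVAL_METHOD
    return FLT_EVAL_METHOD != 0;
#else
    return 1;  /* pre-C99 compiler: assume the worst */
#endif
}
```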
ruxvilti'a

  • pest
"Optimal" Predictor
Reply #4
I've recently realised that the full solution to this problem leads to complex
algorithms involving learning functions and neural networks.
My brute-force solution is to iterate with a bisection method
to a local minimum of the quadratic error and use this learning rate
for the complete block. This should (!) be better than nothing.
Good to know I'm in the first semester 
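A sketch of that per-block search (names are mine, and strictly it's a ternary-style interval search rather than true bisection, since we are minimising a cost instead of finding a root). In the codec, `cost(decay)` would be the block's quadratic prediction error for a given learning factor; here it is any caller-supplied unimodal function.

```c
/* Interval search for the minimiser of a unimodal cost over [lo, hi].
   Each iteration shrinks the bracket to two thirds of its length, so
   the result converges to the local minimum of the quadratic error. */
static double minimize_decay(double (*cost)(double),
                             double lo, double hi, int iters)
{
    for (int n = 0; n < iters; n++) {
        double m1 = lo + (hi - lo) / 3.0;
        double m2 = hi - (hi - lo) / 3.0;
        if (cost(m1) < cost(m2))
            hi = m2;   /* minimum lies left of m2  */
        else
            lo = m1;   /* minimum lies right of m1 */
    }
    return 0.5 * (lo + hi);
}

/* Purely illustrative cost with its minimum at decay = 0.9. */
static double demo_cost(double d)
{
    return (d - 0.9) * (d - 0.9);
}
```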
  • Last Edit: 21 November, 2006, 11:54:22 AM by pest