Topic: Yalac - Comparisons

Yalac - Comparisons

Reply #250
This thread is off-topic 50% of the bloody time! (see file format posts, and previous splits I've made)

I was leaving the Open Office discussion until it had run its course, especially as I am partly to blame.


Anyway, while I'm here: has anyone done much Damage testing yet?  Why does nobody post any results here anymore?  I like to see other people's work as well as my own.

I'm hoping to actually perform some Damage tests very soon, and was wondering what others have already tested...
I'm on a horse.

Yalac - Comparisons

Reply #251
I did a bit of damage testing.  Nothing too in-depth... just random damage.  It results in missing frames in the playback... just little gaps in the audio.

It might be interesting to wipe out the first or last bit of the file, and see if it still works without the headers and such intact.

Yalac - Comparisons

Reply #252
I'll ready my chainsaw tonight: I'll try doing some damage testing on the latest version of the encoder.  It probably won't be much, or very interesting, but I'm going to change the files by hand... Try to generate some valid compressed audio frames with mismatching CRCs, or something.  Maybe some obvious (cut-and-paste) steganography, too... (I know, obvious and steganography don't go together.)

It's been a while since I ran TAK...

Yalac - Comparisons

Reply #253
It might be interesting to wipe out the first or last bit of the file, and see if it still works without the headers and such intact.
Yes, I was thinking along these lines.  I've done some preliminary tests with Damage and it seems that Yalac/TAK can easily report and ignore the damage.

Let's see what happens when we chop its head and feet off...
I'm on a horse.

Yalac - Comparisons

Reply #254
Let's see what happens when we chop its head and feet off...

"head and feet"! Huh, I could not imagine that testers could be so cruel to innocent bits.
Really. I wasn't prepared.

But thanks for working carefully!

Possibly this test is less interesting for you than the earlier tests.

Even more thanks!

  Thomas

Yalac - Comparisons

Reply #255

Monkey, for instance, is using a range encoder for the compression of the prediction residuals. It is believed to be patent-free, but you cannot be totally sure.

The "range coder" publication is quite dated and until now nobody has claimed anything. I think it's a safe thing (I'd make use of it for any future codec). Besides, in Germany you don't need to worry about that unless you plan to earn money with it, IIRC. (People who want to make commercial use of your codec would have to worry, though.)

In Monkey's case the patent "danger" is in his hybrid Rice-range coder implementation, which I think is genuinely novel.  But because of submarining you never know.

I have done experiments years back with range coding in FLAC.  Range coding itself is for the most part believed to be patent-free.  As soon as you start using range coding for losslessly coding the residual of an audio signal, which includes a statistical modelling part, you're in unknown territory, and since it goes in the decoder that is even more worrying.

unfortunate.

Josh

Yalac - Comparisons

Reply #256
In Monkey's case the patent "danger" is in his hybrid Rice-range coder implementation, which I think is genuinely novel.  But because of submarining you never know.

My quite old implementation (from 2000) is quite similar to Monkey's. Possibly a bit more elaborate because of the use of more probability models from which the encoder can choose the best fitting one. For me that seemed to be a logical step if you have started with Rice coding and want to overcome its limitations.

Already two of us with the same idea, hence I guess it's quite probable that others had the same or a similar idea and patented it.

I have done experiments years back with range coding in FLAC.  Range coding itself is for the most part believed to be patent-free.  As soon as you start using range coding for losslessly coding the residual of an audio signal, which includes a statistical modelling part, you're in unknown territory, and since it goes in the decoder that is even more worrying.

unfortunate.


I had little success with adaptive models. Too slow (especially when decoding) and too little advantage over my approach with a limited set of static (order 0) models. But possibly I have not tried hard enough...
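To make the "limited set of static models" idea concrete, here is a minimal, hypothetical sketch (not Yalac's actual code) of an encoder costing each candidate model on a block and keeping the cheapest, with plain Rice parameter selection standing in for the real model set:

Code:
def zigzag(r):
    # Map a signed residual to an unsigned value: 0, -1, 1, -2, 2 ... -> 0, 1, 2, 3, 4 ...
    return (r << 1) if r >= 0 else ((-r << 1) - 1)

def rice_bits(residuals, k):
    # Exact bit cost of Rice-coding a block with parameter k:
    # unary quotient (quotient + 1 bits) plus a k-bit remainder per value.
    return sum((zigzag(r) >> k) + 1 + k for r in residuals)

def choose_model(residuals, max_k=14):
    # Cost every static model (here: every Rice parameter) on the block
    # and keep the cheapest; the encoder only has to store its index.
    costs = {k: rice_bits(residuals, k) for k in range(max_k + 1)}
    best_k = min(costs, key=costs.get)
    return best_k, costs[best_k]

residuals = [3, -1, 0, 7, -4, 2, -2, 1]
k, bits = choose_model(residuals)
print(f"best parameter k={k}, block costs {bits} bits")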

 

Yalac - Comparisons

Reply #257
A simple possible improvement with a range coder can be made when encoding the signs: you often need less than a bit if, for example, you test the current sign against the last one. I think that in your approach an adaptive model can improve on the current Rice codes, because the static LPC predictor often leaves some of the linearity behind. But as you already mentioned, it's a bit slow. Dmitry Subbotin has a fast public-domain range coder, by the way.
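A rough sketch of why correlated signs can cost less than one bit each: keep an adaptive estimate of P(current sign == previous sign) and sum the ideal code lengths. These probabilities would then drive a range coder (e.g. Subbotin's); the coder itself is not reproduced here, and the adaptation rate is just an assumption.

Code:
import math

def sign_cost_bits(signs, rate=32):
    # Adaptive binary model of P(current sign repeats the previous one).
    # Returns the ideal code length in bits; a range coder driven by these
    # probabilities would get close to this figure.
    p_same = 0.5
    prev = 1
    total = 0.0
    for s in signs:
        same = (s == prev)
        total += -math.log2(p_same if same else 1.0 - p_same)
        p_same += ((1.0 if same else 0.0) - p_same) / rate   # fast adaptation
        prev = s
    return total

signs = [1] * 20 + [-1] * 20 + [1] * 20   # run-heavy sign sequence
print(sign_cost_bits(signs) / len(signs), "bits per sign")   # well under 1.0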

Yalac - Comparisons

Reply #258
A simple possible improvement with a range coder can be made when encoding the signs: you often need less than a bit if, for example, you test the current sign against the last one.

You may be right. I tried it in the early days of Yalac, before I knew about arithmetic compression. Without arithmetic compression it wasn't advantageous, because I could not efficiently represent the quite small probability differences. But I will try it again with range coding. Thanks!

But I wouldn't expect more than about 0.3 percent better compression (best case).

But as you already mentioned, it's a bit slow. Dmitry Subbotin has a fast public-domain range coder, by the way.

Oh yes, it's fast! For my evaluations I just switched from Michael Schindler's implementation to this one. It's really fast and compresses my data only about 0.03 percent worse.

  Thomas

Yalac - Comparisons

Reply #259

A simple possible improvement with a range coder can be made when encoding the signs: you often need less than a bit if, for example, you test the current sign against the last one.

You may be right. I tried it in the early days of Yalac, before I knew about arithmetic compression. Without arithmetic compression it wasn't advantageous, because I could not efficiently represent the quite small probability differences. But I will try it again with range coding. Thanks!

But I wouldn't expect more than about 0.3 percent better compression (best case).

Well, a first quick and simple approach improves compression by about 0.04 percent...

Possibly it works better if only one LPC filter is being used. But Yalac currently sends the signal through up to 4 different filters. If you visually inspect the residuals after the last filter in an audio editor, you will not find many regularities. OK, a statistical analysis will still find some.
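Yalac's actual filters are not published; purely to illustrate the cascading idea, here is a sketch where each stage predicts the previous stage's residual, with trivial first-order differencing as a stand-in predictor:

Code:
def predict_stage(samples):
    # One very simple prediction stage: first-order differencing.
    # The first sample is passed through unchanged (block warm-up).
    return [samples[0]] + [samples[i] - samples[i - 1] for i in range(1, len(samples))]

def cascade(samples, stages=4):
    # Feed the residual of each stage into the next one (up to 4 stages, as described above).
    residual = list(samples)
    for _ in range(stages):
        residual = predict_stage(residual)
    return residual

signal = [100, 102, 105, 109, 114, 120, 127, 135]   # smooth ramp
print(cascade(signal, stages=2))   # apart from the warm-up, residuals shrink as each stage removes more structure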

Yalac - Comparisons

Reply #260
Without performance hits?  Every fraction of an inch gets us closer to REAL ULTIMATE POWER!

Tom, you're definitely the best. Even better than a ninja!

Yalac - Comparisons

Reply #261
Quote
Well, a first quick and simple approach improves compression by about 0.04 percent...


Is this the sign encoding? Do you use an adaptive model? If so, make sure the model adapts very fast.

I could post a very complex entropy coder, if you like... just to see how things could be done.

Yalac - Comparisons

Reply #262
Quote

Well, a first quick and simple approach improves compression by about 0.04 percent...


Is this the sign encoding? Do you use an adaptive model? If so, make sure the model adapts very fast.

I could post a very complex entropy coder, if you like... just to see how things could be done.


Thanks for your offer. I know a bit about adaptive modelling (I have played with it before).

But i doubt that i will use it in Yalac, because

- I am a bit dogmatic: no (complex) adaptation on the decoder side. Decoding should be as fast as possible. (From my experience, transmitting the model parameters would need too much space, therefore you have to perform the model adaptation in both the encoder and the decoder.)
- The residuals (and signs) after the last filter stage look quite random. If you are using only one prediction filter you will usually find regularities within the residuals (I agree with you), but as I wrote earlier, Yalac is using a cascade of up to 4 filters.

I guess it could be advantageous for some signals to limit Yalac's filter chain to only one filter and then apply the sign compression afterwards. I suspect that especially low-frequency signals could sometimes compress better this way than when using the whole filter chain without sign compression. This could be an interesting option.

But currently I am a bit too lazy to evaluate this possibility (which would need many changes to my code). Maybe later...

Not to forget: adaptive range coding could generally be very advantageous for low-level files (for instance classical music).

In the meantime i have tried two other optimizations:

1) Improved compression of the filter coefficients. Nice for files which benefit from medium or high predictor orders. Results from my primary test file set: TURBO 0.03 percent, FAST 0.06 percent, NORMAL 0.10 percent better compression.

This may not be much, but remember that I had thought about implementing (the slower) range encoding to get about 0.12 percent better compression...

This optimization will have no significant effect on decoding speed.

2) Evaluation of the PreFilter and looking for speed-ups. I wrote earlier that I did not fully understand why it sometimes helps. Now I know more about it.

The prefilter produced a dramatic compression improvement (more than 3 percent) on some of Joseph Pohm's special files. I could now implement a much faster filter which would achieve nearly the same compression advantage as the old implementation on those files.

But I see no chance for a general speed-up of the PreFilter for average files. Sorry...

Yalac - Comparisons

Reply #263
Thomas,

I would expect the DAMAGE command:

DAMAGE file.yaa -f a 1024 -p 1024 2048 -r $571ff

... to damage 1024 bytes within the range from byte 1024 to byte 2048, but only five bytes appear to be changed.

Is this because the 128-per-MB limit kicks in?

Also, the command line:

DAMAGE file.yaa -e i 24 r 01010101 i 16 r 1010 i 8 -f a 1024 -r $571ff

... fails with "Command line error: - expected".

What am I doing wrong?

If I want to damage right at the end how do I set the first -p value, i.e.: -p <?> -1 ?
I'm on a horse.

Yalac - Comparisons

Reply #264
I would expect the DAMAGE command:

DAMAGE file.yaa -f a 1024 -p 1024 2048 -r $571ff

... to damage 1024 bytes within the range from byte 1024 to byte 2048, but only five bytes appear to be changed.

Is this because the 128-per-MB limit kicks in?

That's exactly the reason...

Also, the command line:

DAMAGE file.yaa -e i 24 r 01010101 i 16 r 1010 i 8 -f a 1024 -r $571ff

... fails with "Command line error: - expected".

What am I doing wrong?

Nothing. It's my mistake. I made an error while parsing the error definition -e. Sorry! I just fixed the bug and will send you the new version soon.

If I want to damage right at the end how do I set the first -p value, i.e.: -p <?> -1 ?

Well, you should be able to specify an offset from the file end as the first parameter. Unfortunately I have not implemented it yet... I will look at it today. I wanted to reply first.

Thanks for testing this buggy little thing! Strange to find more errors here than in the average Yalac release.

  Thomas

Edit: I have changed the specification of the error positions. You can now specify negative values for positions relative to the file end:

-p -100 -50  Damage between FileSize - 100 and FileSize - 50
-p -100 -1   Damage between FileSize - 100 and FileSize - 1 (= FileEnd, because counting starts at 0)

Or should we write -99 0 instead of -100 -1?

Yalac - Comparisons

Reply #265
Thanks for the response Thomas.

I look forward to the new version of Damage.  I have to say though, I'm finding it hard to come up with anything to foil the Yalac decompressor so far.

0 or -1?  I guess 0 is OK as (I assume) it cannot be used as a starting position.  I.e.: I assume positive numbers run from 1 to n, where n is the filesize.

So -p 1 1024 would be the first 1024 bytes and -p -1023 0 would be the last...

Hmm, looking at it like that -p -1024 -1 looks a better format.

Code:
   1   2      3    .... n-2  n-1  n
  -n -(n-1) -(n-2) .... -3   -2  -1

Then you would say -p 1 100 is the first one hundred bytes and -p -100 -1 is the last hundred bytes.  To me that seems easier to work with... I think. 
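A tiny sketch of the indexing convention proposed above, purely as an illustration (this is not Damage's actual behaviour): positive 1..n counts from the file start, negative -n..-1 from the file end.

Code:
def resolve_position(i, file_size):
    # Map the proposed -p index to a 0-based byte offset:
    # 1 is the first byte, -1 the last; 0 is not a valid position in this scheme.
    if i > 0:
        return i - 1
    if i < 0:
        return file_size + i
    raise ValueError("0 is not a valid position")

n = 4096
print(resolve_position(1, n), resolve_position(100, n))     # 0 99      -> first 100 bytes
print(resolve_position(-100, n), resolve_position(-1, n))   # 3996 4095 -> last 100 bytes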
I'm on a horse.

Yalac - Comparisons

Reply #266
Thanks for the response Thomas.

That's the least I can do!

I just sent you V1.01. I hope it works better now, although I did it in a hurry...

I look forward to the new version of Damage.  I have to say though, I'm finding it hard to come up with anything to foil the Yalac decompressor so far.

Very good news! Thanks.

0 or -1?  I guess 0 is OK as (I assume) it cannot be used as a starting position.  I.e.: I assume positive numbers run from 1 to n, where n is the filesize.

So -p 1 1024 would be the first 1024 bytes and -p -1023 0 would be the last...

Hmm, looking at it like that -p -1024 -1 looks a better format.

Code:
   1   2      3    .... n-2  n-1  n
  -n -(n-1) -(n-2) .... -3   -2  -1

Then you would say -p 1 100 is the first one hundred bytes and -p -100 -1 is the last hundred bytes.  To me that seems easier to work with... I think. 

Sorry, it's a bit different now (at least for this version):

-p 0 99 selects the first 100 bytes,
-p -99 0 selects the last 100 bytes.

Yalac - Comparisons

Reply #267
TBeck, I was curious about whether or not there is some way to send a signal to the audio player (e.g. foobar) if errors in the file are detected, so that the user could be alerted that there is a problem with the file.

I know that foobar will pop up an error console reporting some kinds of sync errors and such, but is there a way for the actual decoder to communicate with the player software so that such errors could be reported explicitly and logged?  This could be handy for weeding out bad files and such.

Yalac - Comparisons

Reply #268
Has anyone heard from TBeck lately?  I was just wondering if there have been any updates to this project.  It's been quite a while...

Yalac - Comparisons

Reply #269
Has anyone heard from TBeck lately?  I was just wondering if there have been any updates to this project.  It's been quite a while...

Finding a job can take some time.  Let's just hope his project hasn't vanished into a void without the community being able to continue his work.

Yalac - Comparisons

Reply #270
Has anyone heard from TBeck lately?  I was just wondering if there have been any updates to this project.  It's been quite a while...


Quote
Finding a job can take some time.  Let's just hope his project hasn't vanished into a void without the community being able to continue his work.


Thanks for asking!

I am still alive!

And I have been working on Yalac. And I had to make some decisions.

There has been no new release because I wasn't sure if the small improvements would justify the work of the testers. And finally I came to the conclusion that it's now really time to stop looking for more improvements. It's possible that I will have some new ideas sometime, but obviously I cannot force them.

V0.11 will be the last version with new compression methods. Some further improvements can be achieved by optimizing the encoder parameters of the existing methods. But this is not a high priority for me.

What will be new in V0.11:

- Better compression of the filter coefficients. Can give 0.1 percent better compression for preset NORMAL, a bit less for the other presets. A tiny bit, but damn fast.
- New Wasted-Bits option to remove the least significant bits of the samples if they are all zero. Especially useful if 16-bit samples have been converted to 24-bit by simply shifting in 8 zero bits. To my surprise even some 16-bit files benefit from this option. That's strange but nice, because each wasted bit usually improves compression by up to 6 percent for 16-bit files (one bit out of 16 is roughly 6 percent of the raw data). See the small sketch after this list.
- Presets switched back to V0.09 and somewhat modified.
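The wasted-bits idea (also found in FLAC) is to detect trailing zero bits shared by every sample in a block, shift them out before prediction, and record the shift count for the decoder. A minimal sketch of the detection, assuming block-wise processing (not Yalac's actual implementation):

Code:
def wasted_bits(samples):
    # Count trailing zero bits common to every sample in the block --
    # bits the encoder can shift out and the decoder can shift back in.
    mask = 0
    for s in samples:
        mask |= s
        if mask & 1:
            return 0              # some sample already uses its lowest bit
    if mask == 0:
        return 0                  # all-zero (silent) block: handle separately
    wasted = 0
    while mask & 1 == 0:
        mask >>= 1
        wasted += 1
    return wasted

# 16-bit audio padded into 24-bit containers by shifting in 8 zero bits:
block = [s << 8 for s in (-123, 456, -789, 1011)]
print(wasted_bits(block))   # -> 8; the encoder codes sample >> 8 and stores the shift count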

I have tried far more optimizations but with little success.

What i am doing now:

- Removing code from the encoder which was only needed for the evaluation of new compression methods.
- Cleaning up and simplifying (where possible) the source code.
- Looking for design errors.

Next things to do:

- Support for 8-bit samples (is anyone using them?)
- Tweaking of some parameters for 24-bit samples and sampling rates other than 44.1 kHz.

After this has been done i will have to complete the file format (container, support for metadata).

Possibly I will publish the specification of my simple container format and ask for feedback before finalizing it.

All this will take some time, especially because it looks as if I will have far less time than before for my work on Yalac. Please don't expect a public release within the next 1 or 2 months. Sorry...

If one of the testers wants to evaluate the performance of the final encoder engine, I will release a V0.11.

  Thomas

Yalac - Comparisons

Reply #271
Quote
- New Wasted-Bits option to remove the least significant bits of the samples if they are all zero.


I guess this is done automatically?

Yalac - Comparisons

Reply #272
I see Josef is lingering at this moment, and he's probably the best candidate, but I am happy to run a test on the latest version, as it takes very little user time (something I have little to none of at the moment) for me to run my scripts and report the results.

While I'm on: I was very frustrated that I got little time to test the error tolerance of 0.10, and never reported any results (everything I did test behaved as expected, BTW).  Did you get enough feedback from other testers?
I'm on a horse.

Yalac - Comparisons

Reply #273
Quote

- New Wasted-Bits option to remove the least significant bits of the samples if they are all zero.

I guess this is done automatically?

Are you talking about uncompressed audio formats? For Windows WAVE this isn't true. I don't know if there are more exotic formats which do it.

If you are talking about other lossless compressors: if I remember right, FLAC always has this option enabled, and MPEG-4 ALS does too if mode -7 has been selected.

Yalac - Comparisons

Reply #274
Quote
Are you talking about uncompressed audio formats? For Windows WAVE this isn't true. I don't know if there are more exotic formats which do it.

If you are talking about other lossless compressors: if I remember right, FLAC always has this option enabled, and MPEG-4 ALS does too if mode -7 has been selected.



A bit of a misunderstanding here. I only wanted to know if you have to specify the wasted bits manually, or if Yalac searches for them itself, as in other compressors like MPEG-4 ALS.