Dire Straits - Brothers in Arms
584,178,044 bytes, duration 55:11
========================================================================
name/params            Ratio   EncTime/CPU%   DecTime/CPU%
---------------------  ------  -------------  -------------
Yalacc 0.06 -p0        46.15%  63.92x / 62%   79.90x / 46%
Yalacc 0.06 -p0 -c1    46.15%  67.31x / 66%   84.69x / 48%
Yalacc 0.06 -p1        45.70%  33.38x / 95%   84.46x / 55%
Yalacc 0.06 -p1 -c1    45.70%  33.60x / 95%   83.59x / 55%
Yalacc 0.06 -p2        45.41%  11.45x / 99%   82.29x / 61%
Yalacc 0.06 -p2 -c1    45.41%  11.84x / 99%   84.19x / 60%
Yalacc 0.06 -p3        45.34%   4.37x / 99%   82.61x / 59%
Yalacc 0.06 -p3 -c1    45.34%   4.47x / 99%   81.72x / 60%
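(For anyone wondering how to read the columns: the "x" figure appears to be a realtime multiple, i.e. audio duration divided by wall-clock time, and the "%" figure the CPU share of that wall-clock time. A quick sketch with hypothetical timings -- the wall-clock and CPU seconds below are assumed for illustration, not measured values:)

```python
# Hypothetical sketch of how a figure like "63.92x / 62%" could be derived.
# The wall-clock and CPU times below are assumed, not measured.
audio_seconds = 55 * 60 + 11        # album duration 55:11 from the header

enc_wall_seconds = 51.8             # assumed wall-clock time for the encode
enc_cpu_seconds = 32.1              # assumed CPU time used by the process

speed_multiple = audio_seconds / enc_wall_seconds            # realtime multiple
cpu_percent = 100.0 * enc_cpu_seconds / enc_wall_seconds     # CPU utilisation

print(f"{speed_multiple:.2f}x / {cpu_percent:.0f}%")  # -> 63.92x / 62%
```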
@Synthetic Soul: I may simplify this table to make it more readable. Perhaps the kernel and user percentages are unnecessary for this kind of testing?
I was intending to just look at Process (CPU only) and Global (CPU + IO) values.
I found that using YALACC's -c1 switch had an impact on encoding for all profiles, but most interesting was that passing -c1 on the command line for decoding usually had a positive effect. I'm not sure whether -c1 is even valid for decoding, but I re-ran the decoding process twice and the differences are measurable.
Yalacc 0.06 -p0        46.15%  63.92x / 62%   79.90x / 46%
Yalacc 0.06 -p0 -c1    46.15%  67.31x / 66%   84.69x / 48%
rw             Fast    Normal    High     Extra
Compression    0.12     0.20     0.12     0.17  %
Encode        -6.59    -1.96    20.77   -27.65  %
Decode        -0.60    -6.68   -13.70   -10.25  %

songs          Fast    Normal    High     Extra
Compression    0.15     0.24     0.15     0.22  %
Encode        -3.00    -1.91    25.54   -29.06  %
Decode        -2.54    -7.35   -12.63   -11.44  %
Quote (Jun 5 2006, 04:28, post 399563):
I would change the way the coefficients are stored only for fast and normal. For high, extra and insane, I would activate the prefilter.
Quote (Jun 5 2006, 04:50, post 399571):
Ah! I didn't think about the more complex code issue; in that case, maybe you could forget the prefilter altogether? Or put it only in extra and insane, yes. Or, in any case, make it toggleable in the other modes (if you ever need to restrict certain options...). I'm sure you will find the right solution -- you seem to be so dedicated and thorough.
Edit: Just learned that there is a difference between "lightens" and "enlightens"...
I've written numerous responses to this now Thomas, but keep contradicting myself and going around in circles.
Absolute values for test set rw

Enco-Rate     Fast    Normal    High    Extra
V0.07        37.75    15.74     6.45     3.95
V0.08a       37.93    15.63     6.01     2.69
V0.09        37.76    15.41     7.87     3.94

Deco-Rate     Fast    Normal    High    Extra
V0.07        68.61    58.37    51.81    53.67
V0.08a       69.46    57.83    43.79    46.63
V0.09        67.62    53.47    45.43    41.10

Compression   Fast    Normal    High    Extra
V0.07        57.31    56.71    56.36    56.27
V0.08a       57.31    56.71    56.20    56.10
V0.09        57.19    56.51    56.24    56.02

Comparisons for test set rw (in percent)

V0.08 vs V0.07    Fast    Normal     High     Extra
Compression       0.00     0.00      0.16      0.17
Encode            0.48    -0.70     -6.82    -31.90
Decode            1.24    -0.93    -15.48    -13.12

V0.09 vs V0.07    Fast    Normal     High     Extra
Compression       0.12     0.20      0.12      0.25
Encode            0.03    -2.10     22.02     -0.25
Decode           -1.44    -8.39    -12.31    -23.42

V0.09 vs V0.08    Fast    Normal     High     Extra
Compression       0.12     0.20     -0.04      0.08
Encode           -0.45    -1.40     28.84     31.65
Decode           -2.65    -7.54      3.75    -11.86
Quote (Jun 5 2006, 21:09, post 399818):
Somehow, I feel the speed loss is great compared to the compression gain; efficiency might not be an issue here.
Quote (Jun 5 2006, 21:09, post 399818):
Could you also post results with IO speeds taken into account? (antivirus disabled, if you have one) I think IO results will be necessary before we are able to judge the validity of the modifications and decide whether they are worth it or not...
- The most important disadvantage of V0.09 may be the 8.39 percent penalty for decoding on NORMAL.
Now, returning to your previous discussion...
I would be very interested to know what you could do, if given license to drop Normal's compression ratio by 0.2-0.3%, and Fast's by 0.5-0.7%. I don't know whether you have enough variables to achieve this, i.e. whether past improvements have been code improvements that simply can't be undone for speed gains. If you do have some switches that you could simply turn off to favour speed over compression, I would be interested to hear your thoughts on the possibilities.
My main concerns at the moment are Fast and Normal - especially Normal, considering that it should strive to be the best balance of speed and compression that Yalac can offer, IM(H)O.
...I guess all I can do is reiterate that I would gladly see Normal and Fast lose some compression in order to speed up encoding and decoding (well, depending on the benefit gained)... It seems moving to parcor has too many benefits to ignore, but it is possible that you could now make other changes that would create a better spread between Fast and Insane... I would be very interested to know what you could do, if given license to drop Normal's compression ratio by 0.2-0.3%, and Fast's by 0.5-0.7%... If you did have some switches that you could simply turn off to favour speed over compression I would be interested to hear your thoughts on the possibilities though.
NB: Thomas, if you would like any of this split to a new thread just let me know. This thread does seem to sway off-topic, and some of this is more relevant to the thread "Yalac – Evaluation and optimization", which is sadly neglected... (poor thing).
It would make my code far more complex if I used both representations of the predictor coefficients.
In this context I am not too happy that Yalac's NORMAL often compresses a bit worse than Monkey's NORMAL. Therefore I would like the 0.20 percent improvement provided by the parcor coefficients. Possibly this is the most important (psychological) advantage of the parcor coefficients for me...
There is not much that can be done to speed up FAST (at least not without big changes to my code): a decrease of the predictor order from 32 to 8 reduces compression by about 0.80 percent and provides about 25 percent faster encoding.
But a quick check for NORMAL looks more promising: setting the partition search level from normal to fast reduces compression by only about 0.06 percent and provides about 25 percent faster encoding. But don't forget my statements above: I would prefer NORMAL to compress a bit better! Ok, one more decision... Another option would be the reduction of the maximum predictor order from 128 to 96. V0.09 will give access to 8, 96 and 192 predictors. Probably HIGH will use only 192 predictors in the future.
I thought about it too. But is it really practicable? We need the comparison results for our discussion. Another approach: we open another thread, where only the comparisons (of any version, or only of the latest) are being posted.
Thanks for your time and patience Thomas.
Quote from: TBeck on 04 June, 2006, 10:42:22 PM
It would make my code far more complex if I used both representations of the predictor coefficients.

Could you use a representation compatible with both normal and parcor coefficients?
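(For context: direct-form LPC and PARCOR (reflection) coefficients describe the same predictor, and the Levinson "step-up" recursion converts one into the other, so storing one representation doesn't preclude reconstructing the other at decode time. A minimal sketch of the recursion -- this is not Yalac's actual code, and sign conventions differ between implementations:)

```python
def parcor_to_lpc(k):
    """Step-up recursion: convert reflection (PARCOR) coefficients k[0..p-1]
    into direct-form LPC predictor coefficients a[0..p-1].

    Convention assumed here: prediction x_hat[n] = sum(a[j] * x[n-1-j]).
    """
    a = []
    for i, ki in enumerate(k, start=1):
        # Update the first i-1 taps, then append k_i as the new last tap:
        # a_new[j] = a[j] + k_i * a[i-2-j]
        a = [a[j] + ki * a[i - 2 - j] for j in range(i - 1)] + [ki]
    return a

# Example: order-2 predictor from reflection coefficients 0.5 and 0.25
print(parcor_to_lpc([0.5, 0.25]))  # -> [0.625, 0.25]
```

The inverse (step-down) recursion recovers the reflection coefficients from the LPC taps, which is why a codec can pick whichever form quantises or stores more compactly.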