
lossyWAV 1.2.0 Development Thread

Reply #175
I don't think there should be any vertical lines.

The way I assume you've coded it, when you transition from a block with some bits to remove into a block with zero bits to remove, I think you'll still get a vertical line (click) - unless you carry the noise shaping on into that block. You could try that - it should hopefully have zero effect after the first four samples, but you'd need to check.
Will attempt - may take some time.
In the part shown, there are a few vertical lines even when transitioning between blocks which both have some bits removed. I'm not sure why that would happen.
Probably due to me scaling before noise-shaping - I'll work on that.
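
For reference, here is a minimal sketch of what "carrying the noise shaping on" into the next block could look like - first-order error feedback in Python, with the filter state passed between blocks rather than reset. The names and first-order structure are illustrative only, not lossyWAV's actual code:

Code:
import numpy as np

def requantize_block(samples, bits_to_remove, err):
    # samples: integer PCM for one block; err: shaper state carried over
    # from the previous block, so the feedback decays naturally across a
    # boundary instead of being cut dead (the cut is what clicks).
    step = 1 << bits_to_remove            # bits_to_remove == 0 -> step of 1
    out = np.empty_like(samples)
    for i, x in enumerate(samples):
        shaped = x - err                  # subtract the previous quantisation error
        out[i] = (shaped // step) * step  # drop the low bits
        err = out[i] - shaped             # error fed to the next sample
    return out, err                       # hand err on to the next block

With this first-order shaper the carried error dies within a sample or two of a zero-bits block; with a higher-order shaper it should decay within the first few samples, as suggested above.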
lossyWAV -q X -a 4 -s h -A --feedback 2 --limit 15848 --scale 0.5 | FLAC -5 -e -p -b 512 -P=4096 -S- (having set foobar to output 24-bit PCM; scaling by 0.5 gives the ANS headroom to work)

lossyWAV 1.2.0 Development Thread

Reply #176
You are a perfectionist!

(I view that as a good thing).

Cheers,
David.

lossyWAV 1.2.0 Development Thread

Reply #177
It's been put to me that it's more a form of Obsessive Compulsive Disorder... probably why I'm an Engineer by profession.
lossyWAV -q X -a 4 -s h -A --feedback 2 --limit 15848 --scale 0.5 | FLAC -5 -e -p -b 512 -P=4096 -S- (having set foobar to output 24-bit PCM; scaling by 0.5 gives the ANS headroom to work)

lossyWAV 1.2.0 Development Thread

Reply #178
lossyWAV beta 1.1.2c attached to post #1 in this thread.
lossyWAV -q X -a 4 -s h -A --feedback 2 --limit 15848 --scale 0.5 | FLAC -5 -e -p -b 512 -P=4096 -S- (having set foobar to output 24-bit PCM; scaling by 0.5 gives the ANS headroom to work)

lossyWAV 1.2.0 Development Thread

Reply #179
Thanks for testing lossyWAV 1.1.2b. Now things look pretty fine again. Did you use just -S (i.e. --standard) with noise shaping left at its default (no -s or --shaping option)?

BTW it seems that when describing how to ABX with foobar I wasn't precise enough on one point: you decide before testing how many trials to do, for instance 8 (the minimum), 10 or 16. Once you have made up your mind, the number of trials is fixed. If you fail the ABX test you are of course free to do another one, but when reporting your ABX results you should mention these failed tests.

So far it looks like the noise shaping 'clicks' caused the issue. Thanks, 2Bdecided, for bringing up this problem, and Nick.C for the immediate fix.


Halb27 - thanks again.
I used --standard --shaping 0.
In both runs the lossy file is lossyWAV 1.1.2b.
I'm concerned that I may not be quite on the same page on this.

Yes, maybe we should pre-determine an agreed-upon fixed number of trials per testing session, the weight and need of a 2nd (or further) session, and 'rules' for them.

Although that should provide some balance - deciding what that balance IS, is not that easy.

I just realized that even interpreting these results can go different ways and needs attention, so here is my view on it:

As you can see from the SECOND run log, even the first 14 or so trials - which were pretty bad on my part, obviously the result of fatigue from the prior 50(!)-trial run - did not influence the global result, even though they halved my 'score' on the 2nd run:

45/80 (15.7%) [pure 80-trial run] vs. 50/94 (30.3%)

(This is my fault for being new to foobar and not realizing that RESET does NOT start a new testing session, but includes the failed results in the outcome.)


So now comes the question of how you interpret the results.
I might be totally wrong in my interpretation, so please feel free to correct me.

As I told you, AB tests are not new to me, only foobar tests.
So I'll explain my view of it, from my experience, just to be clear and make sure we are all on the same page here:

The sole reason for me to up the trials per session from 20, to 50, to 80 is to have fatigue and focus play a bigger role, and indeed they do.

The logs also show it better than I can explain in words.

From the real hardware shootouts I used to take part in: keeping a positive score gets harder and harder as fatigue, loss of focus, etc. set in.

So we have to treat testing 'sessions' as the time period during which you can truly evaluate.

The 1st session posted is pretty clear - I'm quite in sync with things and focused enough to tell these apart.

Total: 31/50 (5.9%)

But actually this is not the important result; the 2nd run is the important one:

45/80 (15.7%) or 50/94 (30.3%)

On the surface these numbers show a lower score.
In reality they are equal or better, as they still show positive identification after a prior ~25-minute, 50-trial test - which should be enough to put a serious dent in a tester's ability to differentiate between the two with any success at all.

So a 2nd session would need at least an 80% failure rate to begin threatening the validity of the first session.

This is obviously not the case here.
The 2nd run only confirms that the result of the 1st run is indeed valid.

Now, about doing a 2nd session at all:

This is NOT a good idea in general, as the odds of detecting a difference at all have already shrunk considerably, because the 2nd session is conducted 10 or fewer minutes after the first.

On top of that, I almost doubled the trials per session in the 2nd session, as you can see.

(If anyone has a different interpretation of the results I'd like to hear it and learn, as statistics is not my field.)

Quote
botface Posted Today, 06:03
Bork,
In an earlier post you mentioned "reduced overtones" and "midrange thickness" as well as noise. Do your latest listening test and ABX results indicate that these problems have gone away too?


I tend to assume the extra distortion in the highs was not that obvious (to me) this time, so either --shaping 0 and/or the corrections made to lossyWAV were the reason.

The "reduced overtones" and reduced "midrange thickness" and weight are still here.

lossyWAV 1.2.0 Development Thread

Reply #180
You know, when you add those two results together, you get p<0.078...
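
For anyone wanting to reproduce figures like this: a one-sided binomial test on the pooled trials, sketched in Python below. That the two results being added are the 31/50 and 50/94 logs above is my reading of the numbers.

Code:
from math import comb

# Pooled ABX results (assumed: the 31/50 and 50/94 logs quoted earlier)
n = 50 + 94    # 144 trials in total
k = 31 + 50    # 81 of them correct

# Probability of scoring at least k of n by pure guessing (p = 0.5 per trial)
p_value = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
print(f"{k}/{n}: p = {p_value:.3f}")   # roughly 0.078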

lossyWAV 1.2.0 Development Thread

Reply #181
You know, when you add those two results together, you get p<0.078...


I don't understand what that means, so I'll have to take your word for it 

lossyWAV 1.2.0 Development Thread

Reply #182
lossyWAV beta 1.1.2d attached to post #1 in this thread.
lossyWAV -q X -a 4 -s h -A --feedback 2 --limit 15848 --scale 0.5 | FLAC -5 -e -p -b 512 -P=4096 -S- (having set foobar to output 24-bit PCM; scaling by 0.5 gives the ANS headroom to work)

lossyWAV 1.2.0 Development Thread

Reply #183
... The sole reason for me to up the trials per session from 20, to 50, to 80 is to have fatigue and focus play a bigger role, and indeed they do. ...

Because of these things, and the extreme pain of such an ABX test, I think it should be left to the tester to a large extent how to conduct the test.
But even with this in mind I think the way you do it is not totally correct. Your last tests look a bit like continuing until you have reached a rather long sequence of more or less uninterrupted hits, yielding a rather low guessing probability. While this still gives information that you can hear a difference (and may be enough for our needs here - but I'm not sure), it is not totally convincing. The longer your test, the more likely it is that such a sequence of hits happens by chance. You shouldn't try and try until a guessing probability is reached that you consider low enough: using this as a stopping criterion is a way of giving your test results a positive bias.

I'd feel more comfortable if you ensured good listening conditions in another way, for instance by doing a long warm-up listening to just A and B before starting the real X/Y guesses. And/or - as you seem to feel more comfortable with long test sessions - decide on a long test sequence of, say, 40 trials or whatever you like best. Use whatever listening conditions suit you best. But the number of trials should be fixed.
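
A small simulation - my sketch, nothing to do with lossyWAV itself - makes the stopping-criterion point concrete: a "listener" who is purely guessing, but keeps going until the displayed probability drops below 5%, will "pass" far more often than 5% of the time.

Code:
import random
from math import comb

def p_value(k, n):
    # one-sided binomial p-value: chance of k or more hits in n guesses
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

def guesser_stops_when_happy(max_trials=80, alpha=0.05, min_trials=8):
    hits = 0
    for n in range(1, max_trials + 1):
        hits += random.random() < 0.5           # coin-flip answers
        if n >= min_trials and p_value(hits, n) < alpha:
            return True                         # quits as soon as the score looks good
    return False

runs = 2000
fp = sum(guesser_stops_when_happy() for _ in range(runs))
print(f"{fp / runs:.0%} of pure guessers reach p < 0.05")  # well above the nominal 5%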

Other than that, thanks again for testing. We now know you didn't use noise shaping to get your last results, and that this at least gives better results than the noise shaping of 1.1.2.
We don't know, however, whether Nick.C's noise shaping fix improves the situation.
I know it's hard, but can you please test your samples using Nick.C's current version (EDITED:) 1.1.2d and leave noise shaping at its default?
lame3995o -Q1.7 --lowpass 17

lossyWAV 1.2.0 Development Thread

Reply #184
I know it's hard, but can you please test your samples using Nick.C's current version 1.1.2c and leave noise shaping at its default?
v1.1.2d, please.
lossyWAV -q X -a 4 -s h -A --feedback 2 --limit 15848 --scale 0.5 | FLAC -5 -e -p -b 512 -P=4096 -S- (having set foobar to output 24-bit PCM; scaling by 0.5 gives the ANS headroom to work)

lossyWAV 1.2.0 Development Thread

Reply #185
Wow, you're so fast. Sorry I missed your post.
lame3995o -Q1.7 --lowpass 17

lossyWAV 1.2.0 Development Thread

Reply #186
Halb27, no need to ask me twice to give up on the longer sessions.
I doubt they could become a 'standard' anyway, and now that you mention it, I do think they are a bad idea (my reasons are the opposite though - I think they tend to have a negative bias, and they're too painful, but never mind).

Maybe a range of 20 minimum to 40 maximum trials per session should be the agreed-upon standard? You tell me.

Will try the new v1.1.2d version (thanks Nick!) with the default noise shaping, --standard.

lossyWAV 1.2.0 Development Thread

Reply #187
Maybe a range of 20 minimum to 40 maximum trials per session should be the agreed-upon standard? You tell me.

You can choose whatever number of trials is most appropriate for you (apart from the minimum of 8 trials there is no other restriction), but please choose a definite number in advance and go through exactly that number of trials.
Will try the new v1.1.2d version (thanks Nick!) with the default noise shaping, --standard.

Thanks a lot. You are really moving things forward.
lame3995o -Q1.7 --lowpass 17

lossyWAV 1.2.0 Development Thread

Reply #188
IIRC, it's also valid to instruct foobar to hide ABX results until finished. So each time you make a choice on a trial, you don't know whether it was correct until you tell it you are done with the whole session. This way, you can end the session whenever you like (after the minimum 8, of course) and are not suspected of stopping only after a satisfactory p value is achieved.

lossyWAV 1.2.0 Development Thread

Reply #189
I have done a simple experiment:
I compressed a file pcm.wav (60 MB in size) using wavPack (switch -b200) and got a compressed lossy file of 8.5 MB. Then I unpacked the compressed file, obtaining unpacked.wav, and compressed it again using wavPack with lossless compression. The resulting file occupied 40 MB! Both times the same compressor was used, and exactly the same uncompressed information is contained in both files! Please answer, as simply as possible:
1. Where does such a difference in the sizes of the compressed files come from?
2. As I understand it, lossyWAV always uses the second variant of compression, i.e. lossless compression of the preprocessed wav file? Would it be possible to create a maximally optimized, integrated compressor which compresses directly to a lossy compressed file, with quality as high as now but a much higher compression ratio (as with wavPack lossy)?

In my opinion, to popularize lossyWAV it is absolutely necessary to make a simple GUI and place it on the same wikipedia page as lossyWAV for download! The GUI could offer the following options:
make *.lwc (Lossy Wav Compressed)
make *.lws (Lossy Wav Stored), plus a checkbox "preserve *.lossy.wav extension for lossyWAV stored file"

lossyWAV 1.2.0 Development Thread

Reply #190
.. As I understand it, lossyWAV always uses the second variant of compression, i.e. lossless compression of the preprocessed wav file? ...

Yes.
... with quality as high as now but a much higher compression ratio (as with wavPack lossy)? ...

At the high-efficiency end (~320 kbps and below) wavPack lossy is expected to provide the better quality.
A disadvantage of lossyWAV is that the lossless codec has to use short blocks of 512 samples, which has a negative impact on the lossless codec's efficiency.
At very high quality settings (~450 kbps and above) lossyWAV currently provides the better theoretical justification for quality, though this does not necessarily mean that quality is really better. In fact, at such a high bitrate wavPack lossy provides excellent results as well.
A real advantage of lossyWAV over wavPack lossy is that it's future-safe, in the sense that the final lossless codec's output can later be losslessly transcoded to any other lossless codec.
lame3995o -Q1.7 --lowpass 17

lossyWAV 1.2.0 Development Thread

Reply #191
Nick,

1.1.2d solves the "clicks" completely.

Unfortunately, running the noise shaping across the zero bits_to_remove blocks doesn't work properly. It's not a bug in the code - the problem is that the noise shaping gets stuck in limit cycles, where the output values cycle endlessly for a given set of input values rather than decaying to zero. This means that the difference between the original signal and the lossy version is, for example, a 1-LSB 22.05kHz tone.

There are a few solutions. The easiest is to skip the zero bits_to_remove blocks; the downside is a "click" at the end of the previous block. The best solution I can think of is gradually transitioning away from the noise shaping during the first 10 or so samples of the zero block, by cross-fading the coefficients (or the output) to zero, then resetting and restarting it at the next non-zero block.
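
Something along these lines, perhaps - a rough Python sketch of the cross-fade idea; the coefficients, names and structure are illustrative, not lossyWAV's actual code:

Code:
import numpy as np

# Illustrative 4-tap error-feedback coefficients (not lossyWAV's real set)
COEFFS = np.array([1.62, -1.22, 0.66, -0.18])

def fade_into_zero_block(block, fifo, fade_len=10):
    # block: integer samples of a zero-bits_to_remove block.
    # fifo: the last len(COEFFS) quantisation errors from the previous block.
    # Keeps the noise-shaping feedback running but ramps it to zero, so there
    # is neither a hard cut (click) nor a sustained limit cycle.
    out = block.copy()
    for i in range(min(fade_len, len(block))):
        gain = 1.0 - i / fade_len                  # ramps 1.0 -> 0.0
        feedback = gain * float(np.dot(COEFFS, fifo))
        out[i] = int(round(block[i] - feedback))   # no bits are removed here
        err = out[i] - (block[i] - feedback)       # only rounding error remains
        fifo = np.concatenate(([err], fifo[:-1]))  # shift the error FIFO
    return out, np.zeros_like(fifo)                # reset; restart at the next non-zero block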

I still think none of this has anything to do with the ABX results posted here.

Cheers,
David.

lossyWAV 1.2.0 Development Thread

Reply #192
Yay!

Hmmm.... This would presumably also happen if a block had some signal followed by digital silence?

Would it still happen if --shaping 0.9 (for example) is used as this may(?) gradually attenuate the noise-shaping related differences?

Ah - problem - the FIFO values are not in the range -1..1; they will be in the range -(2^bits_to_remove)/2 .. +(2^bits_to_remove)/2 - 1. This could cause strange effects when a block with a large bits_to_remove value is immediately followed by a block with zero bits_to_remove!
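
One possible way to handle that - purely my sketch, not anything implemented: rescale the stored errors to the new quantisation step before the next block runs, so the feedback magnitude matches the new step size.

Code:
def rescale_fifo(fifo, old_btr, new_btr):
    # Stored errors are on the scale of the old step (2**old_btr).
    # Shrink (or grow) them to the new step's scale so the feedback
    # isn't huge when bits_to_remove drops sharply (e.g. to zero).
    scale = 2.0 ** (new_btr - old_btr)
    return [e * scale for e in fifo]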
lossyWAV -q X -a 4 -s h -A --feedback 2 --limit 15848 --scale 0.5 | FLAC -5 -e -p -b 512 -P=4096 -S- (having set foobar to output 24-bit PCM; scaling by 0.5 gives the ANS headroom to work)

lossyWAV 1.2.0 Development Thread

Reply #193
lossyWAV beta 1.1.2e attached to post #1 in this thread.
lossyWAV -q X -a 4 -s h -A --feedback 2 --limit 15848 --scale 0.5 | FLAC -5 -e -p -b 512 -P=4096 -S- (having set foobar to output 24-bit PCM; scaling by 0.5 gives the ANS headroom to work)

lossyWAV 1.2.0 Development Thread

Reply #194
halb27, thanks for the answer! But you only say that lossyWAV always uses blocksize=512Kb (I understand that). My main question is why the sizes of the two files compressed by wavPack differ so much:
1. wavPack lossy, produced from the original (pcm.wav - a rip from CD-DA).
2. wavPack lossless, produced by compressing the wav file that was unpacked from the wavPack lossy file.

The issue is not only the blocksize! I have just convinced myself of this:
1. First I took the same pcm.wav (60 MB) and compressed it with wavPack at the desired bitrate of 200 kbps, but this time specified a fixed blocksize for the whole file. For this I used the following command line:
wavpack -b200 --blocksize=512 pcm.wav bs=512.wv
The resulting bitrate turned out to be 262 kbps, and the file size 11 MB.
2. Then I unpacked bs=512.wv into bs=512unpack.wav.
3. After that I compressed bs=512unpack.wav using lossless wavPack, again specifying the fixed block size:
wavpack --blocksize=512 bs=512unpack.wav bs=512_lossless.wv
This time the file size came to 46 MB, even though in both cases blocksize=512. Why is it four times larger?

If wavPack with a fixed blocksize=512 shows such an essential difference in file size depending on whether lossless or lossy compression is used, then if lossyWAV could not only process a file but also compress it directly, then for exactly the same quality (bit-identical when unpacked to *.wav) we could get a compressed file 4 times smaller than now (when we use wavpack, flac, tak, etc. with lossyWAV)! And nothing would prevent unpacking this compressed file in the future without any loss of quality and packing it again with any other new lossless compressor compatible with lossyWAV.

lossyWAV 1.2.0 Development Thread

Reply #195
halb27, thanks for the answer! But you only say that lossyWAV always uses blocksize=512Kb (I understand that)...

512 wave samples, not 512 kb.
... I used the following command line:
wavpack -b200  --blocksize=512  pcm.wav  bs=512.wv ...

wavPack works inefficiently with a blocksize of 512 samples. You should let wavPack decide on the block size.
... My main question is why the sizes of the two files compressed by wavPack differ so much:
1. wavPack lossy, produced from the original (pcm.wav - a rip from CD-DA).
2. wavPack lossless, produced by compressing the wav file that was unpacked from the wavPack lossy file. ...

Lossless codecs work like this: they make a prediction of the next sample based on the previous samples. The prediction follows a fixed scheme and so doesn't need any information to be stored (except for blockwise prediction parameters, depending on the codec). The prediction error is usually small and is stored exactly, in an efficient way. With the lossy variant of wavPack the prediction error is not stored exactly but only approximated, with your given bitrate as a control parameter for the size of prediction error allowed. When you pass your wavPack lossy result to wavPack lossless, wavPack has to store the exact prediction error of your wavPack lossy source, and there is no principle that makes this a more efficient procedure than storing the prediction error of the original source (a toy example follows below).
wavPack lossy + wavPack lossless is a useless procedure, and totally different from lossyWAV + a lossless codec, where the codec can make efficient use of the reduced number of significant bits per block.
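
A toy illustration of the predict-and-store-residual idea, using the simplest possible predictor, 'next sample = previous sample' - no real codec is this crude, and the names are mine:

Code:
def encode(samples):
    # Store each sample as the error of a previous-sample prediction.
    # For correlated audio the residuals are small and entropy-code well.
    prev, residuals = 0, []
    for x in samples:
        residuals.append(x - prev)
        prev = x
    return residuals

def decode(residuals):
    # Prediction + stored error reconstructs every sample exactly.
    prev, samples = 0, []
    for r in residuals:
        prev += r
        samples.append(prev)
    return samples

assert decode(encode([3, 5, 6, 6, 4])) == [3, 5, 6, 6, 4]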
lame3995o -Q1.7 --lowpass 17

lossyWAV 1.2.0 Development Thread

Reply #196
@Nickc.......

It seems that one or two people have recently discovered Lossywav and raised questions. There's also been a bit of development activity lately. That has refreshed a thought I had some time ago but never got round to raising.

Basically I am a big fan of lossyWAV and would like to see it more widely used. However, in the thread it describes itself as "Added noise WAV bitdepth reduction method". Now, I know this is a true and accurate description, but I'm sure that if I came across lossyWAV now and saw it described thus, I would think it of no interest to me. Why would I possibly want to add noise to my music?

I was wondering if changing the description to something like "Dynamic WAV bitdepth reduction method" or "Variable WAV bitdepth reduction method" might encourage some people to delve a little deeper and hopefully find that lossyWAV is a good solution for them, rather than perhaps being put off by the not very flattering current description. I don't think descriptions like the ones I'm suggesting are inaccurate or misleading; they just don't point out the only negative aspect of lossyWAV. In any case I've never heard any additional noise from lossyWAV at --portable or above, so why mention it?

lossyWAV 1.2.0 Development Thread

Reply #197
Hi Nick,

You're so quick!

There are still a few limit cycles. Some have gone (I guess they were caused by internal values hanging around that were too big?), but some remain.

Like I said, I don't think it's a bug - it's what happens when you put low levels through high order recursive filters.

Cheers,
David.

lossyWAV 1.2.0 Development Thread

Reply #198
@botface

A reasonable description is "near lossless audio coding".

Of course it invites the reasonable question "what is near lossless?", or even the comment "it's either lossless or lossy - you can't have near lossless" - that's fine. It's a fair thing to ask/say, and the answer is also reasonable:

lossless (e.g. FLAC, ZIP) = mathematically lossless
lossy (e.g. mp3) = hopefully sounds similar / the same but might not stand further processing, nowhere near mathematically lossless
near lossless (e.g. lossyWAV) = hopefully sounds the same and stands further processing; is as near to mathematically lossless as the algorithm thinks is necessary (e.g. very quiet sections = mathematically lossless, louder/complex sections = nowhere near mathematically lossless)

Cheers,
David.

lossyWAV 1.2.0 Development Thread

Reply #199
@BORK,

Do you have time to try -I --shaping 0?

Cheers,
David.