Topic: Listening test using 2013-03-09 build

Listening test using 2013-03-09 build

Reply #25
I definitely don’t disagree, and I can appreciate how useful that is for a developer. I see and agree with all your points about bit-comparison being used to determine that files are either identical or not, but that seems to be about as much as the technique can reveal, and I would like to think that this use should be easy to work out from first principles.

In contrast, as I said, I was asking for examples beyond ‘this≠that’, in reply to bawjaws’ comment about “comparative quality”: that is, whether bit-comparing can provide any other information.

Whether or not that was exactly what bawjaws meant, this tangent started because kabal4e attempted to comment on the performance of Opus by offering statistics from a bit-comparator, albeit without specifying what was compared to what, while claiming to acknowledge that such a method has no useful relation to hearing, or to the complex workings of lossy encoding, yet feeling that posting it somehow remained appropriate anyway. As I’ve said before in reference to ‘I know this isn’t valid, but’ type arguments, that just seems like an attempt to ‘have your cake and eat it’: make a point that might run contrary to the rules, or just to basic principles, while securing immunity from that discord by acknowledging that it might exist… it doesn’t make sense, does it?

Listening test using 2013-03-09 build

Reply #26
Quote
I definitely don’t disagree, and I can appreciate how useful that is for a developer. I see and agree with all your points about bit-comparison being used to determine that files are either identical or not, but that seems to be about as much as the technique can reveal, and I would like to think that this use should be easy to work out from first principles.


What I'm trying to say is that kabal4e's comment that most samples were bit-identical *is* useful. It tells me that the change I made to fix a corner case indeed only impacts corner cases because the majority of the time it's not triggered at all. That *is* more useful than "no audible difference". There's comparing quality and there's "let's figure out what's going on here". Let's not confuse the two.
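
For anyone who wants to see what that kind of check amounts to in practice, here is a rough sketch (purely illustrative, not anyone's actual test harness): encode a corpus with the two builds, decode both results, and count how many files come back with bit-identical PCM. The build paths, bitrate and sample directory are placeholders, and it assumes opusenc/opusdec are available on the path.

```python
# Sketch only: count how many files two encoder builds handle bit-identically.
import hashlib
import subprocess
import tempfile
import wave
from pathlib import Path

OLD_BUILD = "./opusenc-old"                      # hypothetical paths to the two builds
NEW_BUILD = "./opusenc-new"
SAMPLES = sorted(Path("samples").glob("*.wav"))  # assumed test corpus

def decoded_pcm_hash(encoder: str, src: Path) -> str:
    """Encode at 64 kbps, decode back with opusdec, hash the raw PCM frames."""
    with tempfile.TemporaryDirectory() as tmp:
        opus = Path(tmp) / "t.opus"
        back = Path(tmp) / "t.wav"
        subprocess.run([encoder, "--bitrate", "64", str(src), str(opus)],
                       check=True, capture_output=True)
        subprocess.run(["opusdec", str(opus), str(back)],
                       check=True, capture_output=True)
        with wave.open(str(back), "rb") as w:
            return hashlib.sha256(w.readframes(w.getnframes())).hexdigest()

identical = sum(decoded_pcm_hash(OLD_BUILD, f) == decoded_pcm_hash(NEW_BUILD, f)
                for f in SAMPLES)
print(f"{identical}/{len(SAMPLES)} files decode bit-identically")
```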

Listening test using 2013-03-09 build

Reply #27
Quote
What I'm trying to say is that kabal4e's comment that most samples were bit-identical *is* useful. It tells me that the change I made to fix a corner case indeed only impacts corner cases because the majority of the time it's not triggered at all. That *is* more useful than "no audible difference". There's comparing quality and there's "let's figure out what's going on here". Let's not confuse the two.

Hi all,

Yes. That's what I was trying to say. Thank you, Jean-Marc, for translating my ESOL into something clearer ))
What happened was I tried ABX-ing one track encoded with the 2013.03.12 and 2013.03.13 builds of opus-tools at 64 kbps. I failed to spot the difference reliably and then didn't save the ABX log, which is not unusual for me. However, just out of curiosity, I ran the foobar2000 ReplayGain scanner over the lossless original and the two encoded files; all track gains were identical and there was only a small difference between the two encoded files' peak values. Then I used the foobar2000 bit-compare tool, which showed a bit difference between the two encoded tracks of only 25-50% and a maximum difference between sample values of approx. 0.25.
To sum up: I couldn't ABX the difference, the track gains were identical, there was only a slight full-track peak difference, up to 50% of sample values matched exactly, and the maximum difference between sample values was 0.25. That's why I said there was no difference.
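
For illustration only, the kind of numbers the bit-compare tool reports can be approximated over two already-decoded files like this; it is not foobar2000's actual code, the filenames are placeholders, and it assumes both files are 16-bit PCM of the same length:

```python
# Sketch: fraction of differing samples, peak difference, and track peaks.
import wave
from array import array

def read_samples(path: str) -> array:
    """Read a WAV file's PCM frames as signed 16-bit samples."""
    with wave.open(path, "rb") as w:
        assert w.getsampwidth() == 2, "sketch assumes 16-bit PCM"
        data = array("h")
        data.frombytes(w.readframes(w.getnframes()))
        return data

a = read_samples("decoded_build_a.wav")   # placeholder filenames
b = read_samples("decoded_build_b.wav")

diffs = [abs(x - y) for x, y in zip(a, b)]
differing = sum(d != 0 for d in diffs)
print(f"samples differing: {100.0 * differing / len(diffs):.1f}%")
print(f"peak difference:   {max(diffs) / 32768:.4f}  (full scale = 1.0)")
print(f"track peaks:       {max(abs(s) for s in a) / 32768:.6f}"
      f" vs {max(abs(s) for s in b) / 32768:.6f}")
```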

The main thing is that I never said I used only the bit-comparison tool to compare the tracks. I will try to attach the ABX report in future; however, if a person desperately wants to prove some point, what stops him from faking the ABX log?

Listening test using 2013-03-09 build

Reply #28
I do apologise if I misread anything or underestimated the usefulness of such reports to a developer!

But here’s the inevitable ‘however’…

Quote
To sum up: I couldn't ABX the difference, the track gains were identical, there was only a slight full-track peak difference, up to 50% of sample values matched exactly, and the maximum difference between sample values was 0.25. That's why I said there was no difference.
You could, and probably should, just have stopped after the ABX test. Ranking encodings based upon the statistics output by a bit-comparator is only slightly informative at best and potentially misleading at worst. Besides, if you can’t tell the difference, does it matter how many small divergences might have been introduced by the lossy encoding process?
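
For reference, the reason the ABX result can stand on its own is that it carries its own statistics: the probability of scoring that well purely by guessing. A minimal sketch of that standard binomial calculation (not the output of any particular ABX tool):

```python
# One-sided binomial probability of an ABX score arising from pure guessing.
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """P(at least `correct` right out of `trials` by chance, p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# e.g. 12/16 correct gives p ≈ 0.038 (hard to explain by guessing),
# while 9/16 gives p ≈ 0.40 (entirely consistent with guessing).
print(abx_p_value(12, 16), abx_p_value(9, 16))
```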

Quote
The main thing is that I never said I used only the bit-comparison tool to compare the tracks. I will try to attach the ABX report in future; however, if a person desperately wants to prove some point, what stops him from faking the ABX log?
If someone wants to cheat, they’ll find a way. They always do. That doesn’t mean people who want to promote proper practices should abandon their principles just because some people might be dishonest. I could apply that to plenty of contexts in life, but then I’d be getting boring. Anyway, for reference, there has been discussion here about possible ways – and, for all I know, since I didn’t follow it, perhaps even the release of tools – to make ABX logs ‘cheat-proof’; so, whilst I don’t think the current vulnerability is any reason for the rest of us to stop promoting such testing with the presently available methods, you might find those previous posts interesting.
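
As a footnote on the ‘cheat-proof’ idea: I don’t know what those discussions settled on, but the generic approach is for the testing tool itself to attach a keyed checksum to the log it writes, so that a later edit no longer verifies. A toy sketch of that idea, with entirely hypothetical key handling:

```python
# Toy tamper-evident log: the ABX tool signs the log text with a keyed hash.
import hashlib
import hmac

TOOL_KEY = b"secret-held-by-the-tool-or-a-server"   # hypothetical key handling

def sign_log(log_text: str) -> str:
    """Keyed checksum appended by the tool when it writes the log."""
    return hmac.new(TOOL_KEY, log_text.encode(), hashlib.sha256).hexdigest()

def verify_log(log_text: str, signature: str) -> bool:
    """Check that the log text still matches its checksum."""
    return hmac.compare_digest(sign_log(log_text), signature)

log = "ABX trial results: 12/16 correct\n"
sig = sign_log(log)
print(verify_log(log, sig))                       # True
print(verify_log(log.replace("12", "16"), sig))   # False: the edit is detected
```

The hard part, of course, is where that key lives and who is trusted to verify it, which is presumably what those earlier discussions were about.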