

One year of listening tests: a cross-comparison

I have run several blind tests and comparisons over the last few months. A few of them were based on more than 100 samples, but most used the same reduced set: 40 samples for music, sometimes with 20 additional samples for audiobooks & movies.
Out of curiosity I decided to gather all scores in one table. I discarded some redundant columns (for example, when I compared Exhale SBR and non-SBR, I only kept the best setting for each bitrate). I also removed the group of 20 extra samples in order to include two tests at 64 kbps and 80 kbps.
Opus at 24 and 48 kbps, and Exhale at 96 kbps, were tested twice with the same samples; I kept all of that data in the table.




► At low bitrate, FhG xHE-AAC looks extraordinary. I made a direct comparison with Opus at the same bitrates and I really preferred FhG on most samples. But in this cross-comparison, FhG xHE-AAC at 32 kbps looks better than even Opus 48 and Exhale 48 (with SBR). While that's not totally impossible, I admit it's unlikely. I should have run this test with a mid anchor to ensure better accuracy and perhaps prevent a possible overrating.
► Opus 24 was tested twice and scored 1.74 the first time, then 1.56. The difference (0.18) is not big, but it is significant. I think the reason is simple: Opus at 24 kbps got a better score when compared against Opus 12 kbps (which is really bad); in the second test, Opus 24 kbps was explicitly used as the low anchor, hence the lower score.

► Opus 48 kbps was also tested twice: 2.88 and 2.79 (a 0.09 difference). The same goes for Exhale at 96 kbps: 4.48 and 4.39 (also 0.09). My scorings are consistent, and that makes me happy :-)
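As a side note, one way to sanity-check whether a retest difference like these is meaningful is a paired comparison over the per-sample scores. A minimal pure-Python sketch of a paired t-statistic (the scores below are made up for illustration, not the real test data, which is in the linked threads):

```python
import math

def paired_t(scores_a, scores_b):
    """Mean difference and paired t-statistic for two runs over the same samples."""
    assert len(scores_a) == len(scores_b)
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance of the diffs
    se = math.sqrt(var / n)                              # standard error of the mean diff
    return mean, mean / se

# Hypothetical per-sample scores for two runs of the same test:
run1 = [1.9, 1.6, 1.8, 1.7, 1.5, 2.0, 1.8, 1.6]
run2 = [1.7, 1.5, 1.6, 1.4, 1.4, 1.8, 1.6, 1.5]
mean_diff, t = paired_t(run1, run2)
print(f"mean difference = {mean_diff:.3f}, t = {t:.2f}")
```

A large t (compared against a t-distribution with n−1 degrees of freedom) means the two runs really do differ, while a small t means the gap is within normal scoring noise.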

► At the top of the graph, LC-AAC (Apple) at 128 kbps (CBR) sits a bit alone: Exhale at 96 kbps is the only contender that is statistically tied with it.



Source:

HE-AAC 48 (Apple): https://hydrogenaud.io/index.php?topic=120081.0

LC-AAC 128 (CBR Apple): https://hydrogenaud.io/index.php?topic=120081.0

Opus 12: https://hydrogenaud.io/index.php?topic=120997.0
Opus 24¹: https://hydrogenaud.io/index.php?topic=120997.0
Opus 24²: https://hydrogenaud.io/index.php?topic=121099.msg998706;topicseen#new
Opus 32: https://hydrogenaud.io/index.php?topic=120997.0
Opus 48¹: https://hydrogenaud.io/index.php?topic=120081.0
Opus 48²: https://hydrogenaud.io/index.php?topic=121099.msg998706;topicseen#new
Opus 96: https://hydrogenaud.io/index.php?topic=121099.msg998706;topicseen#new

xHE-AAC Exhale 48: https://hydrogenaud.io/index.php?topic=118888.msg997431#msg997431
xHE-AAC Exhale 64: https://hydrogenaud.io/index.php?topic=118888.msg997431#msg997431
xHE-AAC Exhale 80: https://hydrogenaud.io/index.php?topic=118888.msg997434#msg997434
xHE-AAC Exhale 96¹: https://hydrogenaud.io/index.php?topic=120997.0
xHE-AAC Exhale 96²: https://hydrogenaud.io/index.php?topic=121099.msg998706;topicseen#new

xHE-AAC FhG 12: https://hydrogenaud.io/index.php?topic=120997.0
xHE-AAC FhG 24: https://hydrogenaud.io/index.php?topic=120997.0
xHE-AAC FhG 32: https://hydrogenaud.io/index.php?topic=120997.0
xHE-AAC FhG 96: https://hydrogenaud.io/index.php?topic=121099.msg998706;topicseen#new

Re: One year of listening tests: a cross-comparison

Reply #1
And to finish with graphs, here is a cross-comparison of all tested bitrates with Opus 1.3.1 (60 samples: music AND speech)…




…followed by a cross-comparison of tests including Exhale xHE-AAC (40 samples: music only):




Scores and rankings are really consistent from one test to another :)

Cheers!

Re: One year of listening tests: a cross-comparison

Reply #2
I am happy to see you perform so many meticulous listening tests, to the point that a detailed meta-analysis like this is possible.

You can use "%feature 8" to put braces below the chart.

Code: [Select]
opus 12	fhg 12	opus 24¹	opus 24²	fhg 24	opus 32	fhg 32	opus 48¹	opus 48²	he-aac 48	exhale 48	exhale 64	exhale 80	opus 96	exhale 96¹	exhale 96²	fhg 96	aac cbr128
%feature 8 12kbps 12kbps 24kbps 24kbps 24kbps 32kbps 32kbps 48kbps 48kbps 48kbps 48kbps 64kbps 80kbps 96kbps 96kbps 96kbps 96kbps 128kbps



Hope it will help.

Sadly, my graphmaker 6 still cannot handle any N/A (a sample not tested with a given encoder), nor does it support any multilevel analysis besides "By Genre", and even that rarely aids visual discovery.
I should have used a relational model internally, but through a technological mistake I made ten years ago, I employed a simple two-dimensional array.
Maybe it's time to bring a major update to my graphmaker. It will take many months.
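The relational idea can be sketched briefly: storing each result as a (sample, encoder, score) row makes N/A a non-issue, because an untested combination simply has no row. A minimal illustration (the names and scores are hypothetical, not graphmaker's actual internals):

```python
# Long ("relational") format: one row per tested (sample, encoder) pair.
# An untested combination has no row, so no N/A sentinel is needed.
results = [
    ("sample01", "opus 48",   2.9),
    ("sample01", "exhale 96", 4.5),
    ("sample02", "opus 48",   2.7),
    # sample02 was never tested with exhale 96: no row, no problem.
]

def scores_for(encoder):
    """All scores recorded for one encoder, however many samples were tested."""
    return [score for _, enc, score in results if enc == encoder]

def mean_score(encoder):
    """Mean over whatever samples exist for this encoder, or None if untested."""
    scores = scores_for(encoder)
    return sum(scores) / len(scores) if scores else None

print(mean_score("opus 48"))
print(mean_score("exhale 96"))
```

With a fixed two-dimensional samples × encoders array, every missing cell needs a sentinel value that each analysis must remember to skip; in the long format, grouping and filtering handle the holes automatically.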

Re: One year of listening tests: a cross-comparison

Reply #3
Thanks for the tip! Very easy indeed, and much more eye-friendly :D

Re: One year of listening tests: a cross-comparison

Reply #4
@guruboolez
Excellent comparison! :)
I see that Exhale @96k delivers pretty much high-quality performance, with only one sample scored at 3.8. Otherwise it's annoyance-free (above 4.0).
Probably good enough to use regularly as a high-quality portable option. Impressive! :)
