Topic: New Listening Test (Read 107080 times)

New Listening Test

Reply #175
Did you guys read point 6 of the EULA?

Quote
You may not disclose the results of any benchmark tests of the software to any third party without Microsoft’s prior written approval


Wait a minute, does this mean no more listening tests with WM codecs?

This wasn't in the WMP10 EULA, was it?  Does an ABC/HR test count as a "benchmark test"?

New Listening Test

Reply #176
Does this version of WMA 10 use SBR+PS? It sounds very good and is supported by hardware players.

WMA10Pro+, which is usable in WMP11 ONLY and not in WME or 3rd-party apps, does use it (or something close to it, judging by the SBR+PS artifacts and the fact that its 32 kbps 44.1 kHz stereo files play back as 22 kHz mono on my Pocket PC).
WMA10Pro standard, which is available in WME and 3rd-party apps and is compatible with previous hardware devices and the WMA9Pro decoder, doesn't use SBR+PS-type technology as far as I can tell; it simply uses "other" high-end preservation tools which don't act quite like SBR+PS and retain compatibility with previous decoders.
Copy Restriction, Annulment, & Protection = C.R.A.P. -Supacon

New Listening Test

Reply #177
Wait a minute, does this mean no more listening tests with WM codecs?


No, it means no listening test with WMP 11 Beta or Vista Beta. I guess the reason is that the software is still beta and MS doesn't want to get a bad image if something doesn't work right or in a suboptimal way in these versions.

New Listening Test

Reply #178
Did you guys read point 6 of the EULA?
...
You may not

* disclose the results of any benchmark tests of the software to any third party without Microsoft’s prior written approval;

Then the benchmark results would have to be released by someone who did not accept the WMP11 EULA.

New Listening Test

Reply #179
It says "to any third party". So the person doing the benchmark should not have accepted the WMP11 EULA either. (IOW - someone else has to encode the WMA files and then hand them over).

Although, I don't agree with using beta software in a listening test if the developers do not give their consent.
"We cannot win against obsession. They care, we don't. They win."

New Listening Test

Reply #180
Thing is,

Some people on this forum who, I believe, sounded like "insiders" (or maybe even more than that) recommended waiting for this very codec. So it would be very unfortunate if Microsoft did not grant Sebastian the right to test the WMA10Pro+ codec.

It would be a very valuable codec for the 2006 low-bitrate codec shootout.

New Listening Test

Reply #181
I agree it would be unfortunate if WMA could not be tested. But I would find it even more unfortunate if a codec was tested that had an apparent bug in it. That would be bad for both sides: it is unnecessary bad advertising for the codec (OK, if it's not good, it's not good, but each codec should at least have a fair chance), it reduces the value of the test, and it would be a waste of time.
"We cannot win against obsession. They care, we don't. They win."

New Listening Test

Reply #182
I remember Woodinville asking to wait for the WMA Vista codec for Sebastian's 128 kbps test. I don't know if that carries any weight, but to me it sounded like the codec is mature enough.

Also, Microsoft did a PR piece quoting independent testing-lab results claiming quite competitive quality for the WMA10+ codec.

That still does not mean, of course, that the codec is finished, but to me personally it sounded like it really was: if not completely finished, then at least at a state of good stability.

New Listening Test

Reply #183
Sorry to ask this, but what do you mean by PR?


@Sebastian : have a look at this...

New Listening Test

Reply #184
Public Relations. MS issued marketing material claiming that an independent test lab found the new WMA codec to be better than HE-AAC.

I think this was discussed here at HA.

New Listening Test

Reply #185
Thanks kurtnoise, Ivan already told me about that thread. I lost my Doom9 password (I always use 32 random character PWs) and the bloody forum doesn't want to send me the reset mail.

Anyway, I am not sure how much help it would be to have the e-mail addresses of the right people, since I already mailed the WMA and licensing division and they told me that they are discussing the issue internally. I haven't received an update since. So I am afraid that if I write another e-mail, I might get the same reply and that's it.

New Listening Test

Reply #186
Download the new Windows Media Player 11, install it, then rip from CD to WMA Pro at 64 kbps. The sound of the new Windows Media Audio 10 Professional codec at 64 kbps is GREAT!


New Listening Test

Reply #188
You might find that this line stays in the EULA after the release of the full WMP 11, i.e. if you have done a test that MS does not agree with, then you cannot publish it.

Even with this beta version, I doubt such a post would be illegal. If the user had signed an NDA it might hold true, but not with a click-through EULA (of a public beta); the majority of items in a EULA that take away rights which are normal in a given country will not be enforceable.

New Listening Test

Reply #189
The question from my side would then be whether "hosting" the result of a listening test which violates that license is in itself illegal (I for sure never agreed to the EULA on WMP11). Any pointers or precedents? I know Oracle pulls the same sh*t with their database.

The thing is that I know that: ... So my desire to stick my neck out for this is entirely nonexistent.

New Listening Test

Reply #190
Hosting test results cannot be a violation of a license that you did not accept, IMO.
To me the workaround is simple: the person conducting the test should not accept this EULA, and should instead ask someone else to provide the WMA encoded/decoded samples.

New Listening Test

Reply #191
Well, Ivan managed to get in touch with someone on D9 who is trying to help out now by talking to some MS people who are responsible for this. Anyways, I will include WMA in the test and then hope to get a permission to publish the results. If not, shit happens.

I would really like to conduct the test myself, but I agreed to the EULA already. Also, even if someone else conducted the test and gathered the encoded samples from a third person, what if the person who did the encoding screwed something up? I really don't think it's a good idea for two or more people to do the preparation job.



New Listening Test

Reply #193
Also to be true MUSHRA, there must be at least one anchor, and the recommendation for that first anchor is 3.5 kHz lowpassed.  After all, the procedure is termed MUlti Stimulus test with Hidden Reference and Anchors.  I think this part is probably more important than the 100-interval scale or the labels or maybe even eliminating the side-by-side reference for each codec, because the anchors are what standardizes the ratings.

I wonder if a current 48 kHz test isn't becoming too good for the original intent of MUSHRA?

ff123
My original intent was to get a better scale than the current one. I already explained why I dislike the usual 1...5 one (in short: the scale is too small and the corresponding descriptions aren't well balanced for my own taste). In that perspective MUSHRA was interesting, but I didn't mean that we have to religiously follow all MUSHRA recommendations. Anyway, previous experience proves that MUSHRA can't pertinently be used « as is » for listening tests involving HE-AAC. A 3.5 kHz lowpass as the obligatory low anchor against contenders like HE-AAC (where the lowpass is often at or above 16 kHz) is close to nonsensical. And the high anchors (7.5 kHz lowpassed) were also rated significantly inferior to HE-AAC encodings; that defeats the purpose of high anchors!
Examples:
http://www.ebu.ch/en/technical/trev/trev_283-kozamernik.pdf
http://www.telos-systems.com/techtalk/host..._AAC_paper).pdf
and also one previous test from Roberto using the same anchors


My idea would be to think about and finally decide about a new, well-balanced and universal scale for the upcoming listening tests rather than follow existing recommendations.
A quick conversion should explain why the current one (1-5) is a problem. If you convert it to a 100% scale, it will look exactly like this:

Code:
                              CURRENT SCALE   100% SCALE
transparent                        5.0           100%
perceptible but not annoying       4.0            75%
slightly annoying                  3.0            50%
annoying                           2.0            25%
very annoying                      1.0             0%
75% for something that is not annoying?! A mediocre ranking [50%: the middle of the scale] for something that is just slightly annoying? Does that sound balanced to you?
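The linear mapping behind that table can be sketched in a few lines (a minimal illustration of the arithmetic, not part of any test procedure):

```python
def to_percent(score: float) -> float:
    """Map a grade on the 1..5 impairment scale linearly onto 0..100%."""
    return (score - 1.0) / 4.0 * 100.0

# Reproduces the table above: 5.0 -> 100%, 4.0 -> 75%, ..., 1.0 -> 0%
for score in (5.0, 4.0, 3.0, 2.0, 1.0):
    print(f"{score:.1f} -> {to_percent(score):.0f}%")
```

The 75% figure for "perceptible but not annoying" falls straight out of this linear stretch, which is exactly the imbalance being complained about.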

1/ I would personally say that « perceptible but not annoying » is a state, not a level, and that it barely admits nuance. And for harsh, demanding or intolerant people, the sole existence of such a state may even be nonsensical: every audible difference starts to be annoying (this point of view is defensible).
In my own perception, the « perceptible but not annoying » state would correspond to very subtle distortions I can hear with high-bitrate lossy encodings (160 kbps or more) under specific conditions (like meticulous ABX trials) and should be at 95% of a 100% scale. Maybe 90% for rounding purposes, but certainly not 75% as in our current 1-5 scale.

2/ The « slightly annoying » state would correspond to artefacts or distortions I can hear without too much effort but that don't make my hackles rise. To give a concrete example, it may correspond to LAME -V5 encodings: they're not completely transparent; I don't need two hours to successfully ABX them; but the quality is pretty good despite the few audible differences I can perceive. On a 100% scale it would correspond to 75...80% (with some nuances: 85% for distortions that are nearly non-irritating; 70% for artefacts that are a bit more than slightly annoying).
Would it be a scandal if collective listening tests ended with LAME VBR 130 kbps at 80...85% of full quality? I don't think so.

=> my own vision of the top of a balanced scale would shift our current scale by one step (4.0 = slightly annoying and 4.8 = perceptible but not annoying).

If everybody agrees with it, we could follow:
100% = transparent /unABXable encoding
90% = perceptible but not annoying
80% = slightly annoying
70% = slightly annoying

60% = annoying but decent
50% = annoying but decent

40% = unpleasant
30% = very unpleasant
20% = bad/very bad
10% = very bad
0% = chaos, total mess

or something like that (someone with a better English vocabulary may refine it).
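For illustration only, the proposed scale could be expressed as a simple threshold lookup. The band boundaries below are my own guesses at how the labels would partition 0...100%; they are hypothetical and not specified anywhere in the post:

```python
import bisect

# Hypothetical upper bounds between the labels of the proposed 0..100% scale.
UPPER_BOUNDS = [5, 15, 25, 35, 45, 65, 85, 95]
LABELS = [
    "chaos, total mess",
    "very bad",
    "bad/very bad",
    "very unpleasant",
    "unpleasant",
    "annoying but decent",
    "slightly annoying",
    "perceptible but not annoying",
    "transparent / unABXable",
]

def label_for(percent: float) -> str:
    """Return the descriptive label for a score on the proposed scale."""
    return LABELS[bisect.bisect_right(UPPER_BOUNDS, percent)]

print(label_for(90))  # perceptible but not annoying
print(label_for(50))  # annoying but decent
```

Under this sketch, the example ratings below (HE-AAC 64 kbps at 50-55%, etc.) land in sensible bands, which is the coherence argument being made.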

That way, I could imagine as possible something like that:
MP3 ~192 kbps = 95...100%
MP3 ~160 kbps = 90%
MP3 ~130 kbps = 80%
OGG ~100 kbps =70%
OGG ~80 kbps = 60%
HE-AAC 64 kbps = 50-55%
HE-AAC 32 kbps = 40-45%
WMA 64 kbps = 25-30%
etc...

It's of course my own vision (or imagination). But it gives me a coherent position and a coherent distance among different coding solutions at different bitrates, something I can't get with the current scale (HE-AAC would obtain 2.0 "annoying", but then I don't have enough room left to coherently rate a low anchor (3.5 kHz) and WMA 48 kbps: the low anchor should get 0, but the minimum is 1/5).
Wavpack Hybrid: one encoder for all scenarios
WavPack -c4.5hx6 (44100Hz & 48000Hz) ≈ 390 kbps + correction file
WavPack -c4hx6 (96000Hz) ≈ 768 kbps + correction file
WavPack -h (SACD & DSD) ≈ 2400 kbps at 2.8224 MHz


New Listening Test

Reply #195
Definitely classic samples  not modern.

Mmm, doesn't Vorbis handle classical samples better than the other codecs? I have the impression that Vorbis tends to be very effective at encoding clear tones, but fails to handle noise correctly (at least below q5).
Infrasonic Quartet + Sennheiser HD650 + Microlab Solo 2 mk3. 

New Listening Test

Reply #196
Can we please keep the discussion about the listening test and not discuss opinions that require one?

@Sebastian

Perhaps for clarity it would be better to start a new thread with an appropriate title (e.g. "upcoming 48 kbps listening test - discussion"). I think it would draw refreshed attention to it.
"We cannot win against obsession. They care, we don't. They win."


New Listening Test

Reply #198
Hosting test results cannot be a violation of a license that you did not accept, IMO.
To me the workaround is simple: the person conducting the test should not accept this EULA, and should instead ask someone else to provide the WMA encoded/decoded samples.


Sounds good, unless the beta eula also forbids distributing encoded music.

Alternate ending: compile the results. If Microsoft doesn't allow showing them, put it this way:

"Microsoft has seen how WMA stacks up against the others, and doesn't want it revealed."

New Listening Test

Reply #199
I am definitely not going to use such tricks and stuff like person X is conducting the test, Y is encoding the samples and Z is publishing the results. WMA is going to be included and results will be disclosed if MS agrees and if not, shit happens.

What I think sucks is that they still have the WMA vs. HE-AAC page up where they tell people how great WMA performed, but now they won't let anyone prove it. Also, someone from MS said in a newsgroup that these beta versions are work-in-progress products and cannot be compared to final versions; yet using these betas to show off against HE-AAC is perfectly fine. How fair is that?