Maybe there should be some extra testing of GT2 vs. post-1.0-CVS Vorbis at 128 kbps before deciding which one to use for the big 128 kbps codec shootout.

dev0
AACenc = Sorenson ?
            General  Personal  Previous
QuickTime   4.42     3.7       4.2
Ahead Nero  3.81     1.4       2.1 *
PsyTEL      4.25     2.9       1.8
Sorenson    4.26     2.5       2.5
FAAC        3.92     2.0       --- **
Just a little typo in the last line: "Sorenson is good, but it's price is prohibitive." It should be "its", which is possessive.
This may have been addressed a million times elsewhere, but how is AACenc illegal? Is this the encoder used with PsytelDrop? I have it but don't recall where I downloaded it. Didn't realize it was illegal. (Warez?) Or is it just a licensing issue?
The sample is from the song "#41" by the Dave Matthews Band, from the album Crash.
allmusic link: http://www.allmusic.com/cg/amg.dll?p=amg&u...l=A4c6tk6dxqkr
regards,
ilikedirt
Minor nitpick with track info: it should be "You've Got the Love" by The Source feat. Candi Staton. (Although it's a popular dance tune available on many compilations, I don't know if the track was ever featured on an actual album by The Source.)
So, what is Dolby's AAC codec? An FhG codec improved in both quality and speed, as the listening test results indicate?
1. I think classical and jazz could have been better represented.
2. It should be noted somewhere, probably in the recommendations section, that this was a CBR test only, and that Nero and Psytel also have VBR modes, which perform better, according to Guruboolez. You might link to his listening results.
3. The crack about people advertising for FAAC is unnecessary, and doesn't help you win over a certain enthusiast to participate in your next test.
4. You mention that you used an ANOVA analysis, but maybe you should also mention that this is different from what the 64 kbit/s test used. The similar presentation format might make people think that all the analysis was identical. The difference is mainly one of risk: the ANOVA / Fisher LSD method is more at risk of falsely identifying differences between codecs. On the other hand, it's more sensitive than the Tukey HSD.
5. I'm still uncomfortable with the squishy way that a summary graph is constructed. But since I can't think of a better way, and people have a need to see things in one, concise picture, I suppose it must be that way.
6. In the more detailed pages to follow, I'd like to see some mention of how a time misalignment of only 25 msec spoiled at least one result. Also, I'd like to see some mention of the results you threw out because the original was rated less than 5.
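The difference in risk between the two procedures in point 4 can be sketched in Python. Everything here is illustrative: the ratings are made-up numbers, not data from the test, and the plain pairwise t-test is only an approximation of Fisher's LSD (a proper LSD pools the error variance across all groups).

```python
from scipy import stats

# Hypothetical listener ratings for three codecs (illustrative, not real test data)
codec_a = [4.4, 4.5, 4.3, 4.6, 4.4]
codec_b = [4.2, 4.1, 4.3, 4.0, 4.2]
codec_c = [3.9, 4.0, 3.8, 4.1, 3.9]

# Omnibus one-way ANOVA: is there any difference among the codec means at all?
f_stat, p_anova = stats.f_oneway(codec_a, codec_b, codec_c)

# Tukey HSD: conservative pairwise comparison controlling the family-wise error rate
tukey = stats.tukey_hsd(codec_a, codec_b, codec_c)

# Fisher LSD style follow-up (approximated by a plain pairwise t-test): only valid
# after a significant omnibus ANOVA; more sensitive, but more prone to false positives
t_ab, p_ab = stats.ttest_ind(codec_a, codec_b)
```

The Tukey procedure pays for its protection against false positives with lower sensitivity, which is exactly the trade-off described above.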
1. Perhaps another call for samples -- classical and jazz samples -- would be profitable.
2. You might think about adding at least one anchor sample -- a lowpassed version of the original, a la MUSHRA. This can be done with a small filesize penalty using Sox. That would help to keep the ratings in perspective.
3. Verifying VBR average bitrates: I think that this task could be split up among several people, each encoding whole albums with all codecs.
Edit: Oh, and if iTunes doesn't use the same codec that you used for this test, I would make some mention of that fact too.
Edit2: In the next test you'll probably want to check for level (volume) differences too.
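The lowpassed anchor suggested in point 2 could indeed be generated with SoX; the same idea can be sketched in Python with scipy (a swapped-in technique, shown here only because it's easy to verify). The signal, cutoff, and filter order are all assumptions standing in for a real music sample.

```python
import numpy as np
from scipy import signal

sr = 44100  # CD sample rate
t = np.arange(sr) / sr  # one second of audio
# Hypothetical stand-in for a music sample: a 440 Hz tone plus some 9 kHz content
x = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 9000 * t)

# 3.5 kHz Butterworth lowpass, roughly the MUSHRA "3.5 kHz anchor" idea
b, a = signal.butter(4, 3500, btype="low", fs=sr)
anchor = signal.filtfilt(b, a, x)  # zero-phase filtering, no time misalignment
```

The filtered copy keeps the low-frequency content intact while killing everything above the cutoff, which gives listeners a known-bad reference to keep the ratings in perspective.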
Thanks.

2 more pieces of info: who submitted it (if it's known), and what's the style?
Well, that's OK, but I planned to use the same test suite, even with the samples that ended up "too transparent". Else, if I change the suite too much, whiners will say there's no significance between the first test and the extension. "Who knows if QuickTime would win in this new suite? neener-neener"
Can't really explain why. Is it because I'm most familiar with harpsichord and can't bear any distortions?
1) Any speculation as to why ATrain and Layla were so easy for AAC? Surely they were chosen because the samples were challenging to MP3, Ogg and MPC. (I'm guessing the next test will address issues of what samples are handled relatively better or worse by the codecs.)
2) What encoder does AOL use for the AAC tracks streamed via Radio@AOL?
3) One guesses that Apple is devoting considerable resources to further development. Are Ahead and/or Sorenson doing sufficient work to make one expect substantial improvements in their offerings? [Afterwards addition: OK - Ivan's quick response and comments suggest Ahead is -- tnx, Ivan!] Is there another developer readying an offering? What prevents Dolby from doing so?
4) Do any or all of these codecs provide enhanced capacities for "digital rights management" vs. what WMA already has and what MPC and Ogg can offer? Or is that a matter dependent solely on the OS platform?

Adding an off-topic rant: I wish there were a way to build support for MPC. "As flies to wanton boys are we to the gods; they kill us for their sport" is my impression of the corporate Olympians' treatment of the public. "The public be damned, we'll tell them what they want" is the way media and software conglomerates are handling the evolution of codecs.
Quote: "Can't really explain why. Is it because I'm most familiar with harpsichord and can't bear any distortions?"

Cradle Of Filth's harpsichord naturally comes from a synthesizer, so there may be differences, whether surprising or subtle, from real harpsichord music.
Quote: "5. I'm still uncomfortable with the squishy way that a summary graph is constructed. But since I can't think of a better way, and people have a need to see things in one, concise picture, I suppose it must be that way."

Well, I have no clue about statistics. If you have any idea how to fix that, please let me know.