
"Apple should have gone with Ogg Vorbis"

Just when we thought the world knew better than to listen with their eyes...

Here's a post I came across on Winamp's General Discussions forum.  I know this kind of thing has gone round-and-round, but I just haven't seen an instance of it in a while, and thought I'd "share the moment". 

------------------------------------------------------------------------------------------
Apple should have gone with Ogg Vorbis

Using a frequency analysis program (Spectrogram), as well as my own ears, I've discovered that Ogg Vorbis sounds far better than AAC and also has way more treble. AAC at 128kbps has a sharp cutoff at 15.8KHz; absolutely NOTHING comes through above that point. Ogg Vorbis has a sharp cutoff at 20.7KHz, but that's still plenty high enough to sound really good. 112kbps Ogg has a cutoff at 17.9KHz, but that's still better than AAC!

------------------------------------------------------------------------------------------

That a spectrogram was used to "conclusively" determine that one format sounds better is one thing, but what struck me most was that the measurement was offered specifically as grounds that Apple should have chosen a different encoding format (a recommendation the results of double-blind listening tests wouldn't support).
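For anyone curious what a cutoff measurement like the quoted one actually involves, here's a minimal Python sketch (my own illustration, not from the thread; the function name and the -60 dB threshold are arbitrary choices). It estimates a "lowpass cutoff" as the highest FFT bin with significant energy, which is all a spectrogram cutoff reading really is; note that nothing in it says anything about how the file sounds.

```python
import numpy as np

def estimate_cutoff(signal, sample_rate, threshold_db=-60.0):
    """Return the highest frequency whose spectral magnitude is within
    threshold_db of the spectral peak -- a rough 'lowpass cutoff'."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    mag_db = 20 * np.log10(spectrum / spectrum.max() + 1e-12)
    above = np.nonzero(mag_db > threshold_db)[0]
    return float(freqs[above[-1]]) if above.size else 0.0

# Synthetic check: one second of white noise, brick-wall lowpassed at
# 16 kHz by zeroing the FFT bins above that frequency.
rate = 44100
noise = np.random.default_rng(0).standard_normal(rate)
spec = np.fft.rfft(noise)
freqs = np.fft.rfftfreq(rate, d=1.0 / rate)
spec[freqs > 16000] = 0.0
lowpassed = np.fft.irfft(spec)
print(estimate_cutoff(lowpassed, rate))  # ~16000.0
```

The point of the toy check is that the number it reports is a property of the filtering, not of perceptual quality: two files with identical cutoffs can sound completely different.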



Reply #1
This raises an interesting question.  Would it be unethical to use a frequency spectrum to identify losses, and then train your ear to hear said losses?


Reply #2
I don't think you can easily "see" psychoacoustic losses on a spectral analysis, not in a way that would tell you what they'd sound like.  Maybe if you trained specifically for that, but I'd think the time investment wouldn't be worth a result that could be better obtained other ways (double-blind listening tests).
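For reference, the bookkeeping behind the double-blind (ABX-style) tests mentioned above is simple to sketch. This Python toy is my own illustration, not anything from the thread: on each trial the hidden stimulus X is secretly A or B, the listener guesses, and the final score gets a one-sided binomial p-value. The random "guess" here is a stand-in for actual playback and user input.

```python
import random
from math import comb

def abx_session(trials=16, seed=None):
    """Run the bookkeeping of an ABX test with a random 'listener'.
    Replace the guess with real playback + user input to use it."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        x_is_a = rng.random() < 0.5   # hidden assignment of X to A or B
        guess_a = rng.random() < 0.5  # placeholder for the listener's answer
        correct += (guess_a == x_is_a)
    return correct

def p_value(correct, trials):
    """One-sided binomial p-value: chance of >= correct hits by guessing."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

hits = abx_session(16, seed=1)
print(hits, round(p_value(hits, 16), 3))
```

With 16 trials, 12 or more correct gives p below 0.05, which is the usual informal bar for claiming you can really hear a difference.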


Reply #3
I think it's funny that the luser reported Ogg lowpassing at 20.7 kHz but didn't mention which setting he used. Furthermore, if the 20.7 kHz cutoff isn't just the highest frequency at which distortion shows up, it's probably where the engineers lowpassed the master recording for CD production.


Reply #4
Quote
This raises an interesting question.  Would it be unethical to use a frequency spectrum to identify losses, and then train your ear to hear said losses?

It takes a lot longer to look than to listen! And, as ScorLibran said, lots of artifacts are almost "invisible". Once you hear an artifact, you can often find a graph that shows it. But spotting artifacts just by looking at graphs isn't so easy.


You can subtract the original from the coded version, and just listen to the coding noise to "learn" what it sounds like. It can help, but most of the time it's just misleading! It's usually best to sit there with A (original) and B (coded) in ABX and listen carefully to both.

Cheers,
David.
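David's subtract-and-listen trick is easy to try. Here's a minimal Python sketch (my own, not from the thread): it subtracts a decoded WAV from the original and writes the residual so you can play it back. It assumes 16-bit mono WAVs that are already sample-aligned, which real encoder/decoder chains usually aren't, so in practice you'd have to trim the decoder delay first.

```python
import numpy as np
import os
import tempfile
import wave

def read_wav(path):
    """Read a WAV into an int16 array (assumes 16-bit mono)."""
    with wave.open(path, "rb") as w:
        data = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)
        return data, w.getframerate()

def write_wav(path, samples, rate):
    """Write an array of samples as a 16-bit mono WAV."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate)
        w.writeframes(samples.astype(np.int16).tobytes())

def difference_signal(original_path, decoded_path, out_path):
    """Write (original - decoded) as a WAV so you can listen to what the
    codec threw away. Assumes the two files are sample-aligned."""
    a, rate = read_wav(original_path)
    b, _ = read_wav(decoded_path)
    n = min(len(a), len(b))
    diff = np.clip(a[:n].astype(np.int32) - b[:n].astype(np.int32), -32768, 32767)
    write_wav(out_path, diff, rate)

# Demo with synthetic files: a 440 Hz tone and a fake 'decoded' copy
# offset by a constant 500, so the residual is exactly that offset.
tmp = tempfile.mkdtemp()
rate = 8000
t = np.arange(rate) / rate
orig = (10000 * np.sin(2 * np.pi * 440 * t)).astype(np.int16)
dec = (orig.astype(np.int32) - 500).astype(np.int16)
write_wav(os.path.join(tmp, "orig.wav"), orig, rate)
write_wav(os.path.join(tmp, "dec.wav"), dec, rate)
difference_signal(os.path.join(tmp, "orig.wav"), os.path.join(tmp, "dec.wav"),
                  os.path.join(tmp, "diff.wav"))
residual, _ = read_wav(os.path.join(tmp, "diff.wav"))
print(residual[0])  # 500
```

As David says, though, the residual on its own is misleading: the codec removed it precisely because it expected the rest of the signal to mask it, so how loud it sounds in isolation tells you little about audibility in context.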

 