Here is a new article (http://seanolive.blogspot.com/2010/02/evaluating-sound-quality-of-ipod-music.html) I posted about issues in designing listening tests on iPod Music Stations, with an illustrative video attached.
In parts 2 and 3, I intend to show some subjective and objective results of some recent tests we performed on this category of products.
Cheers
Sean Olive
Audio Musings (http://www.seanolive.blogspot.com)
Thanks, should be interesting.
Thank you! Parts 2 & 3 will be much more interesting for me. I think quite a few here know a little bit about the issues talked about in theory, but lack the resources to conduct such thorough evaluations themselves.
Sean, not to be a PITA, but please don't feed the idea that the iPod is a poor quality device, as you *may* have done in your comments. There are people (me included) who have done blind tests with PMPs versus "better" systems and concluded that the PMPs are not the weak spot. They are basically audibly transparent, as they should be.
I am very curious about your results and am, as always, very happy that you are doing such experiments.
I haven't done any listening tests on iPod or other PMP devices themselves, but I would tend to agree with you that they are not the weak spot in these tests. I didn't mean to imply they are.
I often hear people (even in our own company) disparage the iPod or "MP3" without understanding that it's the quality of the codec and bit-rate that determines how good they sound, and at the higher bit-rates they can sound 100% transparent.
Unfortunately there are a lot of people who blindly believe that things like this are necessary:
http://blog.stereophile.com/ces2008/010708wadia/ (http://blog.stereophile.com/ces2008/010708wadia/)
And talk about nonsensical:
Wadia's $349 iTransport can take the digital signal out of an iPod before the DAC, outputting 16-bit/44.1khz resolution for uncompressed files—it doesn't upconvert lower-rez files like MP3s, but it does reformat them to 16/44.1, according to Wadia's John Schaffer.
Usually when audiophiles talk nonsense they try to mask it with "quantum wording", but here the guy is not even making an effort!
At least they are not lying, but just use some queer wording. The iTransport actually bypasses the iPod's DAC and accesses the iPod's files in USB host mode. It copies them to its internal memory where they are decoded ("reformatted" to 16/44.1) and fed into the iTransport's digital output, which has its own clock source.
Interesting. So the Wadia is essentially working like a PC that is playing music from an iPod that is being used as a storage device.
I took a peek at the Stereophile review and see that there are actually no technical tests other than for bit-perfect reproduction. They seem to be taking the view that bit-perfect copies are sufficient for sonic accuracy. There was an opportunity to invoke the jitter myth, but they seem to have sloughed it off. Should I cheer? ;-)
I finally posted Part 2 (http://seanolive.blogspot.com/2010/04/evaluating-sound-quality-of-ipod-music.html) of this article, which summarizes some recent competitive benchmarking listening tests on three popular iPod Music Stations.
Cheers
Sean Olive
Audio Musings (http://www.seanolive.blogspot.com)
Part 3 - Objective Measurements (http://seanolive.blogspot.com/2010/05/evaluating-sound-quality-of-ipod-music.html) - was posted today. In Part 3 I present the anechoic and in-room measurements to see which ones best explain listeners' sound quality ratings of the Music Stations.
Cheers
Sean Olive
Audio Musings (http://www.seanolive.blogspot.com)
Thanks for another great article!
It's quite interesting that the listeners found Device A a little too bright even with the high-frequency room attenuation. (I believe the Harman reference listening room has high-frequency attenuation of about -3 dB @ 10 kHz and -6 dB @ 20 kHz?)
Thank you Sean. That last part is especially interesting - maybe our ears+brain "pick-apart" the sound field in a real listening room, so that (at least at higher frequencies) we are able to largely ignore the contributions from the room, and hear the quality of the direct sound.
I'm not sure everyone puts iPod docks up against the wall. One of the biggest problems with speakers for "normal" people is how much the sound changes depending on location. A simple "up a wall / not up a wall" switch on the speaker would be good.
Finally, I would debate how well you can "level match" such different sources. Maybe trained listeners can hear past this, but it's a strong source of bias in a direct switching situation IMO. Another possible version of the test would be to give the users + and - buttons to change the volume of each device separately (no scale, so no bias). This may be far more confusing for test subjects, and may make the task harder, but is far more representative of normal listening.
A separate test (which you did objectively, not subjectively) is how loud they go, and how soon they sound "nasty" when played loud. This is very important for some users.
(I'm sure you know all this - I know you've started with a test that hits the most fundamental issues first - I find it amazing that in 2010, such a test is so rare and almost ground-breaking - we should have been doing things at least this well for the last couple of decades - how did the industry go so wrong?)
Cheers,
David.
Thanks! I think the brightness of Music Station A has more to do with the slight broadband upward spectral tilt in the direct sound rather than the absorption characteristics of the listening room.
Cheers
Sean
Audio Musings (http://seanolive.blogspot.com)
Thanks David. I agree with you that these products should have a 2- or 3-position switch that compensates for placement next to a boundary or in a free-field (4 pi) setting. There should also be an "Apple Store" setting to help sell the product.
Loudness matching among different loudspeaker products is always a compromise. If the frequency responses are similar you can get very close with very little program-dependency. Otherwise, the loudness will slightly change depending on the differences in frequency responses of the products under test and the spectrum of the program.
We use uncorrelated pink noise, measure at the listening position, and adjust until we get the same SPL (B-weighted, slow). More recently we've been using a CRC loudness meter, which seems to give good results. One thing I have considered but not implemented is a separate loudness matching for each program... I don't think it would matter much for our standard programs because they all have reasonably broadband spectra.
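The pink-noise matching step can be sketched in a few lines (a toy illustration, assuming NumPy; the simulated devices, levels, and unweighted RMS stand in for real SPL measurements with B-weighting or a BS.1770-style loudness meter):

```python
import numpy as np

def pink_noise(n, rng):
    """Pink (1/f power) noise via frequency-domain shaping of white noise."""
    spectrum = np.fft.rfft(rng.standard_normal(n))
    freqs = np.fft.rfftfreq(n)
    spectrum[1:] /= np.sqrt(freqs[1:])   # amplitude ~ 1/sqrt(f) -> power ~ 1/f
    spectrum[0] = 0.0                    # drop DC
    pink = np.fft.irfft(spectrum, n)
    return pink / np.max(np.abs(pink))   # normalize to full scale

def rms_db(x):
    """RMS level in dB re full scale (unweighted)."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

rng = np.random.default_rng(0)
# Two independent (uncorrelated) channels, as used for stereo playback
# while matching levels at the listening position.
left, right = pink_noise(2 ** 16, rng), pink_noise(2 ** 16, rng)

# Pretend a second device reproduces the same noise 3.4 dB quieter;
# the trim to apply to that device is simply the level difference.
quiet = left * 10 ** (-3.4 / 20)
trim_db = rms_db(left) - rms_db(quiet)
print(f"apply {trim_db:+.1f} dB to the quieter device")
```

With dissimilar frequency responses the trim found on broadband pink noise won't hold exactly for every program, which is the program-dependency described above.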
We do allow listeners to adjust the volume for automotive audio system evaluations to judge differences in dynamics, distortion, etc. We are working on a similar test for products like this.
Cheers
Sean
Audio Musings (http://seanolive.blogspot.com)
I wonder where one obtains a CRC loudness meter download?
As close as I've been able to find is this one:
Orban Loudness meter with ITU 1770 metering feature download (http://www.orban.com/meter/)
I contacted CRC directly, since I know the people who developed it, and they gave me a copy as a favor. I think they might have plans to sell it, along with a multichannel version.
Cheers
Sean
Audio Musings (http://seanolive.blogspot.com)