Topic: What's the problem with double-blind testing?

What's the problem with double-blind testing?

Reply #50
Quote
I do not mean that I "hear" less of a difference. What I mean is that the differences I hear in analytical mode don't mean as much to me at that moment compared to when I listen to the music for enjoyment. When enjoying the music I sometimes think I am tired of the song I am listening to, or feel bored, but get more emotional or even cry when listening to the original. The analogy is exactly like listening to music in a different mental state: try listening when you are drunk and when you are sober. The experience is different. Listening while you are analyzing isn't as emotional as listening deeply to the soul of the music. When you are listening emotionally, you are not pinpointing the artifacts; the music speaks for itself. Sort of like placebo, but this time it is not expectation bias. It's just the way it is, whether or not I know which source CD is playing. To ramble even more, it's just like listening to music on different stereo systems: one is a more emotional experience, one is dull and boring. I hope you get what I mean. It's audiophile hocus pocus at work.

But the problem is that this is not blinded. You KNOW you have put the CD in the tray. You KNOW what gear you are using. While I'm not going to deny you the goosebumps this causes, it is an emotional factor that indeed lies outside the scope of an ABX test. Sorry, but I don't get those fuzzy feelings when listening to $10,000 equipment; I would be constantly worried about whether what I bought was really worth the price, and that would degrade my listening experience.
"We cannot win against obsession. They care, we don't. They win."

What's the problem with double-blind testing?

Reply #51
I recall that we have had successful ABX tests even here at HA in the high-end area: the guy who ABXed one trial each morning, when his ears were fresh.
You don't need special software. Instead, you need a helper who burns a CD with the first two tracks as references, A and B, whose identities you know, followed by several X tracks in random order; the helper writes down which is which.
Then you can play with this CD, and when you think you are finished, the helper can reveal the results.
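
If the helper has a computer handy, generating the random order and the answer key is trivial. A minimal sketch (the file names are hypothetical, and of course any coin-flipping method works just as well):

[code]
import random

# Hypothetical source files the helper starts from.
reference_a = "reference_a.wav"   # CD track 1, labelled A
reference_b = "reference_b.wav"   # CD track 2, labelled B
num_trials = 8                    # how many unlabelled X tracks to burn

# For each X track, flip a coin: it is a copy of either A or B.
answer_key = [random.choice(["A", "B"]) for _ in range(num_trials)]

# Track listing: tracks 1-2 are the known references, 3 onward are the X copies.
for track, answer in enumerate(answer_key, start=3):
    source = reference_a if answer == "A" else reference_b
    print(f"CD track {track}: copy of {source}")

# The helper keeps this list secret until the listener has written down all guesses.
print("Answer key (helper only):", answer_key)
[/code]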




I like the University of Music in Detmold, Germany:
how they carry out ABX tests in diploma theses,
and that they write directly in their papers against the marketing hype. Now they have revealed no perceivable difference between DSD and 192/24 PCM.
Hmm, it would also be interesting for them to test DSD vs. 96/24 and maybe properly dithered 16/44...

I presented some time ago their results about testing possible differences between 16/48, 24/48, and 24/96 music.
It was shown that with some probability 24/48 is better than 16/48, and that 24/96 does not improve anything compared to 24/48. Unfortunately, they did that test when 24/96 DACs were very fresh and not yet optimized.
Their very good 24/48 DACs performed better than the 24/96 ones at that time; their test showed it.
They also wrote in the paper that the limit of perceivable difference is probably at 20 bits, so 24 bits is already overkill. They referred to another paper for this statement. Unfortunately, that paper was only available in German, and it had limited value, as the 24/96 DACs had not yet reached their full potential and 16/44 was not included in that test as a low anchor.

What's the problem with double-blind testing?

Reply #52
Quote
I recall that we have had successful ABX tests even here at HA in the high-end area: the guy who ABXed one trial each morning, when his ears were fresh.


It was not here, but in another English forum. It was a 16-bit truncated vs. 16-bit dithered test. I recall the sample being guitar only, with little dynamics and no fade-out.
It was an interesting success.

It also took me a very long time to confirm Xerophase's success with MPC standard on the Smashing Pumpkins sample. Not one trial per day, but several hours for the whole ABX test, as far as I remember. It is still online here; search for Xerophase's first posts on Hydrogenaudio.

Quote
I presented some time ago their results about testing possible differences between 16/48, 24/48, and 24/96 music.
It was shown that with some probability 24/48 is better than 16/48.


I remember that we couldn't figure out their protocol, nor the "probability" of their result, because it was all in German. Unfortunately, the original webpage is now offline.

What's the problem with double-blind testing?

Reply #53
Quote
Quote
Quote
To tell you the truth, I cannot understand how someone could read and understand the two papers I linked to in the following thread and still be convinced that DBT can be the final word on music reproduction. In any event, it is a very interesting field of science that I suspect is not well known to many HA members:

http://www.hydrogenaudio.org/forums/index....ndpost&p=321782


I have not read them. It seems that they deal with the effect of unconscious stimuli. Your point would be that sounds unconsciously perceived might affect our perception of music.

But I don't see how this could explain the disappearance of all "audiophile effects" under blind listening conditions. The unconscious stimuli don't disappear when the test is blind, so why should their effects disappear?


It's not that any information disappears. The problem is that the ABX paradigm relies on cognitive processes that might not be directly affected by subtle differences between two presented stimuli, but other aspects of the listening experience might be. Because of the increased effort required of perceptual processes, due to compressed information that must be resolved, listening experiences might have an uncomfortable element associated with them that uncompressed stimulus perception doesn't. An ABX paradigm might not be able to tap into that very well, if at all (because the effort is perceptual, not in a decision process). But there are other methodologies that could measure whatever processing differences might exist.

Seems to be a lack of understanding of something fundamental here: if you encode something to a lossy format it *does* lose information, it's not "compressed" at all. You've actually thrown away a lot of information! How do you know that the auditory system doesn't find it *easier* to decode since there's less "data"? Maybe also it makes no difference. If you can't ABX the difference then it's possible that you actually took away the "data" that you genuinely couldn't hear.

I'd be prepared to believe that an ABX test might be unreliable if the circumstances of the listening varied on each test and the difference was hovering around a level that you could genuinely distinguish. However, your results would tell you exactly that - that your test was statistically unsound.

What's the problem with double-blind testing?

Reply #54
Quote
Seems to be a lack of understanding of something fundamental here: if you encode something to a lossy format it *does* lose information, it's not "compressed" at all. You've actually thrown away a lot of information! How do you know that the auditory system doesn't find it *easier* to decode since there's less "data"?


Lossless and lossy compression schemes are just that: compression. It is in this sense that I mean that an MP3 file, for example, is compressed.

The question of whether perceptual systems have an easier or harder time resolving stimuli that are impoverished (relative to a complementary uncompressed stimulus) is an empirical one that has been addressed in a variety of ways in perception research (mostly vision). I've been talking about ways to address this specifically with audio lossy schemes, and I'm presenting an idea regarding how this effect might not be addressed fully with an ABX paradigm that relies on decision tasks. A reaction time paradigm might be better suited to study the processing differences listeners likely experience.

The reason compressed data requires more effort on the part of whatever computational system is perceiving it, is because more inferential processing is required. If you are missing data in a signal, some mechanism in your brain must essentially interpolate, from the signal, the missing information.
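
To give a concrete, purely illustrative sense of what I mean by a reaction-time paradigm, here is a minimal sketch with made-up response times; a real study would of course need a proper design, counterbalancing, and many more trials:

[code]
from statistics import mean
from math import sqrt

# Hypothetical reaction times (ms) from a same/different task, one condition
# per stimulus type. Real data would come from an actual experiment.
rt_original = [412, 398, 430, 405, 421, 390, 415, 408]
rt_lossy    = [436, 419, 452, 441, 428, 447, 433, 425]

def welch_t(x, y):
    """Welch's t statistic for two independent samples."""
    mx, my = mean(x), mean(y)
    vx = sum((v - mx) ** 2 for v in x) / (len(x) - 1)
    vy = sum((v - my) ** 2 for v in y) / (len(y) - 1)
    return (mx - my) / sqrt(vx / len(x) + vy / len(y))

print("mean RT, original:", mean(rt_original), "ms")
print("mean RT, lossy:   ", mean(rt_lossy), "ms")
print("Welch t:", round(welch_t(rt_original, rt_lossy), 2))
# A reliably longer mean RT for the lossy condition would suggest extra
# processing effort even if listeners cannot tell the files apart in ABX.
[/code]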

What's the problem with double-blind testing?

Reply #55
Quote
The reason compressed data requires more effort on the part of whatever computational system is perceiving it, is because more inferential processing is required. If you are missing data in a signal, some mechanism in your brain must essentially interpolate, from the signal, the missing information.

Why? How can our brain even tell there is something missing? And why MUST it interpolate this data?

These are again merely assumptions on your part.
"We cannot win against obsession. They care, we don't. They win."

What's the problem with double-blind testing?

Reply #56
Quote
Quote
Seems to be a lack of understanding of something fundamental here: if you encode something to a lossy format it *does* lose information, it's not "compressed" at all. You've actually thrown away a lot of information! How do you know that the auditory system doesn't find it *easier* to decode since there's less "data"?


Lossless and lossy compression schemes are just that: compression. It is in this sense that I mean that an MP3 file, for example, is compressed.

The question of whether perceptual systems have an easier or harder time resolving stimuli that are impoverished (relative to a complementary uncompressed stimulus) is an empirical one that has been addressed in a variety of ways in perception research (mostly vision). I've been talking about ways to address this specifically with audio lossy schemes, and I'm presenting an idea regarding how this effect might not be addressed fully with an ABX paradigm that relies on decision tasks. A reaction time paradigm might be better suited to study the processing differences listeners likely experience.

The reason compressed data requires more effort on the part of whatever computational system is perceiving it, is because more inferential processing is required. If you are missing data in a signal, some mechanism in your brain must essentially interpolate, from the signal, the missing information.

This still does not show any evidence that this applies to lossy audio compression, which makes use of psychoacoustic models. It may apply, or it may not.

Why does it not necessarily apply? Because the purpose of lossy audio compression is to throw away unperceivable parts of the audio. So if the sensors aren't missing anything, then why should it make any difference? Anyway, discussing this won't really solve it, because we simply lack DATA... we need evidence and analysis, not extrapolations from other fields... else we are just speculating.

I definitely don't think that current test methods are the end of the road, and I don't think that current DBT implementations are perfect. But I do think that they're the best ones we have available right now. I am interested in better methods and in flaws in current methods, but only if it's backed by hard evidence, not extrapolated speculations à la "well, it could be possible that...". Why the hard line? Well, I've seen too many of those "speculations" which in the past ended up being exposed as flawed. Show me something significant and applicable and I'll become interested.
I am arrogant and I can afford it because I deliver.

What's the problem with double-blind testing?

Reply #57
Quote
Quote
I recall that we have had successful ABX tests even here at HA in the high-end area: the guy who ABXed one trial each morning, when his ears were fresh.


It was not here, but in another English forum. It was a 16-bit truncated vs. 16-bit dithered test. I recall the sample being guitar only, with little dynamics and no fade-out.
It was an interesting success.


Results of this test are described here:

http://ff123.net/24bit/24bitanalysis.html

What's the problem with double-blind testing?

Reply #58
Quote
Quote
The reason compressed data requires more effort on the part of whatever computational system is perceiving it, is because more inferential processing is required. If you are missing data in a signal, some mechanism in your brain must essentially interpolate, from the signal, the missing information.

Why? How can our brain even tell there is something missing? And why MUST it interpolate this data?

These are again merely assumptions on your part.


It's not that our brains "know" there is something missing, it's that relative to a less noisy signal, the brain has to do more work in order to generate the best representation of the sound it can.

I will post studies that support a number of things I have claimed, including quite notably the neural dissociation of sensory and decision processes. The "assumptions" I'm making are rooted in my training as a cognitive psychologist who studies speech processing. Many of the things I've said are well supported in experimental studies...I'm not just making it up. Also, what I'm saying is testable. I'm just trying to explain the logic of why I believe a different methodology might be necessary to rescue at least some of the claims made by audiophiles. I'm not an audiophile.

Quote
Why does it not necessarily apply? Because the purpose of lossy audio compression is to throw away unperceivable parts of the audio. So if the sensors aren't missing anything, then why should it make any difference?


Again, the main point here is that fatigue effects caused by processing issues (not accessible to systems that verbally report differences in ABX listening tests) could have residual effects on listeners in more long term ways. Reaction time experiments could verify this processing claim I'm making, and a positive result (where an ABX test shows nothing) could indicate a relationship between perceptual processing and vague ideas of discomfort experienced by some listeners of compressed audio.

What's the problem with double-blind testing?

Reply #59
Quote
Quote

Why? How can our brain even tell there is something missing? And why MUST it interpolate this data?

These are again merely assumptions on your part.


It's not that our brains "know" there is something missing, it's that relative to a less noisy signal, the brain has to do more work in order to generate the best representation of the sound it can.

Now you are saying something completely different. I don't see how our brain would try to represent anything other than what it is perceiving. I don't see how a waveform produced by an MP3 decoder is conceptually different from one you might encounter in other situations. But OK, I will wait for the studies.
"We cannot win against obsession. They care, we don't. They win."

 

What's the problem with double-blind testing?

Reply #60
Quote
So we aren't wired to detect "non-events" over events in general. For predator detection, yes...but not for something like, for example, detecting cheaters in a card game!


Really, now, would you like to show your evidence?

The evidence that partial loudness differences are overdetected has been in for years.

All auditory stimuli start out as a set of partial loudnesses, expressed as pulse-position modulation in the auditory nerves.

If you're right, this would require a complete revision of the entire understanding of how the human auditory system works.  Have you this revision prepared yet?
-----
J. D. (jj) Johnston

What's the problem with double-blind testing?

Reply #61
Quote
Sorry, this is not even true. I would like you to show me how you calculate that a 128 kbps MP3 has 90% less information (define information too, please) than the original PCM data. Then show me that our auditory system must derive more from a decoded MP3 than from the original PCM AND explain exactly what it is that is being derived. If anything, it seems to me the opposite must be true, and working with less data is also less fatiguing.


For information, Shannon Entropy will do.

Of course, since MP3 is a middling source coder on top of its perceptual coding, there's a lot more of the original "information" in the signal than most people realize.
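
For a rough, first-order illustration of what I mean by information here, a minimal sketch that estimates Shannon entropy from the sample-value histogram (it ignores the temporal correlations a real source coder exploits, so it is only a toy):

[code]
import math
import random
from collections import Counter

def entropy_bits_per_sample(samples):
    """First-order Shannon entropy, H = -sum(p * log2(p)), of a sample sequence."""
    counts = Counter(samples)
    total = len(samples)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Toy 8-bit "signals": a flat random one, and the same one coarsely quantized.
random.seed(0)
fine = [random.randrange(256) for _ in range(10000)]
coarse = [v & 0xF0 for v in fine]   # keep only the top 4 bits of each sample

print("fine signal:   %.2f bits/sample" % entropy_bits_per_sample(fine))
print("coarse signal: %.2f bits/sample" % entropy_bits_per_sample(coarse))
[/code]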

I've stayed out of this whole "waveform" thing, because, frankly, it seems based entirely on a heap of confusion in the first place, but at this point, somebody claiming that the ear has to "make up data" just simply denies everything we know about how the auditory system works.

It doesn't "make up" anything, it detects, in a very lossy and mathematically imperfect fashion, whatever is there.  Since that's a lossy fashion, sometimes it imagines that things are there when they aren't, but that's not "making up the missing information" at all.
-----
J. D. (jj) Johnston

What's the problem with double-blind testing?

Reply #62
Quote
Seems to be a lack of understanding of something fundamental here: if you encode something to a lossy format it *does* lose information, it's not "compressed" at all.
...


I agree that information is lost, no doubt. I must, though, point out that the mere act of frequency analysis also provides some "compression" in the LMS sense; of course, that is undone by the decoder, not by the ear, something our erstwhile correspondent seems to forget.
-----
J. D. (jj) Johnston

What's the problem with double-blind testing?

Reply #63
Quote
I recall that we have had successful ABX tests even here at HA in the high-end area: the guy who ABXed one trial each morning, when his ears were fresh.
You don't need special software. Instead, you need a helper who burns a CD with the first two tracks as references, A and B, whose identities you know, followed by several X tracks in random order; the helper writes down which is which.
Then you can play with this CD, and when you think you are finished, the helper can reveal the results.




I like the University of Music in Detmold, Germany:
how they carry out ABX tests in diploma theses,
and that they write directly in their papers against the marketing hype. Now they have revealed no perceivable difference between DSD and 192/24 PCM.


If you're talking about the same paper I am, four out of 145 people passed an ABX (which was defined as at least 15/20 correct).  Those four passed only when using headphones to compare stereo material (SACD vs. 24-bit/176.4 kHz DVD-A).  They each passed using a different musical program.  There was some question as to whether they might have been unconsciously cued by differences in switching noises. 

No one passed an ABX using surround-sound material.
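
For context, 15/20 is a reasonably strict criterion; a quick binomial check of the chance of passing by guessing alone (my own calculation, not taken from the paper):

[code]
from math import comb

n, k = 20, 15   # trials, minimum correct answers required to "pass"

# Probability of getting at least k correct by guessing (p = 0.5 per trial).
p_guess = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
print(f"P(at least {k}/{n} correct by chance) = {p_guess:.4f}")   # about 0.0207
[/code]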

http://www.hfm-detmold.de/eti/projekte/dip..._paper_6086.pdf

The full thesis in German is somewhere at

http://www.hfm-detmold.de/hochschule/eti.html

What's the problem with double-blind testing?

Reply #64
Quote
Quote
Seems to be a lack of understanding of something fundamental here: if you encode something to a lossy format it *does* lose information, it's not "compressed" at all. You've actually thrown away a lot of information! How do you know that the auditory system doesn't find it *easier* to decode since there's less "data"?


Lossless and lossy compression schemes are just that: compression. It is in this sense that I mean that an MP3 file, for example, is compressed.

The question of whether perceptual systems have an easier or harder time resolving stimuli that are impoverished (relative to a complementary uncompressed stimulus) is an empirical one that has been addressed in a variety of ways in perception research (mostly vision).



People can easily parse sentences with 'degraded' content, such as sentences where 'the the' appears, as well as other typos involving *missing* words or data.
Has it been determined what level of degradation has to be reached before the *time* it takes to parse a sentence is affected?  If two tasks take an effectively indistinguishable amount of time for the same person, then one cannot be said to be more 'difficult' than another, can it?  The only other measure of 'difficulty' I could imagine would be something like, the number of neurons engaged, or the amount of blood flow involved.

Quote
I've been talking about ways to address this specifically with audio lossy schemes, and I'm presenting an idea regarding how this effect might not be addressed fully with an ABX paradigm that relies on decision tasks. A reaction time paradigm might be better suited to study the processing differences listeners likely experience.


You haven't demonstrated that it's even an *issue* for lossy schemes. Horses before carts, please.

And since we still seem to be in the realm of the suppositional, suppose perception is a matter of stimuli crossing sensory and cognitive *thresholds*. It could be that as long as the 'lossy' stimulus (lossy only in comparison to the original stimulus) still crosses the right thresholds, it's all the same to the brain.
This would mean that the 'best' 'lossy' representations are simply those that are *good enough*, and that they don't require any more 'effort' to process.

What's the problem with double-blind testing?

Reply #65
Quote
Quote
Quote

Why? How can our brain even tell there is something missing? And why MUST it interpolate this data?

These are again merely assumptions on your part.


It's not that our brains "know" there is something missing, it's that relative to a less noisy signal, the brain has to do more work in order to generate the best representation of the sound it can.

Now you are saying something completely different. I don't see how our brain would try to represent anything other than what it is perceiving. I don't see how a waveform produced by an MP3 decoder is conceptually different from one you might encounter in other situations. But OK, I will wait for the studies.

You have to keep in mind how MP3 encoders (and other lossy encoders) work. To say they "throw away" information is true, but it doesn't make it clear what they are really doing. What they do is reduce the resolution at certain frequencies and times (which is how they save space) and this has the effect of adding quantization noise to the signal. The idea is to essentially add noise to the signal as close as you can get to (but still below) the threshold where it's directly, consciously, audible. What's surprising is that this can be a relatively high level of noise (which is why these lossy codecs work so well). You can easily do a test where you subtract an original music signal from its encoded version. You might be surprised how much noise is added.
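
If you want to try that subtraction yourself, here is a minimal sketch, assuming you have decoded the MP3 back to a WAV that is sample-aligned with the original (real encoders add a delay, so you may need to time-align the files first; the file names are just placeholders):

[code]
import numpy as np
import soundfile as sf   # third-party WAV reader/writer; any equivalent works

# Hypothetical file names: the original, and the MP3 decoded back to WAV.
original, rate = sf.read("original.wav")
decoded, rate2 = sf.read("decoded_mp3.wav")
assert rate == rate2, "sample rates must match"

# Trim to a common length and subtract to expose what the encoder changed.
n = min(len(original), len(decoded))
diff = original[:n] - decoded[:n]

def rms(x):
    return np.sqrt(np.mean(np.square(x)))

# How loud the difference signal is, relative to the original.
print("difference is %.1f dB below the original"
      % (20 * np.log10(rms(original[:n]) / rms(diff))))

# Write it out so you can actually listen to the added "noise".
sf.write("difference.wav", diff, rate)
[/code]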

So, if we're adding noise to a signal that someone is trying to interpret, it's pretty obvious that it's going to require more processing. Do you find it easier to understand speech in a crowded bar or a quiet room? This is why they put up acoustic panels in auditoriums; too much reverberation acts like noise and makes speech more difficult to understand. This isn't really debatable.

What is in question is whether or not noise that is below the threshold where you can actually hear it still has an effect on processing, and what research is showing over and over is that whether or not something is directly perceived has very little to do with the underlying processing. The brain is processing the sounds into a "reality" for your conscious mind to act on, and does not want to bother you with unimportant details that might distract you. What if every stimulus registered by your senses were consciously perceived!

Imagine walking blindfolded into a large room with sound reflective walls. You instantly become aware of the rough dimensions of the room, but you don't hear every echo of every footstep. A blind person would be able to "hear" in much greater detail, and in fact claim to be able to almost "see" the room. This is the kind of thing that the subconscious mind is constantly doing for us in the background, and what finally gets presented to our conscious mind is rather independent of what processing was required to achieve it. I admit that without at least a working knowledge of the literature, this may seem counterintuitive.

Another argument that was made is that if this extra processing is automatic and we are not aware of it, then maybe it has no other effect. The problem with this is that the brain, like all parts of the body, doesn't work that way. The processing consumes resources (this activity can actually be measured via the radiation it generates) and all the body's systems try to minimize resource use. If I ask you to perform a math calculation in your head you would have to concentrate to do it (and you might refuse). If I asked you to do calculations over and over you would probably get tired and annoyed. Just because other processing is unconscious does not mean it's any less taxing, and the brain will always attempt to minimize it.

BTW, I'll clear up one minor error Greg made in his information comparison of a 128 kbps MP3 and the original wav. To compare the amount of real information stored you have to compare the size of the MP3 with a losslessly compressed wav, so it's really more like throwing away 80% of the information instead of 90%. This is more correct from an information theory standpoint.
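
Roughly, the arithmetic behind that figure looks like this (the lossless compression ratio is an assumed typical value, not a measurement; it varies quite a bit with the material):

[code]
cd_bitrate = 1411.2      # kbps: 16-bit / 44.1 kHz stereo PCM
lossless_ratio = 0.5     # assumed typical lossless compression ratio
mp3_bitrate = 128.0      # kbps

lossless_bitrate = cd_bitrate * lossless_ratio      # roughly the "real" information rate

discarded_vs_wav = 1 - mp3_bitrate / cd_bitrate            # about 0.91
discarded_vs_lossless = 1 - mp3_bitrate / lossless_bitrate # about 0.82

print(f"relative to the raw WAV:       {discarded_vs_wav:.0%} discarded")
print(f"relative to the lossless size: {discarded_vs_lossless:.0%} discarded")
[/code]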

The reason that I linked to the first of those two papers is that I find this field fascinating from a scientific viewpoint and I suspect that some HA members might be less aware of the richness of research discoveries in this area (compared to some other "harder" scientific realms).

The reason I linked to the second paper is that it directly relates to the auditory system, and points out a case where subconscious perception of a certain auditory stimulus turns out to be much more sensitive than what is consciously perceived. What could be more closely related to this discussion? I can imagine a version of the experiment where levels of noise might be added that, even when well below the threshold of conscious audibility, would have an effect on the measured subconscious perception of the timing. That would be the end of the discussion! It would prove that distortions below the threshold of conscious perception can affect how we hear music (at least to the somewhat open-minded).

Finally, and this was another motivation for me, there seems to be a tendency here to oversimplify things that really are not that well understood. I am not trying to push any agenda and I am not an audiophile, but I am familiar enough with the scientific research to know that some of the claims often made here and taken as fact (at times in very condescending language) are simply not well supported at this time, and may in some cases turn out to be flatly wrong. I think it's fine if people have different opinions, and the free and lively interchange of ideas is what this board is all about. But I don't think it would hurt for a slightly more flexible attitude to prevail (and it would certainly reduce the possibility of many egg-covered faces in the future).

Have a good weekend! 

edit: grammar

What's the problem with double-blind testing?

Reply #66
Quote
But I don't think it would hurt for a slightly more flexible attitude to prevail (and it would certainly reduce the possibility of many egg-covered faces in the future).


You are so absolutely right...

What's the problem with double-blind testing?

Reply #67
Quote
Quote
So we aren't wired to detect "non-events" over events in general. For predator detection, yes...but not for something like, for example, detecting cheaters in a card game!


Really, now, would you like to show your evidence?

The evidence that partial loudness differences are overdetected has been in for years.

All auditory stimuli start out as a set of partial loudnesses, expressed as pulse-position modulation in the auditory nerves.

If you're right, this would require a complete revision of the entire understanding of how the human auditory system works.  Have you this revision prepared yet?


When I said we aren't "wired" to detect "non-events" in general, I was referring to any phenomenon to which signal detection theory can be applied. It is a fundamental tenet of the theory that the criterion location depends on the associated costs of the different sorts of errors. How that manifests in various auditory contexts is quite variable, depending on what the task is. Also, it relates to attention and to what the sound source is.
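
To make the criterion point concrete, a minimal signal-detection sketch with made-up hit and false-alarm rates (the numbers are purely illustrative):

[code]
from statistics import NormalDist

z = NormalDist().inv_cdf   # inverse of the standard normal CDF

# Hypothetical hit and false-alarm rates from a yes/no detection task.
hit_rate, false_alarm_rate = 0.80, 0.30

d_prime = z(hit_rate) - z(false_alarm_rate)              # sensitivity
criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))   # where the listener places the cutoff

print(f"d' = {d_prime:.2f}, criterion c = {criterion:.2f}")
# Raising the cost of false alarms pushes the criterion toward more conservative
# "no" responses without changing d'; that is the point about criterion location.
[/code]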

I'm not making myself clear if you think the ideas I've presented require a revision of how hearing works. What I am proposing is a result of processing in the auditory cortex, not the ears, and is consistent with what we know about auditory perception. Your account of partial loudnesses doesn't really seem particularly relevant to the issue of whether processing differences between compressed and uncompressed audio could be measured in a way that ABX testing misses.

Quote
It doesn't "make up" anything, it detects, in a very lossy and mathematically imperfect fashion, whatever is there. Since that's a lossy fashion, sometimes it imagines that things are there when they aren't, but that's not "making up the missing information" at all.


My example of the missing fundamental phenomenon illustrates nicely how unconscious inference works. There are many other examples as well. Our brains construct a good deal of what we "perceive" based on what is often quite degraded information. For example, your visual experience is radically different from the information on your retina. Perception has a huge filling-in component.

Quote
People can easily parse sentences with 'degraded' content, such as sentences where 'the the' appears, as well as other typos involving *missing* words or data.
Has it been determined what level of degradation has to be reached before the *time* it takes to parse a sentence is affected? If two tasks take an effectively indistinguishable amount of time for the same person, then one cannot be said to be more 'difficult' than another, can it? The only other measure of 'difficulty' I could imagine would be something like, the number of neurons engaged, or the amount of blood flow involved.


Actually, there is a ton of research on language processing using written sentences, and even something like a doubled "the" will slow down the system in a statistically significant way. What is often considered a significant amount of time in a cognitive sense is in reality quite fast (e.g., 100 ms is a big effect), but people generally don't have a conscious sense of these sorts of time issues. Basically it boils down to neural activity. Two different tasks can seem indistinguishable to someone, yet have radically different time courses. So yes, this issue is about brain activity, and because the brain is such an energy hog, most roads to easier processing are taken. In terms of the argument David and I are presenting here, the neural processing differences might manifest in long-term fatigue effects that could be responsible for some people's discomfort with lossy formats.

One idea that many people seem to be resisting is the notion that very subtle processing differences can result in subtle long term feelings. There seems to be a tendency to overestimate the amount of information that is accessible to consciousness, and as a result, let conscious judgment be the last word on what is actually perceived. But this is definitely not the case as psychophysical researchers (including psychoacoustics people) have known for over a century.

Quote
You haven't demonstrated that it's even an *issue* for lossy schemes. Horses before carts, please.

And since we still seem to be in the realm of the suppositional, suppose perception is a matter of stimuli crossing sensory and cognitive *thresholds*. It could be that as long as the 'lossy' stimulus (lossy only in comparison to the original stimulus) still crosses the right thresholds, it's all the same to the brain.
This would mean that the 'best' 'lossy' representations are simply those that are *good enough*, and that they don't require any more 'effort' to process.


I think I've presented a lot of converging evidence that suggests it could very well be an issue. Like I've said, it's a testable idea. David just presented one idea that would speak to it, and we have other ideas as well. You have to find a cart before the horse can pull it.

 

The threshold theory is just another version of the idea discussed earlier concerning why it is more labor intensive, so to speak, to perceive a degraded stimulus rather than a less noisy stimulus. David's example of listening to speech in a noisy bar is a good example. But I will throw up some references soon about neural processing and stimulus quality. As one should expect, degraded signals are more difficult to process.

What's the problem with double-blind testing?

Reply #68
Quote
So, if we're adding noise to a signal that someone is trying to interpret, it's pretty obvious that it's going to require more processing. Do you find it easier to understand speech in a crowded bar or a quiet room? This is why they put up acoustic panels in auditoriums; too much reverberation acts like noise and makes speech more difficult to understand. This isn't really debatable.

This also isn't what is being debated: audible differences lie within the scope of an ABX test.

Quote
What is in question is whether or not noise that is below the threshold where you can actually hear it still has an effect on processing, and what research is showing over and over is that whether or not something is directly perceived has very little to do with the underlying processing. The brain is processing the sounds into a "reality" for your conscious mind to act on, and does not want to bother you with unimportant details that might distract you. What if every stimulus registered by your senses were consciously perceived!

Well, yes, this is what psycho-acoustic models try to take advantage of.

Quote
Imagine walking blindfolded into a large room with sound reflective walls. You instantly become aware of the rough dimensions of the room, but you don't hear every echo of every footstep. A blind person would be able to "hear" in much greater detail, and in fact claim to be able to almost "see" the room. This is the kind of thing that the subconscious mind is constantly doing for us in the background, and what finally gets presented to our conscious mind is rather independent of what processing was required to achieve it. I admit that without at least a working knowledge of the literature, this may seem counterintuitive.

No, this is not what sounds counterintuitive. It sounds counterintuitive that my brain would have to work harder on a lossy compressed signal than on the original. Also note that duff and you are not helping each other, because you say the brain needs to work harder to filter out the noise, and duff says the brain needs to work harder to somehow interpolate the signal, thus creating info. While I might be misunderstanding duff, and I found your explanation more plausible, this still doesn't keep me from wondering why it would require more effort. In both cases, all the info must be processed and a decision must be made whether it is passed through to my consciousness or not. But as long as both signals carry the same amount of info, the same processing "power" is required, independently of what my conscious mind receives.

Now I cannot say whether a noisy signal would contain more info than a clean one (probably yes, but can we measure this? If, as you say, a losslessly compressed wav is a reasonable guess of the amount of info in a signal, then in the three tests I have quickly done now, recompressing the lossy files to lossless results in a smaller size than the original, thus indicating a smaller amount of info), but all I am asking is some proof for these claims. When people start using phrases like "It goes without saying" and "uncontroversial" I always immediately start asking myself: "Is that so?". And since these claims are stated so firmly, I don't think it is unfair of me to ask for some sources that confirm them. Enfin...
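
For what it's worth, the quick test I mean is roughly the following, assuming the lame and flac command-line tools are installed (a sketch, not a rigorous measurement, and "original.wav" is just a placeholder for whatever sample you use):

[code]
import os
import subprocess

src = "original.wav"   # hypothetical test file

# Encode to 128 kbps MP3 and decode it back to WAV.
subprocess.run(["lame", "-b", "128", src, "encoded.mp3"], check=True)
subprocess.run(["lame", "--decode", "encoded.mp3", "decoded.wav"], check=True)

# Losslessly compress both the original and the decoded lossy version.
subprocess.run(["flac", "--best", "-f", "-o", "original.flac", src], check=True)
subprocess.run(["flac", "--best", "-f", "-o", "decoded.flac", "decoded.wav"], check=True)

# A smaller FLAC for the decoded lossy file is at least a hint that it
# carries less information than the original.
print("original.flac:", os.path.getsize("original.flac"), "bytes")
print("decoded.flac: ", os.path.getsize("decoded.flac"), "bytes")
[/code]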

Quote
opinions, and the free and lively interchange of ideas is what this board is all about. But I don't think it would hurt for a slightly more flexible attitude to prevail (and it would certainly reduce the possibility of many egg-covered faces in the future).

Oh, I'm flexible, and I wasn't throwing eggs, but I can't do much with mere assumptions if that is all they are. If you say I can't trust my intuition in this area, I have no reason to believe one assumption more than the other if neither is backed by any kind of proof. This has nothing to do with flexibility, but with practicality.

Quote
Have a good weekend! 

You too. 
"We cannot win against obsession. They care, we don't. They win."

What's the problem with double-blind testing?

Reply #69
Quote
What's surprising is that this can be a relatively high level of noise (which is why these lossy codecs work so well). You can easily do a test where you subtract an original music signal from its encoded version. You might be surprised how much noise is added.

Hmm, I don't think that is a fair demonstration. It reminds me of the distinction between what is a sound and what is a difference between sounds: is the difference between sounds a sound? Our stereo perception between the ears perceives the difference between sounds in a totally different way than we perceive sounds in each ear. This difference between the wav and the encoding, although it can be represented [audibly rendered] as a sound, isn't really such a thing.

Then I'm wondering what it's like to perceive a lossy representation of a sound in one ear and the lossless version in the other; I wonder if that has been tested?
Quote
So, if we're adding noise to a signal that someone is trying to interpret, it's pretty obvious that it's going to require more processing.

Perhaps, but it's not a sure bet.
Quote
Do you find it easier to understand speech in a crowded bar or a quiet room? This is why they put up acoustic panels in auditoriums; too much reverberation acts like noise and makes speech more difficult to understand. This isn't really debatable.

That is about interference and masking, but lossy encoders were developed so that their performance depends on not damaging 'clarity': they estimate those considerations and keep within, at the very least, 'concentratively imperceptible' limits.

Quote
What is in question is whether or not noise that is below the threshold where you can actually hear it still has an effect on processing, and what research is showing over and over is that whether or not something is directly perceived has very little to do with the underlying processing. ...  Just because other processing is unconscious does not mean it's any less taxing, and the brain will always attempt to minimize it.

I agree with a lot of this. Probably no surprise.

A big problem for the forum with these possibilities is that they are the preferred retreat of (usually unwitting) snake-oil technology sellers, despite being by nature difficult to document, which makes in-store demos that rely on them ridiculous.

I have this weird sensation sometimes when I'm about half asleep, where I feel sharp sounds from unexpected events like clangs and ticks as a wave of light travelling through my body. I've wondered if that could be tranquil nerves in the body feeling the sound as it passes through them, or if it is just a kind of hallucination. Perception and consciousness are surely still bewildering and mysterious things.

Regards,
no conscience > no custom

What's the problem with double-blind testing?

Reply #70
Quote
It sounds counterintuitive that my brain would have to work harder on a lossy compressed signal than on the original. Also note that duff and you are not helping each other, because you say the brain needs to work harder to filter out the noise, and duff says the brain needs to work harder to somehow interpolate the signal, thus creating info.


Filtering and unconscious inference processes take place in the brain on all sinusoidal waves (light and sound). These processes are taxed by noise.

Quote
This also isn't what is being debated: audible differences lie within the scope of an ABX test.


David's example addressed the contention that noisy stimuli aren't harder to process. Why would this principle change just because the noise isn't audible? Again, you need to separate the processing from the product of that processing (i.e., the percept).

Quote
When people start using phrases like "It goes without saying" and "uncontroversial" I always immediately start asking myself: "Is that so?".


Frankly, I find it surprising that I need to convince people that degraded signals require more effort to process. This is not the speculative element of the idea. But I understand that our use of the term "effort" is not one people ordinarily consider. I will provide examples of research demonstrating this soon.

What's the problem with double-blind testing?

Reply #71
Quote
Frankly, I find it surprising that I need to convince people that degraded signals require more effort to process. This is not the speculative element of the idea. But I understand that our use of the term "effort" is not one people ordinarily consider. I will provide examples of research demonstrating this soon.

Because they aren't 'degraded'; they are changed and filtered. They aren't 'messier'; maybe they are notably artificial in some respects, but if you are used to referring to the necessarily perceptibly subtle changes allowed by lossy encoding with pejorative terms such as 'degraded' and, elsewhere, 'impoverished', you'll have a problem seeing them fairly. That a perceptibly subtle change can be 'strong' or 'powerful' doesn't mean it's 'noisy', just different. I'm open to the idea that there are unconscious differences in their perception, and maybe some unconscious stresses to watch out for in psychoacoustics, but different sound doesn't have to be messier sound.
no conscience > no custom

What's the problem with double-blind testing?

Reply #72
A simplistic example: a sine wave of exactly 3122.4873 Hz might be encoded as a sine of 3121 Hz. Loads of waves in a signal might have their accuracies reduced like this, some removed completely, some allowed to be a bit louder or quieter to fit a nicely compressible Huffman block. Each sine is still smooth, and even though the difference between the encode and the original wave will sound like white noise, there is no sound of white noise in the encode.
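
A toy version of that example, just to illustrate the arithmetic (this is not an actual codec, and the "encoding" here is nothing more than storing the frequency less accurately):

[code]
import numpy as np

rate = 44100
t = np.arange(rate) / rate                      # one second of samples

original = np.sin(2 * np.pi * 3122.4873 * t)    # the "true" partial
encoded  = np.sin(2 * np.pi * 3121.0 * t)       # the frequency stored less accurately

residual = original - encoded                   # what a subtraction comparison "hears"

def rms(x):
    return np.sqrt(np.mean(x * x))

# The residual carries real energy, yet the "encoded" signal itself is still
# one smooth sine: there is no noise-like component in the encode.
print("residual level: %.1f dB relative to the original"
      % (20 * np.log10(rms(residual) / rms(original))))
[/code]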
no conscience > no custom

What's the problem with double-blind testing?

Reply #73
-Sorry for flooding a bit there. Tasty subject
no conscience > no custom

What's the problem with double-blind testing?

Reply #74
Quote
Quote
Frankly, I find it surprising that I need to convince people that degraded signals require more effort to process. This is not the speculative element of the idea. But I understand that our use of the term "effort" is not one people ordinarily consider. I will provide examples of research demonstrating this soon.

Because they aren't 'degraded'; they are changed and filtered. They aren't 'messier'; maybe they are notably artificial in some respects, but if you are used to referring to the necessarily perceptibly subtle changes allowed by lossy encoding with pejorative terms such as 'degraded' and, elsewhere, 'impoverished', you'll have a problem seeing them fairly. That a perceptibly subtle change can be 'strong' or 'powerful' doesn't mean it's 'noisy', just different. I'm open to the idea that there are unconscious differences in their perception, and maybe some unconscious stresses to watch out for in psychoacoustics, but different sound doesn't have to be messier sound.


The reduction of resolution and the introduction of noise into a signal, to my mind, qualify as degradation. That's what lossy encoders do. I use the word in a technical sense, so I don't mean to disparage any lossy format with "pejorative" intentions! Bad MP3 file! Bad!