Someone is waving this in my face at another forum, after I asserted that people who can tell modern 320kbps LAME encodes from source in ABX constitute a tiny minority of listeners, and even they may require 'killer samples', rather than being able to do it all the time, e.g., with a random mix of music.
You talk about LAME. Let's look at that: http://wiki.hydrogenaudio.org/index.php?title=Lame (http://wiki.hydrogenaudio.org/index.php?title=Lame)
Here is a graph from that page:
(http://wiki.hydrogenaudio.org/images/2/2c/Lame-chart-2.png)
It shows that it *never* achieves transparency regardless of data rate.
I called bullsh*t on that rhetoric (e.g., on the very same wiki page, we see: "-V0 (~245 kbps), -V1 (~225 kbps), -V2 (~190 kbps) or -V3 (~175 kbps) are recommended. These settings will normally produce transparent encoding (transparent = most people can't distinguish the MP3 from the original in an ABX blind test). Audible differences between these presets exist, but are rare.").
IMO, this graph is not evidence that 320 'never achieves transparency'. I know some people can ABX 320kbps. That is not the issue -- I have
never claimed 320 is transparent to EVERYONE, all the time. Just that it's a rare ability, typically requiring training to hear specific artifacts, and possibly requiring carefully-chosen samples to reveal them.
The graph also lacks error/variation bars -- where's it from, btw?
Dude (who is a former colleague of JJ) refuses to come over here and argue his case. I told him I'd bring it here for him. What say you, LAME developers and users?
full post:
http://www.avsforum.com/avs-vb/showpost.ph...p;postcount=348 (http://www.avsforum.com/avs-vb/showpost.php?p=20864798&postcount=348)
Either you, or the guys who told you about that graph, missed the point.
The graph has one single use: to show that as the bitrate increases, the quality increases in ever smaller steps.
Or, to put it in other words: doubling the bitrate does not mean doubling the quality.
Also, nowhere does the graph talk about transparency. It talks about quality.
Except for hybrid codecs, or codecs designed specifically for that purpose (which are rare), lossy codecs are not lossless at their highest setting.
As such, quality can't be 100% with any setting.
If you want to add there the transparency point, it would be somewhere around 8, but killer samples tend to disagree.
Addendum: Note that the graph is not based on math. The graph could be considered "the average/usual case". That's another reason not to draw more conclusions than those mentioned.
So, what is the definition of 'quality' being used in the context of that graph? Numerical identity with source? Likelihood of being ABX-able from source?
(and where's the graph from originally?)
So, what is the definition of 'quality' being used in the context of that graph? Numerical identity with source? Likelihood of being ABX-able from source?
How many arbitrary units of quality the file has.
IMO, this graph is not evidence that 320 'never achieves transparency'.
Completely agree. The graph is missing at least 4 important things:
- Confidence intervals around all data points
- Mean and confidence interval of the hidden reference, since most of the time that reference does not end up getting quality 10!
- a description of what exactly is meant by "perceived listening quality", i.e. the y-axis label
- a description of the test material used to obtain those quality results.
As such, quality can't be 100% with any setting.
Why not? Full
perceptual transparency for every listener in the universe implies 100% quality.
If you want to add there the transparency point, it would be somewhere around 8...
Why? Or rather, where did you read that? In MUSHRA tests, 80% and above means "excellent/broadcast quality". Transparency, though, is when the confidence intervals of a codec and the hidden reference overlap. That's the reason why the hidden reference is needed in that plot.
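The overlap criterion described here is easy to sketch in a few lines. This is only an illustration: the listener scores below are hypothetical, and a real MUSHRA analysis follows the full per-listener statistics of the test spec, but it shows the mechanics of comparing a codec's confidence interval against the hidden reference's:

```python
import math

def mean_ci(scores, z=1.96):
    """Return (mean, lower, upper) of an approximate 95% confidence interval."""
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)  # sample variance
    half = z * math.sqrt(var / n)
    return mean, mean - half, mean + half

def intervals_overlap(a, b):
    """True if two (mean, lower, upper) intervals overlap."""
    return a[1] <= b[2] and b[1] <= a[2]

# Hypothetical MUSHRA scores (0-100) from ten listeners:
reference = [98, 100, 95, 99, 97, 100, 96, 98, 100, 99]  # hidden reference
codec     = [92, 97, 88, 95, 90, 99, 93, 91, 96, 94]     # codec under test

ref_ci = mean_ci(reference)
cod_ci = mean_ci(codec)
print(intervals_overlap(ref_ci, cod_ci))  # False: intervals disjoint, so
                                          # not transparent on this data
```

Note that the hidden reference's own interval sits below a perfect score, which is exactly why the plot under discussion would need the reference plotted alongside the codecs.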
Chris
Also, nowhere does the graph talk about transparency. It talks about quality.
More specifically, it says "perceived listening quality". As such it is not unreasonable to assume from the graph that mp3 never reaches transparency.
Also, nowhere does the graph talk about transparency. It talks about quality.
More specifically, it says "perceived listening quality". As such it is not unreasonable to assume from the graph that mp3 never reaches transparency.
In which case I wonder if the labeling of the graph isn't just misleading. What is meant by 'quality' on that graph, not to mention 'transparency'? What does '10' represent on the graph? Are these not aggregate results of ABC/hr tests? Surely those points have confidence intervals associated with them?
Where does this graph come from??
Exactly what I said.
Someone called Jan: http://wiki.hydrogenaudio.org/index.php?ti...ame-chart-2.png (http://wiki.hydrogenaudio.org/index.php?title=File:Lame-chart-2.png)
Chris
Where does this graph come from??
http://www.hydrogenaudio.org/forums/index....2288&st=100 (http://www.hydrogenaudio.org/forums/index.php?showtopic=32288&st=100)
apparently this, to be precise
http://www.hydrogenaudio.org/forums/index....st&p=329974 (http://www.hydrogenaudio.org/forums/index.php?showtopic=32288&view=findpost&p=329974)
I suppose I'll have to read that whole thread to find out why in hell
this was done:
Arbitrarily, I'd give the following quality levels:
--abr 56: 3
--abr 90: 5
-V5: 7
-V4: 8
-V3: 8.5
-V2: 8.7
-V0: 9.1
-b 320: 9.2
and what 'XLS' is.
And why the HA wiki for LAME is using a cryptic graph about an old version of the codec, to inform the public. (Don't worry, I know the answer -- ITS A WIKI, IF YOU DON'T LIKE IT, FIX IT)
Yeah, it seems at best rather useless and at worst misleading. If it’s trying to convey that there are diminishing returns of perceived quality with increasing bitrate, that seems much better—and easier—expressed with words. Hey, I just did it!
And why the HA wiki for LAME is using a cryptic graph about an old version of the codec, to inform the public. (Don't worry, I know the answer -- ITS A WIKI, IF YOU DON'T LIKE IT, FIX IT)
I’ll happily remove it if that’s the general desire.
and what 'XLS' is
MS Excel spreadsheet
and what 'XLS' is
MS Excel spreadsheet
LOL. I should've seen that coming.
OK, whose? (hovers over link) ..ok, it's Synthetic Soul's, but I can't locate the original context in which he posted the graph. The plot thickens...
Dude still won't post here, but does have this to say:
The graph only has one meaning. That increasing bit rate never achieves transparency.
Is the objection limited solely to a somewhat misleading graph and a blatant misinterpretation thereof, or is amirm railing against MP3 using a line of reasoning that has not been subjected to a double-blind test? Confidence of pronouncement often varies inversely with willingness to proffer evidence, something that should be abundant and close to hand if the claimant is correct. Ironic!
The graph only has one meaning. That increasing bit rate never achieves transparency.
That is the sort of thing a trial lawyer or a fundamentalist would say. No point arguing, he's hooked up on a reading of an old and imprecise graph, and won't change what he says.
Or, as they say, can't tell if troll or really stupid.
Is the objection limited solely to a somewhat misleading graph and a blatant misinterpretation thereof, or is amirm railing against MP3 using a line of reasoning that has not been subjected to a double-blind test? Confidence of pronouncement often varies inversely with willingness to proffer evidence, something that should be abundant and close to hand if the claimant is correct. Ironic!
Well, he also now says I'm not conveying his beliefs correctly, but here goes (feel free to wade through that thread if you want to):
Apparently (to me) he believes I claim LAME 320kbps CBR or 192kbps VBR mp3s are transparent
universally (to everyone, all the time). Apparently he thinks the graph shows mp3s can
never be transparent. You see the artificial gulf there?
First belief: simply wrong. I never claimed that anywhere, and never would. Saying 100 random listeners using random tracks would probably fail an ABX of 320kbps vs source isn't saying that everyone always would. It's just saying 'that's how damn good LAME/320kbps is'
Second belief: hinges on the strictest definition of 'transparent', and puts a burden on that graph that it wasn't meant to bear. Misses the forest for the trees.
IMHO. He thinks you guys are beating me up here, so what do I know? ;>
IMHO. He thinks you guys are beating me up here, so what do I know? ;>
amirm failed spectacularly at comprehending that example chart. That makes him at least clueless about codec testing, and probably stupid as well, since it's not really that hard to understand if you can use words and understand ideas. (My apologies if he simply does not speak good English and so could not read the chart properly.) He's arguing about codec testing. You know he's ignorant about codec testing. Why are you even having an argument with him?
Point out he's too ignorant to have useful ideas about this and move on.
What is the definition of "transparent"?
I am probably incorrect, so please inform me if so.
I was thinking about the definition of transparency as well. I assumed "transparency" meant something like a lossy encode being audibly equivalent to a lossless encode, according to human listeners. on the other hand, I also assumed "perceived listening quality" is how good it sounds according to human listeners.
in either case of what I thought, they were basically the same thing: an opinion of the listener.
Which listener(s)? What source material? How are they compared?
I think by full 'Quality' on the graph they mean that it will never be bit-exact quality compared to the source file. You will never be guaranteed that it will be lossless even if you pump the bitrate up to 320Kbit/s.
Transparent might be the wrong word. Transparent means that it 'sounds' transparent to the user.
Just because 192Kbit/s or even 320Kbit/s Vorbis sounds 'transparent' to me doesn't mean that it is full 'Quality'.
The graph probably wants to show that Lossy != Lossless.
The graph probably wants to show that Lossy != Lossless.
The graph wants to show that quality and bitrate (or "quality setting") do not increase in perfect tandem with each other, as stated earlier in this very thread. This is hardly news for anyone who's ever compressed any sort of media file.
I think by full 'Quality' on the graph they mean that it will never be bit-exact quality compared to the source file.
...not if one is to interpret the legend literally!
As I said already:
it says "perceived listening quality"
The legend is blatantly incorrect, of course.
Regarding what the other guy is saying from the other forum, who cares. He can either come here and argue, or you guys can go there. I do not think it is appropriate for us to host an argument by proxy.
Hey, I agree argument by proxy is lame. Amir (who claims to be an EXPERT on lossy codecs, via his Microsoft/JJ connection) offered up the HA wiki graph as proof of his claim. I told him he really should check his interpretation with the folks who actually sponsor the page, because I thought he was misinterpreting it and I thought the graph itself had some problems, like the lack of 'error' bars (not expecting him to just take my word for it). Anyway, he's got a link to this thread, I hope he's reading along and learning from it.
As for transparency, meaning of, HA's own knowledgebase has this to say:
http://wiki.hydrogenaudio.org/index.php?title=Transparency (http://wiki.hydrogenaudio.org/index.php?title=Transparency)
Clearly this is someone who believes in the validity of whatever supports his own personal beliefs. If this graph had been drawn a little differently he would have argued vigorously how worthless it was, probably using many of the same points that have been made here.
I think by full 'Quality' on the graph they mean that it will never be bit-exact quality compared to the source file. You will never be guaranteed that it will be lossless even if you pump the bitrate up to 320Kbit/s.
Transparent might be the wrong word. Transparent means that it 'sounds' transparent to the user.
Just because 192Kbit/s or even 320Kbit/s Vorbis sounds 'transparent' to me doesn't mean that it is full 'Quality'.
The way you put it, it's wrong. Quality is
always a subjective term that may or may not be determined by objective parameters. Only if such a determination has been established can one talk of "quality" as an objective term. Now if you want to compare a signal's perceived quality to that of a reference signal, you can safely assume that, if the signal is identical to the reference signal, the quality is also potentially identical (given that you listen to it in exactly the same way). Thus lossless encoding can be said to have a "quality" of 100%, or in different words, to be transparent. Now if we leave out parts of the original signal, its quality potentially degrades, but we can't say exactly in which way, because of the unknown (infinitely complex) relation between our perception and the data-reducing (psycho-acoustical) algorithms involved.
So we have to test empirically. That's how those algorithms are established and that's how we compare their outcomes. We can do an ABX test if we want to check for transparency, or simply an AB test if we want to compare for quality. If such a test has been properly conducted and the number of trials and individual participants is large enough, we may draw relations like those (improperly) shown on this graph. Such a graph could very well show a quality of 100% (or 10 in this case) for a lossy encode. This relation, however, is only as solid as the empirical data behind it. Strictly speaking, it's never possible to empirically prove (http://en.wikipedia.org/wiki/Problem_of_induction) that it would be transparent for
every individual under
any circumstance, but if our data is good enough we can assume it with pretty much the same confidence we have that the sun will rise again tomorrow.
What is the definition of "transparent"?
Assuming you do not intend this question to be rhetorical, I would suggest a definition of "demonstrable over the course of multiple double-blind listening tests to be indistinguishable from the uncompressed (or losslessly-compressed) source with a statistical significance of p<0.05".
Perhaps I'm completely ignorant about p values, but I thought they are used to demonstrate the probability that someone has had a successful outcome without guessing. It is also my understanding that when someone cannot distinguish a difference and is left to guessing, the p value increases approaching 1. Based on this, saying someone is guessing with p<0.05 doesn't make any sense.
Can someone enlighten me?
Yep, the p-value is the probability that the observed result could otherwise have been obtained by chance. A result over the chosen significance level (usually 0.05) leads one to (not accept, but) fail to reject the null hypothesis, which usually proposes no effect of whichever treatment is being investigated.
I too am not sure how one can demonstrate that there is not an audible difference. One cannot confirm the null hypothesis (i.e. prove absence of an effect), only fail to reject it. Thus, transparency could be argued if multiple users failed to achieve statistically significant p-values, but not the converse.
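For what it's worth, the binomial arithmetic behind an ABX p-value is easy to sketch. This is a one-sided test against the guessing hypothesis (p = 0.5); the trial counts below are made up for illustration:

```python
from math import comb

def abx_p_value(correct, trials):
    """One-sided binomial p-value: the probability of getting at least
    `correct` answers right in `trials` ABX trials by pure guessing."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 12 of 16 correct: unlikely under guessing, so reject the null at 0.05.
print(abx_p_value(12, 16))  # ~0.038
# 9 of 16 correct: entirely consistent with guessing; fail to reject.
print(abx_p_value(9, 16))   # ~0.40
```

As the posts above note, the second result does not *prove* transparency; it only fails to demonstrate a difference.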
What is the definition of "transparent"?
Assuming you do not intend this question to be rhetorical, I would suggest a definition of "demonstrable over the course of multiple double-blind listening tests to be indistinguishable from the uncompressed (or losslessly-compressed) source with a statistical significance of p<0.05".
It was not rhetorical. I have read the HA wiki and Wikipedia. Neither your proposed definition nor either of these articles tells me precisely what listener we're talking about. The HA article suggests it is fair to use killer samples to demonstrate non-transparency. The graph we're discussing reportedly used conventional material. The lack of error bars is a problem, but the graph does appear to make a case for a lack of "transparency" according to the definitions I've read.
I have read the HA wiki and Wikipedia. Neither your [Zarggg] proposed definition nor either of these articles tells me precisely what listener we're talking about.
Those discussing transparency usually either refer to the majority (a problematic concept in itself as it usually refers to fairly tech-savvy listeners, but we'll have to make do until Big Brother mandates listening tests) or to the specific listener being addressed in a given discussion. Since you seem to have been referring specifically to this by krabapple . . .
[. . .] Apparently he thinks the graph shows mp3s can never be transparent. You see the artificial gulf there? [. . .] [This] hinges on the strictest definition of 'transparent', and puts a burden on that graph that it wasn't meant to bear. Misses the forest for the trees.
. . . I will agree that it seems to be implying some objective definition of transparency, which is obviously a non-starter, as is made plain on the Knowledgebase page:
Transparency, like sound quality, is subjective.
But I do not understand your question about the listener(s), and krabapple is correct to point out the nonsensicality of amirm saying that MP3 can never be transparent. It remains to be answered whether the latter is simply questioning the graph, which has already been discredited; or is lambasting the format overall as being insufficient for every possible use-case, in which event he is blatantly incorrect.
I think the wiki article on "transparent" is pretty good. With all those caveats, something is transparent for you if you can't hear the difference (in a double-blind test - but that goes without saying).
But more generally, the things we accept as transparent are things where reports of hearing a difference in a double-blind test are rare and uncorroborated.
I think far longer has now been spent discussing that graph than went into making it. It would probably have been better if the quality scale had no units. I'm not aware of any reliable units of subjective audio quality which have a globally understood and repeatable quantity in the same way as the units on a ruler.
Cheers,
David.
I think far longer has now been spent discussing that graph than went into making it. It would probably have been better if the quality scale had no units. I'm not aware of any reliable units of subjective audio quality which have a globally understood and repeatable quantity in the same way as the units on a ruler.
That strikes me as throwing the baby out with the bathwater. The scale on a MUSHRA test is calibrated by its anchors. We're always going to have uncertainty in these results so the ruler is not a good model.
The original question here even before the graph entered the conversation was, "Is 320 kb LAME transparent?" So long as we can agree on a technical definition of "transparent" it seems like a simple experiment to conduct. The graph indicates that it has been attempted and the unverified results indicate that it is not transparent.
But, I'm getting the sense that there is no technical definition for transparent; that it's a "subjective" thing with context-dependent meaning. The HA article, for instance, describes how to demonstrate non-transparency, then lists some transparent audio formats that I'm pretty sure have been shown to be non-transparent by these guidelines, at least with specific program material and ears.
The HA article, for instance, describes how to demonstrate non-transparency then lists some transparent audio formats that I'm pretty sure have been shown to be non-transparent by these guidelines at least with specific program material and ears.
It lists formats and bitrates at which they may be artifact free.... It does not state that they
are transparent at those bitrates for all material.
A point about the article - the statement "use double-blind tests where the high anchor is the original uncompressed audio" implies that compressed audio should not be used. Why would the original audio compressed in FLAC be unsuitable? Surely either "uncompressed" should be removed or changed to "lossless" or "unprocessed" or something like that.
"Is 320 kb LAME transparent?" So long as we can agree on a technical definition of "transparent" it seems like a simple experiment to conduct.
There are (just search hydrogenaudio) killer samples that can be quite easily ABXed between MP3 at 320kbps and the original.
One could say that this is proof that MP3 is not transparent at 320kbps.
Yet, that would be a pretty bad definition of transparent, especially since we are talking about lossy codecs, so let me explain:
If I get a single file which is encoded with a lossless codec, and the audio portion is not bit-for-bit identical to the original, I can say that the encoder is buggy/is not working as expected/its quality is not 100%.
If I get a single file which is encoded with a lossy codec at its highest setting, and I am able to ABX it against the original, I can only say that the file is a killer sample for this encoder and version, which might or might not be a killer sample for the format itself, and I will have to test if other lossy codecs do a better job or also fail on it.
I cannot conclude that the format or encoder is not transparent. I can only say that there is a sample which is not transparent, but transparency with a lossy codec refers to what happens in the general case.
To put it in other words: there are many files where one would fail to ABX an MP3 at 320kbps against its original. Since this number is empirically larger than the number of cases where it can be ABXed, this setting is considered transparent for the general case.
Finally, there is no way to demonstrate transparency other than demonstrating that the file is bit-for-bit identical (in which case the transparency is a logical consequence of the files being the same, not the opposite).
As said above, one can only demonstrate a difference using an ABX test. A non-difference cannot be demonstrated with this method.
<sigh>
A link to the discussion was given regarding the origin of that graph. Who bothered to read it?
Please allow me to repost (bold type added by me):
What about a perceived quality vs size graph?
Arbitrarily, I'd give the following quality levels:
--abr 56: 3
--abr 90: 5
-V5: 7
-V4: 8
-V3: 8.5
-V2: 8.7
-V0: 9.1
-b 320: 9.2
This is purely informal, but if you trace a graph of perceived quality vs average size, you will probably obtain a nice curve with valuable indication regarding efficiency of the settings.
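Taken purely at face value, those informal numbers do show the diminishing-returns shape: each step up in setting buys a smaller quality increment. A trivial sketch using the ratings quoted above (the ratings are arbitrary, so the deltas are too):

```python
# Gabriel's informal quality ratings from the quoted post, in order of setting:
ratings = [
    ("--abr 56", 3.0),
    ("--abr 90", 5.0),
    ("-V5", 7.0),
    ("-V4", 8.0),
    ("-V3", 8.5),
    ("-V2", 8.7),
    ("-V0", 9.1),
    ("-b 320", 9.2),
]

# Print the quality gained at each step up; the increments shrink overall.
for (prev_name, prev_q), (name, q) in zip(ratings, ratings[1:]):
    print(f"{prev_name} -> {name}: +{q - prev_q:.1f}")
```

The -V2 to -V0 step (+0.4) is the "unsightly kink" mentioned below, which is why a -V1 point was interpolated into the chart.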
As for the -V1 datapoint:
I added -V1 in there, to get the unsightly kink out of the first graph (the quality value is simply the average between -V2 and -V0).
IOW, these numbers are
not based on scientific testing. That we need to somehow go down the rabbit hole justifying the unjustifiable by attempting to (re?)define the word transparent is deserving of ridicule.
Throwing out the baby with the bath water? There is no baby to throw.
In regards to the OP, what is the motive for slamming LAME anyway? I didn't read this "expert's" thread because hearing FUD about a codec gets... lame, pun intended I guess. Is he/she pimping another MP3 encoder, or another lossy format? I don't get the motivation for using the graph as evidence when it is clearly a loose (a very, very loose) representation of the bitrate-to-quality tradeoff.
I really don't see a vast number of HA users (the ones that encode to MP3) using LAME because of its known inferior design of no-transparency, duh.
As other users already suggested, he/she should do some ABX, or better yet participate in one of the public listening tests, and just see how much (or little) of a contribution they can give.
Sorry about reader fatigue and sloppy research.
I find this topic interesting because the religion here at HA is that modern first-generation lossy encoding sounds the same as the original. "Transparent" is commonly used to describe this feature. And yet we have many audio lovers (including HA members) who insist on using lossless formats. And why is that? Because lossless gives them peace of mind? Because they think lossless sounds better? Or is it because perceptual coding, no matter the sophistication or bitrate, potentially introduces significant and audible degradation?
I think the main reason that many of us here at HA use lossless encoding is to avoid generational degradation. If I thought that I would never need to reencode to a different format EVER, then I might be perfectly happy with a high quality lossy encode for my collection.
Ignore killer samples and golden ears, and instead think of transparency as referring to general usage and general users. Does the problem now magically evaporate?
Transparency is subjective in the sense that it is limited to certain user(s) by definition, but this is a much better kind of subjective than the things ToS8 exists to avoid. Is there any point in quibbling over semantics and lamenting the fact that one can never objectively describe the quality of representation provided by a lossy codec?
And yet we have many audio lovers (including HA members) who insist on using lossless formats. And why is that? Because lossless gives them peace of mind? Because they think lossless sounds better? Or is it because perceptual coding, no matter the sophistication or bitrate, potentially introduces significant and audible degradation?
I think the main reason that many of us here at HA use lossless encoding is to avoid generational degradation.
When dealing with master tracks lossy is a non-option. In that last regard MP3 could be considered ephemeral and for casual listening but never for archival.
Not to throw the thread out of orbit, I think there might be a benefit to defining terms for "transparency" in regards to the degree of "lossy quality." Of course, users' ears will differ just as much as their music genres/preferences will differ: i.e. classical vs. pop rock will have different average bitrates at -V2. Artifacts are likely to surface in lossy encodes, but how annoying they are and what the listener considers annoying (pre-echo vs. warbling) is also another area of opinion.
I have gone ahead and updated the caption for the graph in the wiki, and linked to the relevant discussions.
Old caption: Here a trial to see how the perceived listening quality improves with settings/averaged filesize
New caption: This informal graph shows how LAME's highest quality settings result in progressively larger file sizes, but yield relatively smaller gains in perceived listening quality.
I think there might be a benefit to defining terms for "transparency" in regards to the degree of "lossy quality."
For any particular individual at any given time, something is either transparent or it isn't. Within these constraints there are no shades of gray.
I have gone ahead and updated the caption for the graph in the wiki, and linked to the relevant discussions.
Do away with the scale and data points and I might buy into the idea of there being a graph, otherwise dump it altogether.
"Transparent" is a word, and words have different kinds of usage.
I take it that, in a technically defined sense, "transparent" means that in a particular (series of) test(s), particular listener(s) were unable to tell the difference between specific lossily-coded sample(s) and their original(s). Even in this defined sense, transparency can't be proved, since individual listeners can have good days and bad days.
As a word in general conversation, it seems to often be used in a way that relates to, but is not the same as, the technical sense. So in answer to a question like, "At what setting does LAME get to be transparent?" (which in some contexts would be a sensible enough practical question) you might say "V 3," meaning "V 3 for most people with most material most of the time." On HA people are careful to include the qualifiers, being conscious of the technical sense; in a non-technical setting the qualifiers tend to get dropped, since there's a danger of good advice being disregarded because of the MEGO factor.
IME, we are all likely to slip into the trap of treating ordinary-language words as though they were explicitly defined technical terms.
I don't ever recall there being a problem with the meaning of the word as it relates to how it is used on this forum.
transparent = perceptually indistinguishable
I take it that, in a technically defined sense, "transparent" means that in a particular (series of) test(s), particular listener(s) were unable to tell the difference between specific lossily-coded sample(s) and their original(s). Even in this defined sense, transparency can't be proved, since individual listeners can have good days and bad days.
I would be tempted to read a corollary from this:
You will always be able to find someone, somewhere, somehow who will experience perceptible degradation with any lossy coding system.
So transparency depends on the listener's abilities including listening conditions and the sample(s) he listens to. It's personal.
A practical definition of a codec's transparency could mean that there was no listener so far who was able to successfully ABX a sample. Sure the quality of this kind of transparency depends heavily on the number of listeners (and their ABXing abilities) trying hard to find non-transparent (in the personal sense above) samples.
As for the graph, simple words about the diminishing-returns fact would be more honest, but people like graphics. It's a good exercise never to simply believe in graphics. People can be easily fooled by graphics (for instance by a suggestive scale) even when the graphic is formally correct.
In our case it's clear that the quality scale can't have a real meaning; it is chosen arbitrarily, at the author's will, to demonstrate the diminishing-returns phenomenon.
I don't ever recall there being a problem with the meaning of the word as it relates to how it is used on this forum.
transparent = perceptually indistinguishable
I didn't say there was a problem here; in fact, if you were actually to read things, you would see I explicitly made an exception for here. There is a problem elsewhere.
BTW, perceptually indistinguishable to whom, under what circumstances, always, sometimes?
BTW, perceptually indistinguishable to whom, under what circumstances, always, sometimes?
To quote your previous post:
"transparent" means that in a particular (series of) test(s), particular listener(s) were unable to tell the difference between specific lossily-coded sample(s) and their original(s).
That or in the general sense: a given setting producing lossy encodes indistinguishable from their sources for most material, as hopefully represented in a subset; when listened to by most individuals, as hopefully represented by the cross-section of users participating in the sum total of listening tests previously performed on that setting.
I mean, you specifically qualified both of these, so are we just nit-picking for its own sake now? All this really shows is that both listening and language are subjective. So? I would say that Hydrogenaudio and all other similarly evidence-based initiatives are doing a pretty good job despite these inherent and ineradicable limitations, if you want to call them that.
Nitpicking indeed!
I'm being asked to qualify something in a post with information I provided only two posts prior? It's also being suggested that I'm having difficulty with reading comprehension? Now that would be pretty rich.
A practical definition of a codec's transparency could mean that there was no listener so far who was able to successfully ABX a sample. Sure the quality of this kind of transparency depends heavily on the number of listeners (and their ABXing abilities) trying hard to find non-transparent (in the personal sense above) samples.
You'd probably also want to put some constraints on the test material. I believe it is easy to contrive an artificial "killer sample" that would crowbar an encoder.
This graph really, really needs to go! A graph with an axis labeled "quality" with nothing more than a single developer's whimsical numerical evaluations of quality as data, along with the sizes of one album for file size? This is Hydrogenaudio. We should really hold ourselves to a higher standard. I know it's informal, but I've just had a big argument with someone over on another forum regarding this bloody graph. With all due respect to Gabriel's numbers, I still think we need to pull it. I'm gonna go ahead and do just that. If anyone feels the need to revert it, please do so, but I'd like to hear reasons why you think it should stay. The graph is at odds with everything that I value in this community.
Edit: Also, I'm moving this to Wiki Discussion.
My perception is that -V0 is equal in quality to -b 320.
For some (maybe less than 1%) audio files I can ABX v0 vs lossless or b320 vs lossless, but I can never ABX v0 vs b320.
I would like to listen to some killer samples showing that b320 is better than v0.
Do you mind removing that graph from the site? People are still able to link and cite the graph as fact elsewhere, and I'm really sick of seeing it.
The graph was already taken down. Are you suggesting that someone remove it from the discussion that was linked? If so, this is not going to happen.