Topic: Saving music from destruction: upsampling (Read 29573 times)


Reply #25
Quote
The benefits of a dedicated yellow subpixel are dubious.


Whoa, hold the phone there. If you look at the color rendering space of standard RGB, it cannot reach some of the "yellow" colors. It's due simply to the spectra of the various emitters and the sensitivity of your eye.

Now, in order for it to work, you do need the information sent from one end to the other, and therein is a problem, but the benefits of having a dedicated yellow phosphor that can fill in the rest of the color space (well, except for some very far indigo and red, which are also missing in standard RGB, but they aren't common in scenes either, unlike yellow) do exist IF you have the information to know when to use them.
-----
J. D. (jj) Johnston


Reply #26
Quote
Whoa, hold the phone there. If you look at the color rendering space of standard RGB, it cannot reach some of the "yellow" colors. It's due simply to the spectra of the various emitters and the sensitivity of your eye.

Now, in order for it to work, you do need the information sent from one end to the other, and therein is a problem, but the benefits of having a dedicated yellow phosphor that can fill in the rest of the color space (well, except for some very far indigo and red, which are also missing in standard RGB, but they aren't common in scenes either, unlike yellow) do exist IF you have the information to know when to use them.

A consumer television is not marketed for the science lab, but for showing pictures off of optical media, tv broadcasts, etc. I am very sceptical about the current value of a "yellow" pixel in that context.


-k


Reply #27
You do understand that the human eye has no yellow receptors? The spectral response of the red and green receptors overlaps, so "yellow" light stimulates both, which your brain interprets as yellow. Your brain gets exactly the same response if the eye is stimulated by red and green together.
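The metamerism described above can be sketched numerically. The Gaussian cone sensitivity curves below are made-up stand-ins for the real ones, and the wavelengths are chosen purely for illustration:

```python
import math

def gauss(nm, peak, width=40.0):
    """Toy Gaussian cone sensitivity centred at peak (nm). Not real data."""
    return math.exp(-((nm - peak) / width) ** 2)

def lms(spectrum):
    """Toy L, M, S cone responses for a spectrum of (wavelength, power) pairs."""
    return tuple(sum(p * gauss(nm, peak) for nm, p in spectrum)
                 for peak in (565.0, 540.0, 445.0))

# A single monochromatic "yellow" line at 580 nm...
target = lms([(580.0, 1.0)])

# ...and a red (610 nm) + green (532 nm) mixture, with powers solved
# from a 2x2 linear system so the L and M responses match exactly.
aL, aM = gauss(610, 565), gauss(610, 540)
bL, bM = gauss(532, 565), gauss(532, 540)
det = aL * bM - aM * bL
red   = (target[0] * bM - target[1] * bL) / det
green = (aL * target[1] - aM * target[0]) / det
mix = lms([(610.0, red), (532.0, green)])
# mix matches target in the L and M channels: two physically different
# spectra that stimulate the cones (nearly) identically.
```

Both lights produce the same L and M responses (and near-zero S), so in this toy model they are indistinguishable, which is all "yellow" ever means to the eye.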


Reply #28
Quote
The benefits of a dedicated yellow subpixel are dubious.

Whoa, hold the phone there. If you look at the color rendering space of standard RGB, it cannot reach some of the "yellow" colors. It's due simply to the spectra of the various emitters and the sensitivity of your eye.

Now, in order for it to work, you do need the information sent from one end to the other, and therein is a problem, but the benefits of having a dedicated yellow phosphor that can fill in the rest of the color space (well, except for some very far indigo and red, which are also missing in standard RGB, but they aren't common in scenes either, unlike yellow) do exist IF you have the information to know when to use them.


Yes, but then we're heading into the same territory as 16 vs 24 bit, and 44.1 vs 96 kHz, aren't we?

But truth be told, I currently don't have an LCD or similar modern TV (this ye olde CRT does just fine; I watch little TV), and the only problems I'm aware of when I see someone else's huge flat TV are hyper-saturation and completely overdriven sharpness. I don't think the subtleties of alleged better yellow detail will be apparent under such circumstances.


Reply #29
Quote
You do understand that the human eye has no yellow receptors? The spectral response of the red and green receptors overlaps, so "yellow" light stimulates both, which your brain interprets as yellow. Your brain gets exactly the same response if the eye is stimulated by red and green together.

The tristimulus curves that have been defined as the "official" human response cannot be recreated by most display/printer technology, and certainly not by the common transmission standards. I don't know the exact reason, but I think it could be that it is hard to design spectral filters with the exact required response - or that filters sharp enough would filter out too much light, resulting in reduced maximum brightness and power efficiency.

Any mechanism that allows a better coverage of the CIE space could/should lead to some improvement in perception - if it is implemented end-to-end.



-k


Reply #30
Sharp Quattron review. Ouch.

Quote
The unfortunate truth is that widened colour gamuts have almost no real-world use in consumer TVs.
"I hear it when I see it."



Reply #32
Quote
You do understand that the human eye has no yellow receptors? The spectral response of the red and green receptors overlaps, so "yellow" light stimulates both, which your brain interprets as yellow. Your brain gets exactly the same response if the eye is stimulated by red and green together.



Yes. Now you do see the color space plotted below, yes?
-----
J. D. (jj) Johnston


Reply #33
All I'm saying is that since the human eye has only three kinds of color sensors, it should be possible to combine three monochromatic light sources to produce any color that the human eye can perceive. If your light sources are not monochromatic then there may be some advantage in using more light sources in order to overcome their limitations.


Reply #34
I know this isn't audio-related, and as such not directly subject to TOS#8, but I no longer find graphs all that compelling, especially when my RGB monitor can't even display them.


Reply #35
There seems to be some confusion that because the eye "only sees 3 colors" you can represent any color with any 3 other colors. This would be true on the conditions that you had very well chosen "other 3 colors" and that the eye really did only see 3 colors. However, that interpretation of the eye is rather simplistic and naive. The eye does have 3 different types of chromatic sensors, but these are not just trivial "it tells you how much red a given color has" sensors. Each sensor outputs a value for any given input, and these values vary with the input light frequency. Shown in this graph are these response curves, color coded to what you typically see people calling the Red, Green, and Blue sensors as defined in the CIE standard (what is most commonly believed to be the most accurate description of color).

[image: response curves of the CIE color matching functions]
If you have these response curves, you can treat any color as a point in 3-space in terms of these stimulus values (aka "tristimulus values"). When plotted in 3-space, the resulting volume is all visible color. Because this is hard to represent graphically and not frequently needed, color gamuts were introduced to give a two-dimensional representation of a color space's expressible range (a color space being the full range of all colors displayable by a device). These basically take a slice through that 3D volume of color in such a way that all colors in that plane have the same brightness, so only chromaticity is displayed. The pictures posted above are graphical representations of 2 color gamuts. The outer, horseshoe-shaped one (which really can't be colored correctly on a monitor, or even in RGB space itself) is the CIE color space - all colors that the normal human eye can see.
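The slice just described is a projection: the usual xy chromaticity coordinates divide brightness out of the tristimulus values. A minimal sketch (the D65 white-point numbers are standard values):

```python
def xy_chromaticity(X, Y, Z):
    """Project tristimulus values onto the chromaticity plane:
    x = X/(X+Y+Z), y = Y/(X+Y+Z); brightness (Y) is divided out."""
    s = X + Y + Z
    return X / s, Y / s

# D65 white point tristimulus, normalised so Y = 1.
x, y = xy_chromaticity(0.9505, 1.0, 1.089)
# x, y come out near (0.3127, 0.3290), the familiar D65 chromaticity.
```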

edit: since your monitor can't actually display many of the colors, a more accurate representation of this gamut may be this:
[image: CIE chromaticity diagram shown in gray, with the monitor's RGB triangle inside]
where the gray is colors at this brightness that your eyes can see, and what your monitor can display is what's in the triangle.

If your color space is defined by a linear combination of 3 colors (i.e. R + G + B), then the entire color gamut can be drawn as a triangle with vertices at the chosen 3 colors. The internal triangle in the above posted color gamut is that of the RGB color space used to express colors on digital displays (note here that individual displays will actually color things differently, and that most displays can only draw 262k of the 16.7M colors described by the integer 0-255 RGB scale, since their panels use 6 bits per channel). Note how the RGB color gamut is a triangle because it's a linear combination of 3 colors, whereas the CIE gamut has a much different shape because it is built from much more specific combinations of colors (via its color matching functions, which aren't linear).

Hopefully by now it's clear why digital displays (or any display or printer to date, for that matter) don't display the full range of chromaticity that our eyes can pick up. So where does this yellow come in? What if your color space is a linear combination of 4 colors instead of 3? Your color gamut would be the smallest convex shape that encloses all of those colors. Clearly, adding another color outside of your RGB color gamut can expand the range of colors displayable. So, why yellow? If you look at the response curves of the color matching functions, you'll see that our eyes are particularly sensitive to yellow, making yellow an easy color to use here.
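The gamut growth from a fourth primary can be sketched with a shoelace-area comparison. The sRGB primary chromaticities below are the standard values, but the yellow primary is a made-up coordinate for illustration, not Sharp's actual one:

```python
def polygon_area(pts):
    """Shoelace area of a simple polygon given vertices in order."""
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2.0

R, G, B = (0.64, 0.33), (0.30, 0.60), (0.15, 0.06)  # sRGB primaries (xy)
Y = (0.46, 0.53)  # hypothetical yellow primary outside the R-G edge

rgb_area  = polygon_area([R, G, B])
rgby_area = polygon_area([R, Y, G, B])  # vertices listed in hull order
# rgby_area > rgb_area: the four-primary gamut strictly contains the triangle.
```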

All this being said, there is still very little use for a yellow primary color in your display. Why? Because your source material is still in either PC RGB or NTSC RGB, etc. Despite the display's ability to show a wider range of colors, no new colors are actually being asked for. I guess there may be some tiny advantage when using a color space not displayed by a normal TV - you might get a few more colors out of PC RGB compared to a TV that only draws NTSC RGB (but I don't actually know if these two are different, so it may very well be a completely moot point). All in all, yes, the display can show more colors that ARE seeable by the human eye; in the end, though, it doesn't actually matter if you have no source material using those colors.

I may not be very good at explaining this; if you want to read more, Wikipedia has good articles on the CIE color space and color gamut pages.


Reply #36
Quote
If you have these response curves, you can treat any color as a point in 3-space in terms of these stimulus values (aka "tristimulus values"). When plotted in 3-space, the resulting volume is all visible color. Because this is hard to represent graphically and not frequently needed, color gamuts were introduced to give a two-dimensional representation of a color space's expressible range (a color space being the full range of all colors displayable by a device). These basically take a slice through that 3D volume of color in such a way that all colors in that plane have the same brightness, so only chromaticity is displayed. The pictures posted above are graphical representations of 2 color gamuts. The outer, horseshoe-shaped one (which really can't be colored correctly on a monitor, or even in RGB space itself) is the CIE color space - all colors that the normal human eye can see.

Is it not true that the CIE space necessitates "negative" power in order to be covered using 3 physical primaries, in both camera and display?

My interpretation of this curve (although I think it is a bit hard to analyze without having bothered to understand the mathematics) is that one would need an infinite number of "primaries", linearly added to each other, in order to strictly capture and reproduce every single color nuance that the HVS can distinguish, but that triangles are good enough (partially because very saturated colors are uncommon in nature, and because not having them does not cause too much annoyance).
Quote
All this being said, there is still very little use for a yellow primary color in your display. Why? Because your source material is still in either PC RGB or NTSC RGB, etc. Despite the display's ability to show a wider range of colors, no new colors are actually being asked for. I guess there may be some tiny advantage when using a color space not displayed by a normal TV - you might get a few more colors out of PC RGB compared to a TV that only draws NTSC RGB (but I don't actually know if these two are different, so it may very well be a completely moot point). All in all, yes, the display can show more colors that ARE seeable by the human eye; in the end, though, it doesn't actually matter if you have no source material using those colors.

If you are showing still images from your PC that have been captured in the native camera space, and you have profiled your screen, then the image rendering software has all the information it needs to do a good color reproduction - potentially better than sRGB (limited by camera and screen, but not by arbitrary distribution standards).

I believe that certain home-video cameras support AVCHD and "deep color" through HDMI.

Someone wants Blu-ray + HDMI to be the killer color app. But I think that the combination of available content and available standards means that we are not there yet.

-k


Reply #37
Quote
Is it not true that the CIE space necessitates "negative" power in order to be covered using 3 physical primaries, in both camera and display?

My interpretation of this curve (although I think it is a bit hard to analyze without having bothered to understand the mathematics) is that one would need an infinite number of "primaries", linearly added to each other, in order to strictly capture and reproduce every single color nuance that the HVS can distinguish, but that triangles are good enough (partially because very saturated colors are uncommon in nature, and because not having them does not cause too much annoyance).


If you pick any 3 points and take a convex combination (coefficients between 0 and 1, summing to 1), you can only get a triangle; if you allow negative amounts of some of the colors, you can reach any point in the plane. So to use a convex combination of colors to describe all of CIE, you would either need to reference colors that we cannot see, or use every border color (an infinite number). But yes, the sRGB color space does tend to give enough colors to be reasonable (I don't know the exact reasoning why).
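The "negative amounts" point can be checked directly: solving for the weights of the three sRGB primaries (standard chromaticities) that hit a saturated yellow outside the triangle gives a negative blue weight. The target point below is an arbitrary illustration:

```python
def barycentric(p, a, b, c):
    """Weights (wa, wb, wc), summing to 1, that combine triangle vertices
    a, b, c into point p; a negative weight means p lies outside."""
    det = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
    wa = ((b[1] - c[1]) * (p[0] - c[0]) + (c[0] - b[0]) * (p[1] - c[1])) / det
    wb = ((c[1] - a[1]) * (p[0] - c[0]) + (a[0] - c[0]) * (p[1] - c[1])) / det
    return wa, wb, 1.0 - wa - wb

R, G, B = (0.64, 0.33), (0.30, 0.60), (0.15, 0.06)  # sRGB primaries (xy)
saturated_yellow = (0.46, 0.53)  # chromaticity outside the R-G edge

wr, wg, wb = barycentric(saturated_yellow, R, G, B)
# wb comes out negative: no nonnegative mix of these three primaries
# reaches this colour - the "negative power" problem in concrete numbers.
```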

Quote
If you are showing still images from your PC that have been captured in the native camera space, and you have profiled your screen, then the image rendering software has all the information it needs to do a good color reproduction - potentially better than sRGB (limited by camera and screen, but not by arbitrary distribution standards).

I believe that certain home-video cameras support AVCHD and "deep color" through HDMI.

Someone wants Blu-ray + HDMI to be the killer color app. But I think that the combination of available content and available standards means that we are not there yet.

-k


I didn't realize HDMI could transport a larger-than-"expected" color space. If that's the case, then I guess it is possible to make use of the larger color gamut, but it still won't be present in TV, DVDs/Blu-ray, or games.


Reply #38
Quote
I didn't realize HDMI could transport a larger-than-"expected" color space. If that's the case, then I guess it is possible to make use of the larger color gamut, but it still won't be present in TV, DVDs/Blu-ray, or games.

HDMI 1.3 and later support "deep color" and xvYCC:
http://en.wikipedia.org/wiki/HDMI#Version_1.4
Quote
Deep color is a term used to describe a gamut comprising a billion or more colors

Quote
In a paper published by Society for Information Display in 2006, the authors mapped the 769 colors in the Munsell Color Cascade to the BT.709 space and to the xvYCC space. 55% of the Munsell colors could be mapped to the sRGB gamut, but 100% of those colors could map to the xvYCC gamut.[4] Deeper hues can be created - for example a deeper red by giving the opposing color (cyan) a negative coefficient.
...
xvYCC is not supported by DVD-Video or Blu-ray, but is supported by the high-definition recording format AVCHD and PlayStation 3.


Reply #39
Quote
xvYCC is not supported by DVD-Video or Blu-ray, but is supported by the high-definition recording format AVCHD and PlayStation 3.
That's not really true, as is hinted further up the Wikipedia article - xvYCC is supported by any and all conventional 16-240 YUV digital video formats (DVD, DVB, SDI, you name it): just use the values outside this range with the matrices defined for xvYCC.

The problem is that signalling to say "this is xvYCC" is not defined for most of those formats. The thing is, though: the 16-240 range is the same as it's always been, so treating everything as xvYCC shouldn't be a problem in theory. In practice, I bet some displays do choose to treat it differently, with one or the other representation tweaked in a not strictly accurate way.

Cheers,
David.
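A sketch of the trick described above: decode ordinary studio-range BT.709 Y'CbCr with the standard matrix coefficients, and chroma codes outside 16-240 land outside the [0, 1] RGB cube. The sample code values are invented for illustration:

```python
def bt709_to_rgb(Y, Cb, Cr):
    """Decode 8-bit studio-range Y'CbCr (BT.709 coefficients) to nominal-range R'G'B'."""
    y  = (Y - 16) / 219.0    # luma: 16-235 -> 0..1
    pb = (Cb - 128) / 224.0  # chroma: 16-240 -> -0.5..0.5
    pr = (Cr - 128) / 224.0
    r = y + 1.5748 * pr
    g = y - 0.1873 * pb - 0.4681 * pr
    b = y + 1.8556 * pb
    return r, g, b

in_range  = bt709_to_rgb(140, 80, 70)  # Cr within the nominal 16-240 range
out_range = bt709_to_rgb(140, 80, 8)   # Cr below 16: xvYCC territory

# out_range has a red component below zero: it encodes a green more
# saturated than the RGB primaries themselves can reach.
```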


Reply #40
Cool, an off-topic discussion about color vision!

The chart SCOTU posted doesn't show the sensitivity of the cones; it shows the color matching functions that are used in the CIE XYZ color space's "standard observer".

Here's the normalized response spectra for the three different visual receptors (cone cells):

[image: normalized response spectra of the S, M, and L cones]
The short-wavelength cones' sensitivity peaks nearer to violet, the middle ones peak more or less at green, and the long ones at yellow, with a large overlap with the middle ones.

The problem with making a trichromatic display that covers the whole gamut of visible color is that the red and blue primaries have to be so close to the edges of the visible spectrum (so that they stimulate only the long and short receptors, respectively) that they have to be extra bright to make up for the eye's reduced sensitivity there. Lasers are one option, but then you get problems with speckle patterns.


Reply #41
Quote
xvYCC is not supported by DVD-Video or Blu-ray, but is supported by the high-definition recording format AVCHD and PlayStation 3.
That's not really true, as is hinted further up the Wikipedia article - xvYCC is supported by any and all conventional 16-240 YUV digital video formats (DVD, DVB, SDI, you name it): just use the values outside this range with the matrices defined for xvYCC.

The problem is that signalling to say "this is xvYCC" is not defined for most of those formats. The thing is, though: the 16-240 range is the same as it's always been, so treating everything as xvYCC shouldn't be a problem in theory. In practice, I bet some displays do choose to treat it differently, with one or the other representation tweaked in a not strictly accurate way.

Cheers,
David.

If wikipedia is wrong on this, someone should fix it.

When you play back DVD/Blu-ray on your PC (and perhaps on some embedded boxes?), the video may be converted umpteen times between 16-235/240, 0-255, YCbCr and sRGB - depending on the drivers, OS, hardware, interface, etc. There are guides for how to "hack" your Nvidia/ATI driver into messing up as little as possible - and often they only work for either SD or HD media using specific application software. Bah... What are the odds that ALL of those conversions will pass the codes outside the regular range unharmed? To be shown on an LCD display with a highly non-gamma-ish native response and 6 native bits, dithering the 8-bit input. And many intermediate calculations are bound to be requantized to 8 bits, probably without any dithering.

Even though a full 8 bits (or is it 255-2 codes?) are allowed, displays and lossy codecs are allowed to do whatever they want to them. I do believe that xvYCC needs signalling and certification to be of practical use.

-k
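The requantisation worry above is easy to quantify. A minimal sketch, assuming plain rounding with clipping at each step (what a careless conversion does): one full-range to studio-range trip merges codes, and expansion destroys everything above 235 or below 16:

```python
def studio_to_full(v):
    """Expand studio-range (16-235) to full-range (0-255), clipping."""
    return min(255, max(0, round((v - 16) * 255 / 219)))

def full_to_studio(v):
    """Squeeze full-range (0-255) into studio-range (16-235), clipping."""
    return min(255, max(0, round(v * 219 / 255 + 16)))

# "Super white" codes above 235 are all flattened by one expansion:
flattened = {studio_to_full(v) for v in range(236, 256)}  # every one becomes 255

# A full -> studio -> full round trip leaves only 220 distinct codes out of
# 256: at least 36 pairs of neighbouring codes get merged along the way.
distinct = len({studio_to_full(full_to_studio(v)) for v in range(256)})
```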


Reply #42
Quote
xvYCC is not supported by DVD-Video or Blu-ray, but is supported by the high-definition recording format AVCHD and PlayStation 3.
That's not really true, as is hinted further up the Wikipedia article - xvYCC is supported by any and all conventional 16-240 YUV digital video formats (DVD, DVB, SDI, you name it): just use the values outside this range with the matrices defined for xvYCC.

The problem is that signalling to say "this is xvYCC" is not defined for most of those formats. The thing is, though: the 16-240 range is the same as it's always been, so treating everything as xvYCC shouldn't be a problem in theory. In practice, I bet some displays do choose to treat it differently, with one or the other representation tweaked in a not strictly accurate way.

Cheers,
David.

If wikipedia is wrong on this, someone should fix it.

When you play back DVD/Blu-ray on your PC (and perhaps on some embedded boxes?), the video may be converted umpteen times between 16-235/240, 0-255, YCbCr and sRGB - depending on the drivers, OS, hardware, interface, etc. There are guides for how to "hack" your Nvidia/ATI driver into messing up as little as possible - and often they only work for either SD or HD media using specific application software. Bah... What are the odds that ALL of those conversions will pass the codes outside the regular range unharmed? To be shown on an LCD display with a highly non-gamma-ish native response and 6 native bits, dithering the 8-bit input. And many intermediate calculations are bound to be requantized to 8 bits, probably without any dithering.

Even though a full 8 bits (or is it 255-2 codes?) are allowed, displays and lossy codecs are allowed to do whatever they want to them. I do believe that xvYCC needs signalling and certification to be of practical use.
I wouldn't dream of claiming that most PCs handle the standard Rec.601 and 709 colour spaces correctly, never mind xvYCC. They often clip the range, introduce banding, and output 0-255 sRGB.

But if you're talking about stand-alones, with either analogue or digital connections in YUV space, most just send what's on the disc. I've tested this.

Lossy codecs don't treat values outside the "valid" range any differently from values within it. Some VFW codecs specifically require RGB or YUV, which doesn't help, but as long as a codec accepts YUV, it doesn't care whether a pixel is 15 or 17 - I've never found an encoder that clamps internally.

FWIW most consumer camcorders generate luma (Y) over the range 16-255 - i.e. making full use of the "super white" range above 235. So handling out of range values isn't a new problem. It goes back to the analogue days.

Cheers,
David.


Reply #43
Quote
FWIW most consumer camcorders generate luma (Y) over the range 16-255 - i.e. making full use of the "super white" range above 235. So handling out of range values isn't a new problem. It goes back to the analogue days.

But handling as in "do not crash, burn or show purple dancing dots" is not the same as "taking advantage of to show superior images that would otherwise be impossible". For many (most?) end-users, you might substitute a clipper early in the distribution chain for a clipper late in the distribution chain, adding some redundant information that makes life a little harder for codecs.

I think that the headroom/footroom was added to allow for linear processing by scalers etc. that might be daisy-chained, analog as well as digital.

I do agree that it has some nice properties for introducing superior quality in a "soft" way, where end-users can upgrade components one at a time and at some point in the future gain the full benefit. How large is that benefit anyway? 32/255 is not a whole lot of added information, but perhaps coarse, right-ish information is enough for the bright highlights, the very dark shadows, and the very saturated colors?

-k



Reply #44
Consumer camcorders don't do it to look genuinely better, just brighter. It's a bit like the "louder = better" thing.

The headroom was to allow for inevitable overshoots due to sharpening, processing etc.

The xvYCC use of the chroma headroom is clever. The way they defined the curve below 16 (and the way this creates "negative" RGB values) makes for a surprisingly large area of extra colour - though very few images really need it.

Cheers,
David.