1

##### FLAC / Re: EAC Task Completed ALWAYS finishes on 99.9%

Last post by **chamill**

That's the correct setting. I now receive 100% task complete. Perfect!


2

The initial conversation started with my friend pointing out that "it's interesting how we are so much better at frequency separation using our ears than through vision. Even though sound is one-dimensional, we are still better at separating two different sounds (like two different notes or instruments) than we are at separating two different frequencies (or wavelengths; I don't know if the terms are comparable and can be used interchangeably) through vision, as they merge to form a separate color." My position was that we are indeed good at separating wavelengths into individual colors, but he disagreed. But that's for a different topic, I suppose. Also, this was after a few beers, so I might be quoting it incorrectly. Sounds like a fun conversation to have at a music festival, right?

Sound is considered one-dimensional because it is a one-dimensional function. I.e. the variable determining the value (amplitude or level) is referenced by only one scalar, in our case time. And as we all know, time is one-dimensional, in our perception of the world, anyhow.

So, to sum up how I understand it: sound is considered one-dimensional since, at any given point in time, it can only have one value. Sorry if this became an ELI5-type situation, but I am glad to see that this has started a discussion among others.

What your friend says is, however, true when it comes to frequency separation in hearing versus vision. But we have to consider each difference as a fraction of our aural or visual range, because human hearing and vision aren't linear: a change in tone between 50 Hz and 60 Hz is quite noticeable, while a change between 3000 Hz and 3010 Hz isn't.
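To make the 50→60 Hz versus 3000→3010 Hz comparison concrete, here is a minimal sketch (the function name `cents` and the sample values are mine, not from the thread) that expresses each step as a musical interval in cents, since pitch perception is roughly logarithmic:

```python
import math

def cents(f1, f2):
    """Interval between two frequencies in cents (1200 cents = one octave).
    Pitch perception is roughly logarithmic, so equal cent values sound
    like roughly equal-sized steps."""
    return 1200 * math.log2(f2 / f1)

# The same 10 Hz step is enormous at the low end and tiny at the high end:
low = cents(50, 60)        # ≈ 316 cents, more than three semitones
high = cents(3000, 3010)   # ≈ 5.8 cents, far less than one semitone
```

Seen this way, the two changes aren't "10 Hz each"; one is a leap and the other is barely a nudge.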

It's also important to note that it's quite difficult to discern mixed signals aurally. Add 1 kHz, 1.1 kHz, and 1.2 kHz together, and people will have a hard time separating these three frequencies out of a single chunk of sound. In vision, however, our brain has only three different frequency bands to work with, and mixes them rather nicely to create a color gamut. So much so that it's easier for us to describe a color by its hue, brightness, and saturation than by its red, green, and blue values.

Our vision isn't linear, either. Our perception of blue is much weaker than that of green and red, and resolution doesn't line up nicely either: most of our sharpness is in the green. The spectral widths and the spacing of the cone responses aren't linear or equal, either; the frequency responses of the blue and green cone cells are much closer to each other than those of the green and red cone cells. And to make matters worse, our perception changes yet again in low-light conditions, because rod cells respond more strongly to blue-ish light than to light further down the spectrum. In low light we see "better" with green-blue-ish light, while in bright light we see better with green/red-ish light.
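A small sketch of the three-tone example above, assuming NumPy is available (the sample rate and variable names are mine): mix 1 kHz, 1.1 kHz, and 1.2 kHz sines, then let an FFT do the separation the ear struggles with.

```python
import numpy as np

fs = 8000                      # sample rate in Hz (one second of audio)
t = np.arange(fs) / fs         # time stamps for each sample
# Mix the three tones from the text: 1 kHz, 1.1 kHz, 1.2 kHz
signal = (np.sin(2 * np.pi * 1000 * t)
          + np.sin(2 * np.pi * 1100 * t)
          + np.sin(2 * np.pi * 1200 * t))

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(fs, d=1 / fs)   # frequency of each FFT bin, in Hz
# The three strongest bins recover the individual components:
peaks = sorted(freqs[np.argsort(spectrum)[-3:]])
# peaks → [1000.0, 1100.0, 1200.0]
```

The math separates the mixture trivially; it's our perception, not the signal, that fuses them into one sound.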

3

Or, put very simply: a linear array of data has one dimension, as it requires only a single number to specify which piece of data is being considered. The data contained within the array is irrelevant to the dimensionality of the array itself:
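For instance, a minimal sketch (the example values are hypothetical): one index selects an element, regardless of what the elements are.

```python
# A one-dimensional array: a single scalar index selects an element.
samples = [0.0, 0.3, 0.9, 0.3, -0.5]   # hypothetical amplitude values
x = samples[2]                          # one number addresses the data

# The elements could even be triples; the array stays one-dimensional,
# because addressing still needs only a single index.
pixels = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]
p = pixels[1]
```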


4

Still no honey.

Success or failure probably depends on the devices involved (and on how each brand implements DLNA).

Any Samsung experience around?

5

6

@Marc: THANKS VERY MUCH FOR SOLVING THIS!!

PS: solved with V2.1.6 of foo_jscript_panel, see here

7

8

f(x) = x² returns 4 for both x = 2 and x = -2.

9

If that is your idea of an image, then I would say that a point sound source - or a "point" as a model for an eardrum - would be 0D rather than 1D ...

I'm not sure this is helping, but that aside, "0D" is a bit conflated in mathematics; if something has no dimension, it is simply non-dimensional, or: scalar. A point has no dimensional attributes. A point might be addressed by coordinates, but that describes the space it sits in, not the point itself.

But rather than claiming "0D", I would say that your model of an "image" might be wrong, or at least not in line with your model of sound.

Each point in the image carries a compound of (time-) frequencies. So: if you insist on "time" in a sound waveform, why don't you insist on time in the light waveform?

There are at least two answers to that latter question. 1: In how the human eye projects colour down to a triplet. But that is how humans work, not what is emitted. 2: In that you think that sound changes over time; "music", not just "chord". But then the analogy should be motion picture rather than image.

For instance you can map one 2D space into another, a common example is the conversion of polar coordinates into Cartesian coordinates, and back. Cartesian coordinates are points defined by x and y, while polar coordinates are defined by r and θ (where θ is an angle).

To convert from polar to Cartesian, you'd do: f(r, θ) = {r * cos(θ), r * sin(θ)} → {x, y}

To convert from Cartesian to polar, you'd do: g(x, y) = {sqrt(x² + y²), atan(y/x)} → {r, θ}

I.e. both functions take two values and return two values: one 2D point in, one 2D point out.
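The two conversions above can be sketched directly (function names are mine; I've used `atan2` rather than `atan(y/x)` so all four quadrants work, and `hypot` for the square root):

```python
import math

def polar_to_cartesian(r, theta):
    """f(r, θ) = (r·cos θ, r·sin θ): a 2D point in, a 2D point out."""
    return r * math.cos(theta), r * math.sin(theta)

def cartesian_to_polar(x, y):
    """g(x, y) = (√(x² + y²), atan2(y, x)): the inverse mapping."""
    return math.hypot(x, y), math.atan2(y, x)

# Round trip: convert a polar point to Cartesian and back.
x, y = polar_to_cartesian(2.0, math.pi / 6)
r, theta = cartesian_to_polar(x, y)
# r ≈ 2.0 and θ ≈ π/6 again, up to floating-point error.
```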

In terms of an RGB color bitmap image, you could say that each (x, y) pixel coordinate returns three values: r, g, b. Of course we can map this triple onto a linear scale (since the ranges are limited in practical color spaces), but theoretically the color space is continuous and cannot be fully captured by a fixed gamut like 24-bit color. So in these terms, the pixel coordinates in a color picture return a three-dimensional value. In a grayscale image, where each pixel is just one number, each pixel coordinate returns a scalar value.

Each higher-order value can be composed of an arbitrary number of dimensions, including scalars. In the case of an RGB color image, each two-dimensional pixel coordinate, of which each component is a scalar, maps to a three-dimensional value, where each component of that value is a scalar as well.
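As a minimal sketch of that mapping (the image contents and function name are made up): a 2D coordinate returns a 3D value in the color case, and a scalar in the grayscale case.

```python
# A tiny 2-by-2 RGB "image": each 2D pixel coordinate maps to a 3D value.
image = [
    [(255, 0, 0), (0, 255, 0)],      # row 0: red, green
    [(0, 0, 255), (128, 128, 128)],  # row 1: blue, grey
]

def pixel(img, x, y):
    """Two scalar inputs (x, y) return one three-component value (r, g, b)."""
    return img[y][x]

# A grayscale image maps the same coordinates to a single scalar instead:
grey = [[sum(px) // 3 for px in row] for row in image]
```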


Higher-order objects also exist, such as hypercubes in 4D space. Anything of a higher order than a point is a set of points.

I believe this is kind of where the OP's confusion comes from. Plotting a waveform is essentially a function that maps all values of a one-dimensional discrete function onto a two-dimensional discrete plane, where each valid point in the mapped function is assigned one color, and each invalid point gets none (background).
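That mapping can be sketched as a toy rasterizer (the sample values and grid size are made up): each sample index becomes a column, the sample's value picks a row, and every other cell stays "background".

```python
# Rasterise a 1D sample sequence onto a 2D character grid: the waveform
# plot is a map from a 1D function into a 2D plane.
samples = [0, 2, 3, 2, 0, -2, -3, -2]   # a coarse, hypothetical wave
height = 7                               # rows covering values -3 .. +3

grid = [[" "] * len(samples) for _ in range(height)]
for x, value in enumerate(samples):
    y = (height // 2) - value            # value 0 lands on the middle row
    grid[y][x] = "*"                     # "valid" points get a mark (color)

for row in grid:
    print("".join(row))
```

The 1D data hasn't gained a dimension; we've merely drawn its graph in a 2D space.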

Having said that, the statement "Sound is one-dimensional" is incredibly ambiguous. In terms of signal definitions it is, but in terms of propagation in space it isn't. So, yeah...

10