Topic: xiphmont’s ‘There is no point to distributing music in 24 bit/192 kHz’

xiphmont’s ‘There is no point to distributing music in 24 bit/192 kHz’

Reply #100
So if you take your "jagged" join-the-dots line and join the dots with a sine wave curve, you'll find that there's exactly one curve that will join all the dots: one with the same frequency as the original frequency. Drawing that curve is exactly what the D to A reconstruction filter does.


No disagreements there. I was just wondering how it can perfectly reproduce the wave when the sample points have to be approximated: the ideal value for each one lies at the end of an infinite series of decimals, so naturally they get truncated at some point. What I'm wondering is whether this truncation implies any errors in the reproduction, either in amplitude or in the resulting frequency itself.

The deviations can't be that large since obviously PCM works, but I'm wondering how large they can be for those frequencies that creep up closer to the given sampling rate.

xiphmont’s ‘There is no point to distributing music in 24 bit/192 kHz’

Reply #101
If you were to look at the waveform in an editor that actually assumes proper reconstruction, you would in fact see a proper sine wave regardless of the frequency.  This is all about the specific application you used to display the waveform.

Your application is displaying it incorrectly!
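For anyone who would rather see this than take it on faith, here is a minimal sketch of the ideal (Whittaker–Shannon) reconstruction that such an editor approximates. Python with numpy/matplotlib is assumed, and the 13001 Hz tone is just an example frequency well up toward Nyquist:

```python
import numpy as np
import matplotlib.pyplot as plt

fs = 44100.0                        # sample rate, Hz
f = 13001.0                         # a tone well up toward Nyquist looks "jagged" as raw dots
n = np.arange(64)                   # a short run of sample indices
x = np.sin(2 * np.pi * f * n / fs)  # the stored sample values

# Whittaker-Shannon interpolation: one sinc pulse per sample, summed on a fine time grid.
t = np.linspace(0, (len(n) - 1) / fs, 4000)
y = np.sum(x[:, None] * np.sinc(fs * t[None, :] - n[:, None]), axis=0)

plt.plot(t * 1000, y, label="reconstructed")          # a smooth sine (edge effects aside)
plt.plot(n / fs * 1000, x, ".", label="raw samples")  # the "jagged" join-the-dots view
plt.xlabel("time (ms)")
plt.legend()
plt.show()
```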

xiphmont’s ‘There is no point to distributing music in 24 bit/192 kHz’

Reply #102
The deviations can't be that large since obviously PCM works, but I'm wondering how large they can be for those frequencies that creep up closer to the given sampling rate.

And another link:

http://www.hydrogenaudio.org/forums/index....

Now read the posts, especially the ones from Woodinville. I don't understand all of this, but since you asked... Btw, if you use the search here you'll find several similar threads dealing with such things.
Is troll-adiposity coming from feederism?
With 24bit music you can listen to silence much louder!

xiphmont’s ‘There is no point to distributing music in 24 bit/192 kHz’

Reply #105
No disagreements there. I was just wondering how it can perfectly reproduce the wave when the sample points have to be approximated: the ideal value for each one lies at the end of an infinite series of decimals, so naturally they get truncated at some point. What I'm wondering is whether this truncation implies any errors in the reproduction, either in amplitude or in the resulting frequency itself.


The displayed sample points only appear to be chaotic. They are actually accurately positioned to the limits of the numeric resolution. There is one sine curve that exactly (to the limits of the resolution) passes through all of the sampling points. Any inaccuracy introduced by "truncation of an infinite series" appears as noise in the output from the D to A conversion. This noise will be equal to or less than that represented by the least significant bit of the digital signal. 

When you digitise a signal to 16 bits, each sample has to be represented by one of 65536 different numbers. If it falls part way between two possible values, it is set to the nearest. This means the digitised value can be up to half a "bit's worth" different from the original input. When you reconvert it back to analog, the output can be up to half a bit above or below the original. This is noise, because it is different from the original signal.
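If you want to see that half-a-bit bound numerically, here is a small sketch (Python/numpy assumed, plain rounding, no dither):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 1_000_000)   # an arbitrary full-scale test signal

# Quantize to 16 bits: scale to integer steps and round to the nearest step.
q = np.round(x * 32767) / 32767

err = q - x                              # quantization error
lsb = 1 / 32767                          # one step, i.e. one "bit's worth"
print("max |error|:", np.max(np.abs(err)) / lsb, "LSB")   # about 0.5
print("error RMS  :", np.std(err) / lsb, "LSB")           # about 0.29 (1/sqrt(12))
```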

For a 16 bit signal, this noise is insignificant. Try it for yourself. Use Reaper or Audacity to generate a 1 kHz tone using all 16 bits (0 dBFS). Do it again using only the least significant bit (-96 dBFS). Play the 16 bit signal at the loudest level you can stand. Without changing the volume setting, play the 1-bit signal. Can you hear it? (I doubt it.)
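If you'd rather script the experiment than click through an editor, a rough sketch (Python with numpy and the standard wave module; 16-bit mono output assumed):

```python
import numpy as np
import wave

fs = 44100
t = np.arange(fs * 5) / fs                           # 5 seconds

def write_tone(filename, amplitude):
    """Write a 1 kHz sine at the given linear amplitude (1.0 = 0 dBFS) as 16-bit WAV."""
    x = amplitude * np.sin(2 * np.pi * 1000 * t)
    pcm = np.round(x * 32767).astype(np.int16)
    with wave.open(filename, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)                            # 2 bytes = 16 bits
        w.setframerate(fs)
        w.writeframes(pcm.tobytes())

write_tone("tone_0dBFS.wav", 1.0)                    # full scale
write_tone("tone_-96dBFS.wav", 10 ** (-96 / 20))     # peaks round to about one LSB
```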

Once you understand this, we'll move on to dither.

Edit: mjb2006's picture would have saved me a thousand words if I'd seen it in time...
Regards,
   Don Hills
"People hear what they see." - Doris Day

xiphmont’s ‘There is no point to distributing music in 24 bit/192 kHz’

Reply #106
True, but each time they repeat they are different than before,


I didn't realize you had picked such an unround number as 13001, so the periods you showed don't actually repeat exactly.  But the function is still periodic, just with a much longer period.  They have to repeat every LCM(44100,13001) samples given that the underlying sin function is itself periodic. 

dividing the sampling rate by that particular frequency results in a number with an infinite number of decimal places; approximating the location of one sampling point will influence the position of the next sample point, and so on


This isn't true because subsequent values of sin(x) do not depend on previous values.  How could they?  sin(x) has an infinite domain, so if it could only be computed recursively it would be impossible to compute it at all.
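A quick numerical check of the periodicity (Python assumed, using the 44100/13001 numbers above):

```python
import math
import numpy as np

fs, f = 44100, 13001
# x[n] = sin(2*pi*f*n/fs) repeats once f*N/fs is an integer, i.e. after
# N = fs / gcd(f, fs) samples; lcm(fs, f) samples is a (much larger) multiple of that.
N = fs // math.gcd(f, fs)
print("gcd:", math.gcd(f, fs), "-> minimal period:", N, "samples")  # 44100 samples = 1 second

n = np.arange(2 * N)
x = np.sin(2 * np.pi * f * n / fs)
print("max difference one period apart:", np.max(np.abs(x[:N] - x[N:2 * N])))
# essentially zero -- only floating-point rounding
```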

xiphmont’s ‘There is no point to distributing music in 24 bit/192 kHz’

Reply #107
Your application is displaying it incorrectly!
No need to shout
Why not point him to an application that does it right (or at least better)? My favorite (and cross-platform) is iZotope RX, which has a free demo version available here.
You can vary the "Waveform interpolation order" in the Preferences/Display between 0 and 64.
Nothing beats empirical evidence IMO.

xiphmont’s ‘There is no point to distributing music in 24 bit/192 kHz’

Reply #108
Probably because I am Windows/Adobe Audition-centric and don't know of other programs.  I'm shouting because I provided an immediate (and correct!) explanation to the dilemma, which was ignored, and now the discussion has needlessly gone off-topic.  I'm tempted to split it off now.

xiphmont’s ‘There is no point to distributing music in 24 bit/192 kHz’

Reply #109
No disagreements there, I was just wondering how it can perfectly reproduce the wave when the sample points have to be approximated


By 'knowing', in the appropriate sense, that it is a sine wave. In principle, not very far from the way that I can perfectly reproduce a line using merely two distinct points: by knowing that it is a line.

Not that it matters. A 20 kHz sine wave and a 20 kHz triangle-shaped wave (or whatever shape!) sound the same, as the difference is above the hearing limit. Only the sine wave part of the triangle will contribute to the ≤20 kHz range.

xiphmont’s ‘There is no point to distributing music in 24 bit/192 kHz’

Reply #110
No disagreements there, I was just wondering how it can perfectly reproduce the wave when the sample points have to be approximated


Look up the Nyquist–Shannon sampling theorem. Because it's known that the highest possible represented frequency is X, there is only 1 possible frequency below X that can match any 2 sample values. Based on that fact the DAC will "connect" the 2 sample points with a voltage curve that matches that single possible frequency below X...

The fact that the samples can only be quantized to 65536 levels introduces what's known as quantization noise. At 16 bit resolution you can basically ignore it for audio delivery.

Not that it matters. A 20 kHz sine wave and a 20 kHz triangle-shaped wave (or whatever shape!) sound the same, as the difference is above the hearing limit. Only the sine wave part of the triangle will contribute to the ≤20 kHz range.


It should be stated that a 20 kHz triangle wave can't be represented at a 44.1 kHz sample rate. It would be low-pass filtered to a sine wave before being sampled.
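A quick way to convince yourself of that (a sketch, Python assumed): build the 20 kHz triangle from its Fourier series and drop every harmonic at or above fs/2. Only the fundamental survives, so the band-limited "triangle" is just a sine.

```python
import numpy as np

fs = 44100.0
f0 = 20000.0
t = np.arange(1024) / fs

# Fourier series of a triangle wave: odd harmonics k = 1, 3, 5, ... with
# amplitude proportional to 1/k^2 and alternating sign (overall scale omitted).
x = np.zeros_like(t)
kept = []
for i, k in enumerate(range(1, 101, 2)):
    if k * f0 >= fs / 2:
        break                     # at or above Nyquist: gone after the anti-alias filter
    x += (-1) ** i / k**2 * np.sin(2 * np.pi * k * f0 * t)
    kept.append(k)

print("harmonics kept below Nyquist:", kept)   # [1] -- only the 20 kHz fundamental fits
```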

xiphmont’s ‘There is no point to distributing music in 24 bit/192 kHz’

Reply #111
there is only 1 possible frequency below X that can match any 2 sample values. Based on that fact the DAC will "connect" the 2 sample points with a voltage curve that matches that single possilbe frequency below X...


That is wrong. The number of samples required approaches infinity as the frequency approaches half the sampling frequency. For any frequency below that, you can make do with a finite (but possibly large) number of samples.

Simplify down to 3 bits, i.e. values 0 through 7. Pick two successive samples, say, a 4 and a 3. Matched by a 7654321012345676543210123456 sequence, a 6564321234565432345 sequence, a 543234543234543234 sequence and a 4343434343434343434343.  What's the lowest frequency in each of those signals?
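The same point in code (a sketch, Python assumed): for many different frequencies below Nyquist you can fit a sinusoid that hits 4 and then 3 on two successive samples, so two samples alone don't pin the frequency down.

```python
import numpy as np

fs = 44100.0

def sinusoid_through(f, x0, x1):
    """Return a, b so that a*sin(w*n) + b*cos(w*n) equals x0 at n=0 and x1 at n=1."""
    w = 2 * np.pi * f / fs
    b = x0                                   # value at n = 0
    a = (x1 - b * np.cos(w)) / np.sin(w)     # forces the value at n = 1
    return a, b

n = np.arange(8)
for f in (1000.0, 5000.0, 15000.0, 21000.0):
    a, b = sinusoid_through(f, 4.0, 3.0)
    x = a * np.sin(2 * np.pi * f * n / fs) + b * np.cos(2 * np.pi * f * n / fs)
    print(f"{f:7.0f} Hz:", np.round(x, 2))   # every row starts 4.0, 3.0, then they diverge
```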

xiphmont’s ‘There is no point to distributing music in 24 bit/192 kHz’

Reply #112
I know the waveform can be rebuilt since obviously I can hear it, but I can't help but suspect that it's more of an imperfect reconstruction compared to pretty much anything lower than an 11025 Hz sine.


I'll put your mind at rest. A digital reconstruction of any digitized or synthesized analog signal is always imperfect for theoretical and/or practical reasons.

I think you're asking the wrong question. The most important question is not whether the reconstruction is imperfect, but rather how those imperfections compare to what you get when you do the same thing by other means, for example by keeping the signal in the analog domain.

When it comes to storing music, digital from day one of consumer digital audio back in 1982 has always been vastly more perfect than analog media.

When it comes to playing music, the digital outputs of music players are always vastly more perfect than their analog outputs, at any price from high to low.

When it comes to passing music through even a single audio component such as an AVR the same thing is true.  Digital (DSP-based) AVRs have always been vastly more perfect than analog AVRs at any price from high to low.

When it comes to loudspeakers, if we had implementations of speakers that were as sophisticated as our AVRs, the speakers that were more fully implemented in the digital domain would have a greater potential to be accurate.  That day is coming!


xiphmont’s ‘There is no point to distributing music in 24 bit/192 kHz’

Reply #113
I know the waveform can be rebuilt since obviously I can hear it, but I can't help but suspect that it's more of an imperfect reconstruction compared to pretty much anything lower than an 11025 Hz sine.


I'll put your mind at rest. A digital reconstruction of any digitized or synthesized analog signal is always imperfect for theoretical and/or practical reasons.


I'd note here that an analog recording of an acoustic event -- say, a sound recorded on magnetic tape -- is always imperfect.  And an analog reformatting of a recorded signal (e.g., an LP pressing sourced from a tape, or from digital) is also always imperfect.  Look at how the imperfections pile up!


xiphmont’s ‘There is no point to distributing music in 24 bit/192 kHz’

Reply #114
Just want to point out that there's a substantial difference in terms of peer review between AES convention presentations/publications and JAES publications.  Oohashi et al. never made it past convention, as far as I can tell.  Their work ended up in a low-impact neurophysiology journal.


Ah, I suppose that would be an important distinction, and one I wasn't aware of.  I've been an AES member on and off over the past 15 years, but mostly for the journal access.  I find it hard enough to make the local BAS meetings :-)

My point was that the publication (presentation) of the paper did not automatically elevate it beyond a status of 'let's discuss', not that the AES should be faulted for presenting it.

xiphmont’s ‘There is no point to distributing music in 24 bit/192 kHz’

Reply #115
Quote
Papers with interesting ideas and no data (e.g., the J. Dunn 'equiripple filters cause pre-echo' paper, which presents a fascinating insight, even if it doesn't work out in practice)

Dunn refers to R. Lagadec and T. G. Stockham, ‘Dispersive Models for A-to-D and D-to-A Conversion Systems’ for data.  Is there data elsewhere to support that it doesn't work out in practice?


Dunn says something a little different than Lagadec, although Lagadec was certainly hinting hard at the direction Dunn went. 

In any case, Dunn presents simulations of an approximation that's a partial fit to the frequency responses of several DAC reconstruction filters in the field at the time.  The partial-ness of the fit is not accounted for in the simulated figures, nor does Dunn show any measurements of the actual filters to show if and how well they match the approximation.  Thus we don't know how good or bad the approximation is in practice.

But my primary thought is that it's a filter strategy and family that aren't used in DACs anymore.  This work is from the bad old days of high-order analog equiripple IIR anti-imaging filters, which at the time were dominant.  This is not a fault of the paper in any way, but we do not use these filters today and so it would seem the concerns simply do not apply anymore.

Of course, we also have test equipment today that should make verifying that assumption pretty easy.  I've been wondering when I'd have a lazy weekend afternoon with nothing better to do than go explore the Dunn paper experimentally.




Re: xiphmont’s ‘There is no point to distributing music in 24 bit/192 kHz’

Reply #119
There was quite a bit of content on the "people.xiph.org/~xiphmont" site, and it seems it's now all gone. IDK if it's intentional, but it seems weird.

There were tech demos related to Opus, Vorbis, and AFAIK, some video formats too.

IDK how long the archive on the Wayback Machine is kept, but I think it would be worth restoring the content that used to be there or setting up a proper mirror.



Re: xiphmont’s ‘There is no point to distributing music in 24 bit/192 kHz’

Reply #122
Because everybody needs to be able to hear 700kHz sounds... /s

Re: xiphmont’s ‘There is no point to distributing music in 24 bit/192 kHz’

Reply #123
768kHz now ;D
https://www.soundliaison.com/index.php/studio-masters/856-ray-carmen-gomes-inc

I was curious about the recording sampling rate. I tried to find this info in the booklet, which is freely available on nativedsd:
https://www.nativedsd.com/product/sl1052a-ray/

Quote
Recording, mixing and mastering by Frans de Rond. Produced by Peter Bjørnild. Music arranged by Peter Bjørnild with lots of help from Carmen, Folker and Bert. Recorded at MCO, Studio 2, Hilversum, The Netherlands, on the 12th of May and the 24th of August 2019. Total time: 46:17
Catalog Number: SL-1052A
Original recording format DXD 352,8 kHz
The original recording is analog mixed and mastered to tape using a Studer A80. All other formats are converted versions of the original.

So if I understand correctly:
— sound is digitally recorded in the DXD "format" (PCM 352.8 kHz) [A-D encoding]
— the mix and master are then recorded to analog tape [D-A]
— then it's converted again in the digital domain… at twice the original sampling rate  :o
The whole process looks curious to me.


For the sake of curiosity I bought a 32 bit DXD triple album:
https://trptk.com/catalogue/miscellanea/
The downloaded tracks were in .wav format and zip compressed: easy to use but not very efficient.
Uncompressed WAV = 29.3 GB at 22579 kbps; zip ≈ 26 GB; WavPack = 18.3 GB at 14038 kbps "only".
A multichannel version is also available in their catalogue, so its size must reach 80 GB!

I converted it to lossy for my smartphone and the size shrank to less than 200 MB. The bitrate is now much more prosaic but the sound quality is just as enjoyable ;)

Re: xiphmont’s ‘There is no point to distributing music in 24 bit/192 kHz’

Reply #124
Perhaps to capture the so-called "tape sound" from some vintage Studer machines. And yes, FLAC can no longer handle these sample rates, and there's no usable 32-bit encoder. WavPack has a chance to dominate the Hi-Res market now 8)