Topic: Is the 'digital' in audio mostly 'lossy digital'?

Is the 'digital' in audio mostly 'lossy digital'?

Hey all,

I've got a programmer background and I did some "crazy", crappy computer "music programming" back in the day, like writing a .mod player for the Atari ST that could play Amiga .mod files. So, well, I do know a bit about bits.

But I'm a bit lost when it comes to "digital audio" and what the "digital" really means here.

So I'll take an example: imagine I generate a lossless .wav at 48 kHz / 24-bit on my computer. That file is lossless. There cannot be any loss here: I generated that .wav and it is the way I meant it.

Now I use a "digital" USB to SPDIF adaptor like the M2Tech HiFace to "send" that .wav from my computer to a DAC. I understand a stream of 0's and 1's is sent and that, due to jitter, the stream won't arrive "perfectly" at the DAC.

So it's "digital" because, indeed, we're sending 0's and 1's.  But it's also "lossy" (not in the sense that it would be using a lossy compression algorithm, but in the sense that some 0's and 1's may be "lost")?  And hence the DAC must perform some "magic" to determine the missing bits? Do I get that part correct?

If I got the above correct (please be kind with me if I was totally off), then I've got another question: why do some people do (it's just an example) .wav -> USB -> SPDIF -> DAC and not just use some fully bit-perfect way of sending the bits from the .wav to the DAC?

I mean, I work daily with encrypted files that are hundreds of megabytes in size, where if a single bit is off the file is void and unusable.

Why aren't there DACs where the 'D' part gets its input in a bit-perfect way? (Or does this exist?)

That is the thing that is baffling me: why are all these "digital links" in the audio world apparently "digital but with random bit losses" instead of pure bit-perfect connections like the ones I get by using a $20 Ethernet switch?

Can I buy a DAC that has an Ethernet plug and a little buffer, can receive bit-perfect input without any jitter issues, and would then transform that real 100% bit-perfect source into an analog signal? This would eliminate the SPDIF 'digital but with random bit loss' jitter problem, wouldn't it!?


Is the 'digital' in audio mostly 'lossy digital'?

Reply #1
Assuming everything is in normal working order, the digital signal stays "bit-perfect" as long as it is digital (as I demonstrated here: http://www.hydrogenaudio.org/forums/index....t&p=626748).

The possible timing issues that may have an effect on the analog output happen inside the DAC circuit and can only be evaluated by inspecting the analog output. The analog output is never bit-perfect, because it is analog.

Is the 'digital' in audio mostly 'lossy digital'?

Reply #2
Quote
But I'm a bit lost when it comes to "digital audio" and what the "digital" really means here.


"Digital" means discrete in time and/or by magnitude. Usually both.

Quote
Now I use a "digital" USB to SPDIF adaptor like the M2Tech HiFace to "send" that .wav from my computer to a DAC. I understand a stream of 0's and 1's is sent and that, due to jitter, the stream won't arrive "perfectly" at the DAC.

So it's "digital" because, indeed, we're sending 0's and 1's. But it's also "lossy" (not in the sense that it would be using a lossy compression algorithm, but in the sense that some 0's and 1's may be "lost")?


No. Zeroes and ones are not lost due to jitter. Jitter causes digital samples to be misplaced in time, which in turn leads to distortion in the analog waveform that is reconstructed from these samples. In practical systems this distortion is below the audibility threshold.
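To get a feel for how small that distortion is, here is a rough numpy sketch (the 1 ns RMS jitter figure is an assumption for illustration, not a measurement of any real DAC):

Code
import numpy as np

fs = 48_000            # nominal sample rate (Hz)
f = 1_000              # test tone (Hz)
n = np.arange(fs)      # one second of samples

ideal_t = n / fs                                   # perfect sample instants
jitter = np.random.normal(0.0, 1e-9, size=n.size)  # 1 ns RMS timing error
actual_t = ideal_t + jitter                        # jittered sample instants

ideal = np.sin(2 * np.pi * f * ideal_t)
jittered = np.sin(2 * np.pi * f * actual_t)        # every bit intact, timing off

err = jittered - ideal
print(f"RMS error: {np.sqrt(np.mean(err ** 2)):.2e}")
# Comes out around 4e-6 of full scale, roughly -107 dB: the waveform is
# very slightly misshapen, but nothing resembling "lost bits".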

Quote
If I got the above correct (please be kind with me if I was totally off), then I've got another question: why do some people do (it's just an example) .wav -> USB -> SPDIF -> DAC and not just use some fully bit-perfect way of sending the bits from the .wav to the DAC?


The path you mentioned is bit-perfect when properly configured (that is, when no intermediate processing is taking place in software).

Quote
That is the thing that is baffling me: why are all these "digital links" in the audio world apparently "digital but with random bit losses"


They are not.

Quote
Can I buy a DAC that has an Ethernet plug and a little buffer, can receive bit-perfect input without any jitter issues, and would then transform that real 100% bit-perfect source into an analog signal? This would eliminate the SPDIF 'digital but with random bit loss' jitter problem, wouldn't it!?


Digital players/receivers with a network interface do exist, but technically they are no different from S/PDIF: the buffering can just as well be used with the latter. The difference is that with an Ethernet interface the clock is transferred as metadata and synthesized locally by the DAC, while with S/PDIF the clock is recovered from the isochronous data stream. But reclocking can be used with S/PDIF too, and some DAC boxes implement it. It's just not really necessary, since the phase-locked loop in the receiver allows sufficiently reliable clock recovery.
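Roughly, the two clocking schemes look like this (a toy sketch with invented names, and a simple first-order filter standing in for the receiver's PLL, not any real DAC's implementation):

Code
def recover_rate_spdif(arrival_times, alpha=0.01):
    """S/PDIF-style: estimate the sender's clock from inter-frame intervals,
    smoothing the jittery measurements (a crude stand-in for a PLL)."""
    period = arrival_times[1] - arrival_times[0]
    for prev, cur in zip(arrival_times, arrival_times[1:]):
        period += alpha * ((cur - prev) - period)
    return 1.0 / period

def rate_from_metadata(declared_rate_hz):
    """Ethernet-style: the nominal rate arrives as metadata and the DAC
    synthesizes it with its own local oscillator; network timing noise
    never reaches the output clock."""
    return float(declared_rate_hz)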


Is the 'digital' in audio mostly 'lossy digital'?

Reply #4
Thanks a lot for your detailed answer.

Quote
No. Zeroes and ones are not lost due to jitter. Jitter causes digital samples to be misplaced in time, which in turn leads to distortion in the analog waveform that is reconstructed from these samples. In practical systems this distortion is below the audibility threshold.


To be sure I kinda understand this correctly: a digital sample misplaced in time is not the same thing as a bit (or several bits) being lost?

All bits make it, in the correct order, to the DAC, and it's just the "timing" between the sent bits that can vary?

So a SPDIF cable could be used to send, say, binary data in a totally bit-perfect way?

Is the 'digital' in audio mostly 'lossy digital'?

Reply #5
Quote
So a SPDIF cable could be used to send, say, binary data in a totally bit-perfect way?


Yes, a 1€ cable will do the job perfectly.

btw, even a sheet of paper will "send" and save binary data in a totally bit-perfect way! That's the beauty of digital.
.halverhahn

Is the 'digital' in audio mostly 'lossy digital'?

Reply #6
Quote
To be sure I kinda understand this correctly: a digital sample misplaced in time is not the same thing as a bit (or several bits) being lost?


No bits are lost; just the time between them might be slightly off. So instead of 20.83 microseconds between samples you might get 20.8301 microseconds, which means the waveform is slightly misshapen. Usually, though, the timing errors are so small they don't matter unless the equipment is really bad. Timing kHz signals isn't too hard in an age of GHz computers.
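As a back-of-envelope check on those numbers (the full-scale 20 kHz sine is an assumed worst case, not something from the thread):

Code
import math

dt = 20.8301e-6 - 20.83e-6        # the 0.1 ns timing error from the example above
f_max = 20_000                    # highest audible frequency (Hz)
err = 2 * math.pi * f_max * dt    # max waveform slope times the timing error
print(f"worst-case error: {err:.1e} ({20 * math.log10(err):.0f} dBFS)")
# ~1.3e-05 of full scale, about -98 dBFS: well below audibility.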


Is the 'digital' in audio mostly 'lossy digital'?

Reply #7
S/PDIF is not a purely digital transmission standard. The sample values are encoded digitally, but the samplerate is transmitted implicitly as an analog, physical dimension: elapsed time between frames. A DAC tries to reconstruct the samplerate from the S/PDIF stream and uses it to clock its analog output. A purely digital transmission standard would send data packets and include meta-information about the rate at which a discrete clock within the DAC should be operated.

Edit: I just noticed that alexeysp has essentially already written the same.
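For the curious, the "clock embedded in the signal" property comes from the line code S/PDIF uses, biphase-mark: every bit cell starts with a transition, and a one adds a second transition mid-cell, so the receiver can always time the cells. A tiny sketch of just the encoder (illustrative only; it ignores S/PDIF's framing and preambles):

Code
def biphase_mark_encode(bits, level=0):
    """Encode bits as two half-cells each: toggle at every cell boundary,
    and toggle again mid-cell for a '1'."""
    halves = []
    for b in bits:
        level ^= 1            # guaranteed transition at the cell boundary
        halves.append(level)
        if b:
            level ^= 1        # extra mid-cell transition marks a '1'
        halves.append(level)
    return halves

print(biphase_mark_encode([1, 0, 1, 1]))  # [1, 0, 1, 1, 0, 1, 0, 1]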

Is the 'digital' in audio mostly 'lossy digital'?

Reply #8
Quote
S/PDIF is not a purely digital transmission standard. The sample values are encoded digitally, but the samplerate is transmitted implicitly as an analog, physical dimension: elapsed time between frames. A DAC tries to reconstruct the samplerate from the S/PDIF stream and uses it to clock its analog output. A purely digital transmission standard would send data packets and include meta-information about the rate at which a discrete clock within the DAC should be operated.

Edit: I just noticed that alexeysp has essentially already written the same.


Thanks a lot to you too. I understand better what goes through S/PDIF now.

Is the 'digital' in audio mostly 'lossy digital'?

Reply #9
Quote
S/PDIF is not a purely digital transmission standard. The sample values are encoded digitally, but the samplerate is transmitted implicitly as an analog, physical dimension: elapsed time between frames. A DAC tries to reconstruct the samplerate from the S/PDIF stream and uses it to clock its analog output. A purely digital transmission standard would send data packets and include meta-information about the rate at which a discrete clock within the DAC should be operated.

Edit: I just noticed that alexeysp has essentially already written the same.

How do you specify the clock rate in a packet if you cannot trust either the sender- or receiver-side clocks to be 100% accurate?

S/PDIF allows lossless digital transmission if a finite-length waveform is transmitted in its entirety before presentation.

1-way packet-based communication allows lossless digital transmission if a finite-length waveform is transmitted in its entirety before presentation.

I don't think the means of clock synchronization makes that big of a difference in principle. It could have large practical effects, though.

-k

Is the 'digital' in audio mostly 'lossy digital'?

Reply #10
Quote
How do you specify the clock rate in a packet if you cannot trust either the sender- or receiver-side clocks to be 100% accurate?


You specify the ideal rate that was targeted at the time of recording. Both the PCM encoder and decoder are supposed to do their best to match it. It is no flaw of the digital transmission channel if the encoder or decoder isn't calibrated properly.

Once a PCM recording has been saved to your disk, the ideal, originally targeted sample rate is the only information available to interpret the recorded data. Transmitting the intended sample rate digitally ensures maximum information recovery within the bounds of the decoder's clock precision.

S/PDIF transmission adds an additional layer of possible inaccuracy. A file on your disk already contains the recorder's clock inaccuracies. When you play it back over S/PDIF, your player's clock inaccuracy is added to the signal, because the sample rate is transmitted as analog timing information. The receiver is left in the dark about the original, intended sample rate and instead tries to recover your player's sample rate (if your player's clock is more accurate than your DAC's, that's not a bug but a feature). Without the list of pre-negotiated rates discussed below, the exact intended sample rates would be lost during transmission.

Edit: Strictly speaking, you quantize the signal by pre-negotiating a discrete set of sample rates. Within bounds one could thus claim that timing information is, despite its analog nature, transmitted discretely.

Quote
S/PDIF allows lossless digital transmission if a finite-length waveform is transmitted in its entirety before presentation.


This only works by guessing, and only for a limited, pre-negotiated set of sample rates. If the receiver detects an average sample rate of 44099 Hz, it will correct the bitstream to 44100 Hz, because that is the closest pre-negotiated rate. In practice, this works great.
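The guessing step is just nearest-neighbour snapping; something like this sketch (the rate list reflects the common consumer rates, and the function name is made up for illustration):

Code
STANDARD_RATES = [32_000, 44_100, 48_000, 88_200, 96_000, 176_400, 192_000]

def snap_to_negotiated(measured_hz):
    """Pick the pre-negotiated rate closest to the measured average rate."""
    return min(STANDARD_RATES, key=lambda r: abs(r - measured_hz))

print(snap_to_negotiated(44_099))  # -> 44100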

Quote
1-way packet-based communication allows lossless digital transmission if a finite-length waveform is transmitted in its entirety before presentation.


Since the intended sample rate is transmitted digitally, this always works.

Quote
I don't think the means of clock synchronization makes that big of a difference in principle. It could have large practical effects, though.


In professional environments, ideal rates are transmitted digitally (unlike with S/PDIF) and output clocks are synchronized explicitly (over clock-synchronization protocols). For re-clocking DACs at home, even small amounts of buffering are sufficient.

Disclaimer:

This is all irrelevant for your home audio enjoyment. The magnitudes discussed here are laboratory scale, far below even your faintest thresholds of hearing.

Is the 'digital' in audio mostly 'lossy digital'?

Reply #11
Quote
So I'll take an example: imagine I generate a lossless .wav at 48 kHz / 24-bit on my computer. That file is lossless. There cannot be any loss here: I generated that .wav and it is the way I meant it.

Now I use a "digital" USB to SPDIF adaptor like the M2Tech HiFace to "send" that .wav from my computer to a DAC. I understand a stream of 0's and 1's is sent and that, due to jitter, the stream won't arrive "perfectly" at the DAC.

So it's "digital" because, indeed, we're sending 0's and 1's. But it's also "lossy" (not in the sense that it would be using a lossy compression algorithm, but in the sense that some 0's and 1's may be "lost")? And hence the DAC must perform some "magic" to determine the missing bits? Did I get that part correct?


No.

In general, it is assumed that all S/PDIF connections have no data losses whatsoever. If they do lose data, you get clicks and pops, and if things are bad enough, you get silence from the DAC receiving the signal. The jitter is usually small enough that there is no data loss.


Quote
If I got the above correct (please be kind with me if I was totally off), then I've got another question: why do some people do (it's just an example) .wav -> USB -> SPDIF -> DAC and not just use some fully bit-perfect way of sending the bits from the .wav to the DAC?


In general it is assumed that data transmitted over an S/PDIF link is received bit-perfect.

Quote
I mean, I work daily with encrypted files that are hundreds of megabytes in size, where if a single bit is off the file is void and unusable.


Audio and video have more relaxed requirements.

Quote
Why aren't there DACs where the 'D' part gets its input in a bit-perfect way? (Or does this exist?)


It is usually assumed that data transmitted over S/PDIF is received bit-perfect, and the assumption generally holds. S/PDIF lines are relatively short, never cross-country. If we send data across the country, we generally package it up in some error-checked, retry-on-error protocol with CRCs.
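The contrast in a sketch (hypothetical framing; real bulk-transfer protocols such as TCP handle this for you, while S/PDIF just keeps streaming regardless):

Code
import zlib

def send_with_retry(payload, transmit, max_tries=5):
    """Bulk-data style: verify a checksum and retransmit until it matches."""
    for _ in range(max_tries):
        received = transmit(payload)
        if zlib.crc32(received) == zlib.crc32(payload):
            return received          # verified bit-perfect
    raise IOError("link too unreliable")

def stream_audio(frames, transmit):
    """S/PDIF style: fire and forget; a corrupted frame is at worst a click."""
    for frame in frames:
        transmit(frame)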

Quote
That is the thing that is baffling me: why are all these "digital links" in the audio world apparently "digital but with random bit losses" instead of pure bit-perfect connections like the ones I get by using a $20 Ethernet switch?


The short links like the one from your PC to a DAC are bit-perfect enough to get the job done.



Quote
Can I buy a DAC that has an Ethernet plug and a little buffer, can receive bit-perfect input without any jitter issues, and would then transform that real 100% bit-perfect source into an analog signal? This would eliminate the SPDIF 'digital but with random bit loss' jitter problem, wouldn't it!?


Many modern audio DACs can tolerate jitter. In some cases they just fix up the recovered clock by running it through a PLL or some such. Some DACs (e.g. in surround receivers) clock the data into a buffer with a receive clock that is good enough to avoid data loss. They then clock the data out of the buffer with a really good, but variable-speed, data clock. If the buffer gets too empty, they slow down the data clock. If the buffer gets too full, they speed up the data clock. All adjustments are done very slowly so you don't hear the changes. The average amount of data in the buffer causes latency, but within reason that is no problem. Optical players work the same way, and the data they get off the disc is pretty nasty, jitter-wise. They can fix it up far better than any reasonable requirement that is based on audibility.
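That buffer-steering scheme fits in a few lines; a toy sketch (all names and constants invented for illustration, not taken from any actual receiver):

Code
class ReclockingBuffer:
    """Fill with the jittery receive clock, drain with a clean local clock
    that is nudged very slowly toward keeping the buffer half full."""

    def __init__(self, nominal_rate=48_000, depth=4_096):
        self.buf = []
        self.depth = depth
        self.out_rate = float(nominal_rate)

    def push(self, sample):      # driven by the recovered (jittery) clock
        self.buf.append(sample)

    def pop(self):               # driven by the local output clock
        fill = len(self.buf) / self.depth
        # Adjust by about a ppm per step: far too slow to hear as pitch drift.
        self.out_rate *= 1.0 + 1e-6 * (fill - 0.5)
        return self.buf.pop(0) if self.buf else 0   # underrun -> output silence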

The digital inputs on a lot of DACs are so self-adjusting that if you set them for the wrong clock frequency, they just might work to spite you. They self-adjust to the actual average clock frequency that the data comes in on.

I've seen DACs set for 44.1 kHz work with data at 48 kHz without any manual adjustments, for example. If the DAC is connected to a speaker, then usually no problemo, but if you put the data into a computer file and later on play it at the wrong clock frequency, it will have the wrong pitch.

Is the 'digital' in audio mostly 'lossy digital'?

Reply #12
Quote
So I'll take an example: imagine I generate a lossless .wav at 48 kHz / 24-bit on my computer. That file is lossless. There cannot be any loss here: I generated that .wav and it is the way I meant it.

Now I use a "digital" USB to SPDIF adaptor like the M2Tech HiFace to "send" that .wav from my computer to a DAC.


It should maybe be noted that most (?) sound cards will resample to 48 kHz (or 96 or 192) if the bit stream you created is not at one of these frequencies. For example, a 44.1 kHz signal will then not arrive "bit-perfect". (I don't know the particular USB adaptor you mention.)

(By the way, I assume that by ".wav" you meant that you have saved the signal losslessly (which is what is commonly done when you write to .wav) before sending it off. Of course the actual file is not sent over S/PDIF.)
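To illustrate why driver-side resampling breaks bit-perfectness: resampling 44.1 kHz material to 48 kHz and back does not return the original samples. A quick sketch (requires scipy; the random noise input is just a stand-in for real audio):

Code
import numpy as np
from scipy.signal import resample_poly

x = np.random.uniform(-1, 1, 44_100)    # one second of 44.1 kHz "audio"
y = resample_poly(x, 160, 147)          # 44.1 kHz -> 48 kHz (ratio 160/147)
z = resample_poly(y, 147, 160)          # and back to 44.1 kHz
print(np.array_equal(x, z))             # False: no longer bit-identical
print(np.max(np.abs(x - z)))            # small, but nonzero, error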


Is the 'digital' in audio mostly 'lossy digital'?

Reply #14
Quote
most (?) sound cards


I'd suggest correcting this to "some old (and nowadays rare) Creative and AC97 sound cards".


What? Progress? Cannot have been intentional.