
Resampling and Fidelity

They also support resampling to higher sampling rates, which is also pointless. Playback at the encoded sampling rate is actually technically higher-fidelity than anything involving a resampler.

What is the argument behind your second sentence?

The way I see this is that all/most modern D/A converters work at some high, unknown sample rate. Playback of e.g. 44.1 kHz material will involve one or more sample-rate conversions; the question is only which gives the prettiest measurements.

-k

Resampling and Fidelity

Reply #1
The way I see this is that all/most modern D/A converters work at some high, unknown sample rate. Playback of e.g. 44.1 kHz material will involve one or more sample-rate conversions; the question is only which gives the prettiest measurements.

Let's assume that you're correct and that all/most modern D/A converters work at some high, unknown sample rate. I don't think this is accurate (neither of my "good" interfaces does: Edirol UA-25 and E-MU 0404, unless you're talking about internals past delivering audio at 44.1 kHz to the DAC), but let's assume.

Then, playback of e.g. 44.1kHz material will involve one or more sample-rate conversions. But playback of any material will involve SRC in hardware, unless you know this high sample-rate it's SRCing to. So why add one more SRC step in software when hardware's doing it already? High-fidelity focuses on altering the signal less, not more.

Resampling and Fidelity

Reply #2
Let's assume that you're correct and that all/most modern D/A converters work at some high, unknown sample rate. I don't think this is accurate (neither of my "good" interfaces does: Edirol UA-25 and E-MU 0404, unless you're talking about internals past delivering audio at 44.1 kHz to the DAC), but let's assume.

I am talking about the actual crystal driving the single-bit/few-bit hardware. "True" 44.1/48 kHz DACs are very uncommon, I believe. It is supposedly easier to make high-precision timing than high-precision amplitude. Therefore, current designs use only one bit (or usually a few) changed at a very high sample rate to produce something that is close to the theoretical limits of e.g. 16-bit/44.1 kHz.
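Here's a toy illustration of the idea (a first-order delta-sigma modulator in Python; real chips use higher-order modulators clocked at MHz rates, so treat this as a sketch of the principle, not any specific design):

Code:
import numpy as np

def delta_sigma_1bit(x):
    # First-order delta-sigma: a multi-bit input in [-1, 1] becomes a
    # +/-1 bitstream whose quantization error is pushed up in
    # frequency, above the audio band.
    out = np.empty_like(x)
    integrator = 0.0
    prev = 0.0
    for i, sample in enumerate(x):
        integrator += sample - prev   # accumulate the tracking error
        prev = 1.0 if integrator >= 0.0 else -1.0
        out[i] = prev
    return out

fs = 44100 * 64                       # a 64x oversampled bit clock
t = np.arange(fs // 100) / fs         # 10 ms of signal
bits = delta_sigma_1bit(0.5 * np.sin(2 * np.pi * 1000 * t))
# Lowpass `bits` and the 1 kHz tone comes back; the error lives far
# above 20 kHz, which is how one bit at a high rate can approach
# 16-bit/44.1 kHz performance.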
Quote
Then, playback of e.g. 44.1kHz material will involve one or more sample-rate conversions. But playback of any material will involve SRC in hardware, unless you know this high sample-rate it's SRCing to. So why add one more SRC step in software when hardware's doing it already? High-fidelity focuses on altering the signal less, not more.

The point is purely academic, I have no indications that this really matters.

There is no evidence, I believe, that converting from 44.1 kHz to a much higher rate, X, will always be better using unknown rate converter A than using two unknown stages, let's call them B1 and B2.

For this to have any relevance, the internal rate conversion of the DAC would not only have to be pretty bad, it would also have to differ in "badness" depending on the input sample rate.

-k

Resampling and Fidelity

Reply #3
Yeah, I carefully avoided saying that any of it has any effect on audio quality. It does have an impact on processor use, however.

Viewed in terms of signal-sent-to-interface, unprocessed signal is concretely higher-fidelity than processed signal. What the interface does to render its signal is something else altogether and is neither known nor the same for all interfaces. What we do know is that the data we are sending to the interface is either identical to the bits provided by the decoder, i.e. unresampled, or modified, i.e. resampled.

Arguing the case that the modified signal is the higher-fidelity one is ludicrous.

Resampling and Fidelity

Reply #4
The way I see this is that all/most modern D/A converters work at some high, unknown sample rate. Playback of e.g. 44.1 kHz material will involve one or more sample-rate conversions; the question is only which gives the prettiest measurements.

Let's assume that you're correct and that all/most modern D/A converters work at some high, unknown sample rate. I don't think this is accurate (neither of my "good" interfaces does: Edirol UA-25 and E-MU 0404, unless you're talking about internals past delivering audio at 44.1 kHz to the DAC), but let's assume.


Not much of an assumption. I don't think they've made 44.1 kHz DACs since the '80s, or 48 kHz DACs ever. Modern CMOS devices run at MHz rates because transistors are pretty fast these days.

Then, playback of e.g. 44.1kHz material will involve one or more sample-rate conversions. But playback of any material will involve SRC in hardware, unless you know this high sample-rate it's SRCing to. So why add one more SRC step in software when hardware's doing it already? High-fidelity focuses on altering the signal less, not more.


If you're converting the sampling rate, you're altering the signal.

There's not really "more" or "less". It's kind of a binary thing when you convert one signal into a completely different signal. I suspect you know this and are just trying to tiptoe around TOS #8 by not explicitly mentioning quality, when of course this is what you really mean.

Viewed in terms of signal-sent-to-interface, unprocessed signal is concretely higher-fidelity than processed signal.


This is just semantic crap. If you want to say that resampling hurts quality, then go for it. There's no need to bullshit like this though.

Resampling and Fidelity

Reply #5
Viewed in terms of signal-sent-to-interface, unprocessed signal is concretely higher-fidelity than processed signal. What the interface does to render its signal is something else altogether and is neither known nor the same for all interfaces. What we do know is that the data we are sending to the interface is either identical to the bits provided by the decoder, ie. unresampled, or modified, ie. resampled.

Arguing the case that the modified signal is the higher-fidelity one is ludicrous.


The big picture must be considered.

In the big picture, we *never ever* had the kind of performance with non-oversampling converters that we now routinely have with oversampled converters.

If you also consider price/performance, then the comparison is even more extreme. We've got sub-$5 chips that do things that formerly couldn't be done at any reasonable price.

The bad news may be that oversampling implies some kind of quality loss at the point of oversampling, but that loss is microscopic compared to the benefits that can only be obtained with oversampling.

To think otherwise is to agree with the blitzoid/nutzoid high-end audiophiles who are trying to bring back ca.-1980s non-oversampled converter chips and to solve the many problems of analog filtering by simply leaving the filters out. (I wish I were making this up!)



Resampling and Fidelity

Reply #6
So then I pose the question: Can resampling be done without negatively impacting the signal at all? I know I can't ABX a good resampler. I also know the software end, and it makes the most sense to me to send exactly the signal you want reproduced to the DAC, not some oversampled interpolation. Is this logic incorrect? I have real difficulty accepting that a modified signal is preferable to a pristine one anywhere up until the actual DAC. Modeling the system in terms of information content and entropy, it would seem like it should be possible to do this without losing any information, but I can't prove that.
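For what it's worth, the round trip is easy to probe empirically (a numpy/scipy sketch; the numbers are illustrative, not a claim about any particular player):

Code:
import numpy as np
from scipy.signal import resample_poly

fs = 44100
x = np.sin(2 * np.pi * 1000 * np.arange(fs) / fs)   # 1 kHz test tone

up = resample_poly(x, 2, 1)     # 44.1 kHz -> 88.2 kHz
back = resample_poly(up, 1, 2)  # ... and back down again

mid = slice(1000, -1000)        # skip the filters' edge transients
print("max round-trip error:", np.max(np.abs(back[mid] - x[mid])))
# Small, but not zero: the round trip is near-transparent rather
# than bit-exact.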

Resampling and Fidelity

Reply #7
So then I pose the question: Can resampling be done without negatively impacting the signal at all?


I don't see how.

Quote
I know I can't ABX a good resampler.


I suspect you can't ABX the mediocre ones, either. It takes a pretty poor excuse for a resampler to have audible artifacts.

Quote
I also know the software end, and it makes the most sense to me to send exactly the signal you want reproduced to the DAC, not some oversampled interpolation.


For the most part, that is what happens. The oversampling happens inside the DAC.

Quote
I have real difficulty accepting that a modified signal is preferable to a pristine one anywhere up until the actual DAC.


There is a lot of logical space between pristine and audibly corrupted. Good resampling is just one of those things that you do when you have to, without getting bent out of shape about it.


Quote
Modeling the system in terms of information content and entropy, it would seem like it should be possible to do this without losing any information, but I can't prove that.


I see no reason to get all uptight and unhappy about signal processing, when it serves a purpose. We generally have no clue about what happens when the music was recorded and produced. It either sounds good or it doesn't, and if it sounds good, why worry?  Making music is a lot like making laws and sausages. You don't want to be there when it happens if you are too squeamish.

Resampling and Fidelity

Reply #8
So then I pose the question: Can resampling be done without negatively impacting the signal at all?


Search. We've done this a million times. 

I know I can't ABX a good resampler. I also know the software end, and it makes the most sense to me to send exactly the signal you want reproduced to the DAC, not some oversampled interpolation. Is this logic incorrect?  I have real difficulty accepting that a modified signal is preferable to a pristine one anywhere up until the actual DAC.


Yes, it's incorrect. You're forgetting that what matters is the actual output, not some imaginary notion of "fidelity" or being "pristine". If these things mattered, you would be able to ABX them. You cannot. Therefore ...

More formally, I would say this is a case of converse error. You're essentially taking the notion that good output implies good input and trying to reverse the implication, claiming that good input means good output. That's not really true, because once you disregard the notion that output quality matters, you don't really have any basis to claim something is "good" or "bad". And you certainly don't know that something that looks good to you will look good to your DAC. Or some other DAC. Or any possible DAC that could be made . . .

Resampling and Fidelity

Reply #9
So then I pose the question: Can resampling be done without negatively impacting the signal at all?
Search. We've done this a million times.
"In theory, yes; in practice, no." is the data I collected, which leads me to the conclusion that it's better to send the interface unresampled data. But you say this is incorrect!

You're forgetting that what matters is the actual output, not some imaginary notion of "fidelity" or being "pristine".  If these things mattered, you would be able to ABX them.
So measurable but un-ABX-able differences don't matter? I can't possibly agree with that. If that was the case, there would be no reason to choose lossless over lossy. Just because I can't necessarily hear the difference doesn't mean that the difference doesn't matter.

I am not forgetting about the output. I'm just considering the input specifically, because it's the input that resampling affects.

You're missing the context of the question and the subsequent discussion. Is there any point to resampling above the encoded rate? While there are cases where there might be a point, I'd say that unless you have a reason, the answer is no. Data is lost, i.e. fidelity is lost, i.e. resampling is lossy, i.e. don't do it unless there's a reason.

The OP asked specifically for my reasoning. I tried to do exactly that. I am not an EE, I am a computer scientist, so I simply stated my assumption that he was right. I was not attempting to be snarky. Assumption is a valid formal technique; I use it all the time in mathematical reasoning. I was giving an explanation of the parts that I know, i.e. everything up until the DAC starts doing its magic.

Resampling and Fidelity

Reply #10
You're missing the context of the question and the subsequent discussion. Is there any point to resampling above the encoded rate? While there are cases where there might be a point, I'd say that unless you have a reason, the answer is no. Data is lost, i.e. fidelity is lost, i.e. resampling is lossy, i.e. don't do it unless there's a reason.

For all practical reasons, as well as for elegance, I'd say "keep your source rate as long as possible; trust the oversampling-DAC designer to do a good sample-rate conversion". I.e. the same advice that you are giving. The only disagreement between us seems to be in our confidence in one fidelity vs. the other.

I can produce a purely hypothetical example to prove my point:
-Given a 44.1 kHz CD signal
-Access to some software-based high-quality resampler
-An oversampling DAC of otherwise high quality that uses linear interpolation to produce a 128x rate
For this example, the resampling carried out in the DAC is ridiculously bad. If software produces a clean, high-rate intermediate signal, the end result will not suffer as much.
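To put rough numbers on the hypothetical (a sketch: I use 8x rather than 128x to keep it small, and scipy's resample_poly stands in for the "high-quality software resampler"):

Code:
import numpy as np
from scipy.signal import resample_poly

fs, f, L = 44100, 10000, 8
n = np.arange(4096)
x = np.sin(2 * np.pi * f * n / fs)    # 10 kHz tone at 44.1 kHz

t_hi = np.arange(len(x) * L) / (fs * L)
ideal = np.sin(2 * np.pi * f * t_hi)  # the true band-limited signal

linear = np.interp(t_hi, n / fs, x)   # the ridiculously bad DAC
decent = resample_poly(x, L, 1)       # a good software resampler

trim = slice(200, -200)               # ignore edge transients
print("linear interp max err:", np.max(np.abs(linear[trim] - ideal[trim])))
print("polyphase max err:    ", np.max(np.abs(decent[trim] - ideal[trim])))
# Linear interpolation misses by a large margin near Nyquist; the
# polyphase result is orders of magnitude closer to the ideal.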

Resampling 44.1->96 kHz before transmitting across a uni-directional communications channel (S/PDIF), when noise/limited bandwidth is an issue, followed by some bad DAC that rejects jitter poorly, may lead to some interesting philosophical/measurable differences.

-k

Resampling and Fidelity

Reply #11
So measurable but un-ABX-able differences don't matter? I can't possibly agree with that. If that was the case, there would be no reason to choose lossless over lossy. Just because I can't necessarily hear the difference doesn't mean that the difference doesn't matter.



I'm not sure you really mean what you appear to be saying here. If differences are inaudible  - and so un-ABX-able - how can they possibly matter? Are you unhappy with whatever amplification you're using because it has inaudible, un-ABX-able distortion levels? Would you buy a new amp because it has better distortion figures even though you can't hear a difference?

Many people are perfectly happy with lossy formats purely because they can't ABX the result - there are many other reasons to choose lossless, but ABXability isn't necessarily the most important.

Or are you saying that even though you can't hear a difference, somebody else may be able to, so it still matters? If so, I'd agree up to a point. But once you pass the threshold of human capability, any differences, theoretical or real, can't matter.


Resampling and Fidelity

Reply #12
So measurable but un-ABX-able differences don't matter? I can't possibly agree with that. If that was the case, there would be no reason to choose lossless over lossy. Just because I can't necessarily hear the difference doesn't mean that the difference doesn't matter.
I wholeheartedly agree. Unfortunately there doesn't seem to be consensus amongst HA moderators about this subject. IMO it's perfectly possible to discuss measurable differences without touching the subject of audibility.
Re: resampling, I think you're too focused on signal integrity in the digital data domain. Conversion to the analogue domain (DAC) is never lossless. Your reference should be the original analogue signal and not your (lossily sampled and quantized) data. If modification of the data results in a better reconstruction of the original signal, I can see no reason to reject it.

Resampling and Fidelity

Reply #13
So measurable but un-ABX-able differences don't matter?


They certainly don't matter if adding an un-ABXable difference in one part of the signal chain breaks a technical logjam and enables the whole chain to suddenly become un-ABXable where it formerly wasn't.

They certainly don't matter if the un-ABXable difference allows the whole system to meet some other important goal like cost or size.

They certainly don't matter if the whole chain is also un-ABXable.

Quote
I can't possibly agree with that. If that was the case, there would be no reason to choose lossless over lossy.


I produce distributed media, and I routinely choose lossy over lossless to meet other important goals, like producing files for long events that would not be downloaded otherwise because they would be impossibly large and take longer to download than most people's patience allows. I produce video media, and there is no practical lossless alternative for video at all.

Quote
Just because I can't necessarily hear the difference doesn't mean that the difference doesn't matter.


I never said that. I agree with the idea that just because a difference is inaudible all by itself doesn't mean that in a real-world situation it doesn't create differences that stack up and eventually become audible.

Quote
I am not forgetting about the output. I'm just considering the input specifically, because it's the input that resampling affects.


Do you know how many times the media you listen to is resampled during production? For example, a lot of production is done using mixtures of analog and digital equipment. Because the digital equipment has to have analog inputs and outputs to interface with the analog-only equipment, it has resampling ADCs at its inputs and resampling DACs at its outputs. There may be other processors in the production chain that have digital cores but also have analog inputs and outputs. There may be situations where equipment is digital, but it's interfaced with analog cables because that is the custom or the legacy.

One of the ironies of life is that most basement and bedroom studios are based on DAWs, which digitize the outputs of mic preamps and keep the music in the digital domain right through burning the CD. The megabuck studio across town may be one of the mixtures of analog and digital equipment that I'm talking about...  Guess which one resamples less?  ;-)

Quote
You're missing the context of the question and the subsequent discussion. Is there any point to resampling above the encoded rate?


It's done in ADCs and DACs as a matter of course. It may make sense to upsample encoded data to apply EFX in the digital domain, because strong nonlinearities applied in the digital domain (e.g. EFX) can create imaging which has no analog equivalent and may not be what the artistic people wanted to hear. When music passes through the land of video it is likely to be sampled at 48 kHz, but if you want to make a CD it *must* be sampled at 44.1 kHz. There are three real-world examples for you.
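The video-to-CD case is the textbook rational conversion, as a sketch (scipy assumed; 44100/48000 reduces to 147/160, so one polyphase stage does the whole job):

Code:
import numpy as np
from scipy.signal import resample_poly

fs_in, fs_out = 48000, 44100   # video rate -> CD rate
# Upsample by 147, lowpass, downsample by 160: a single rational
# polyphase stage handles the conversion.
audio_48k = np.sin(2 * np.pi * 1000 * np.arange(fs_in) / fs_in)
audio_cd = resample_poly(audio_48k, 147, 160)
print(len(audio_cd))           # 44100 samples: one second at the CD rate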

Quote
While there are cases where there might be a point, I'd say that unless you have a reason, the answer is no. Data is lost, i.e. fidelity is lost,


Let me cry a river of tears for all the poor little data that are lost forever. ;-)

Quote
i.e. resampling is lossy, i.e. don't do it unless there's a reason.


That's just it. The reasons abound. If music was resampled a half dozen times during production, why are you obsessing about one more resampling job in your audio system?

Quote
The OP asked specifically for my reasoning. I tried to do exactly that. I am not an EE, I am a computer scientist, so I simply stated my assumption that he was right. I was not attempting to be snarky. Assumption is a valid formal technique; I use it all the time in mathematical reasoning. I was giving an explanation of the parts that I know, i.e. everything up until the DAC starts doing its magic.


Data is lost when an analog signal goes through any copper wire.  How much do you worry about that?

Resampling and Fidelity

Reply #14
Re: resampling, I think you're too focused on signal integrity in the digital data domain. Conversion to the analogue domain (DAC) is never lossless. Your reference should be the original analogue signal and not your (lossily sampled and quantized) data. If modification of the data results in a better reconstruction of the original signal, I can see no reason to reject it.


Totally agree.

Every proper DAC needs to apply a lowpass <22.05 kHz for a 44.1 kHz source signal. Any oversampling DAC will first upsample, then apply the <22.05 kHz lowpass D1 in the digital domain, then a cheap analog lowpass A to reconstruct the upsampled signal.

The quality of D1 is constrained by cost and allowed latency. You have plenty of both in a pure playback environment on a modern computer. For as little as 0.5% of your CPU time and several ms of delay you can apply digital filtering D2 of insanely high quality. Yes, you will have digital filtering applied twice, D2 and D1, but D2 will be practically flawless, and D1 can operate at a much higher cutoff, where it causes far fewer artifacts within the original band (0-22.05 kHz). So yes: not audibly, but measurably, high-quality software upsampling can be superior. The exceptions are that you shouldn't go over the sample rate the DAC may use internally (e.g. ~110 kHz for a DAC1), and you should stay within the comfortable capacity of the clock and transmitter/receiver pair. As long as the latter is within bounds, high-quality software upsampling should provide measurably superior results with almost no exceptions.
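To give a feel for what such a D2 can look like in software (a numpy/scipy sketch; the tap count and cutoff are illustrative choices of mine, not anything a particular DAC or player does):

Code:
import numpy as np
from scipy.signal import firwin, upfirdn

fs, L = 44100, 4                 # upsample 44.1 kHz -> 176.4 kHz
# A long Kaiser-windowed FIR: ~21 kHz cutoff, a transition band of
# roughly 200 Hz and >130 dB stopband rejection -- far longer than a
# latency/cost-constrained on-chip filter, yet cheap on a modern CPU.
# Scaled by L to preserve passband gain after zero-stuffing.
taps = L * firwin(8191, 21000, window=('kaiser', 14.0), fs=fs * L)
x = np.random.default_rng(1).standard_normal(fs)   # any test signal
y = upfirdn(taps, x, up=L)       # zero-stuff by L, then filter: this is D2
# y carries the band-limited signal at 176.4 kHz; the DAC's own D1 then
# only has to clean up far above the audio band.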

There's only one catch: the expected gains are so small that it's really not worth debating.

Resampling and Fidelity

Reply #15
Mrrrh, you guys really want to argue this. We're all mostly agreeing anyhow. I'll believe that resampling produces a better reconstruction when I see evidence of it, otherwise I'm going to stick with the axiom that less modification is better.

Resampling and Fidelity

Reply #16
Mrrrh, you guys really want to argue this. We're all mostly agreeing anyhow. I'll believe that resampling produces a better reconstruction when I see evidence of it, otherwise I'm going to stick with the axiom that less modification is better.


I don't think anyone ever disagreed that "less is better except for when more is better" is true. It's actually a meaningless tautology, so it can't be wrong.

The underlying point, which I think you keep missing, is that "modification", "fidelity" and "pristine" are not real things. They're just euphemisms for underlying technical problems which do not have simple axiomatic solutions like you seem to want. An obvious example is the oversampling DAC, which digitally modifies the samples (by changing the sampling rate) to produce exponentially more accurate reproduction. If one thinks of resampling as a modification, and modification as bad, then one would conclude the more accurate reproduction is bad. This is nonsense. What you have to look at is the output of the system, the actual sound produced. If a "modification", as you call it, improves the output, then it's good. If it makes it worse, then it's bad. That's how you should think about it.

Resampling and Fidelity

Reply #17
Mrrrh, you guys really want to argue this. We're all mostly agreeing anyhow. I'll believe that resampling produces a better reconstruction when I see evidence of it, otherwise I'm going to stick with the axiom that less modification is better.


Just compare the performance of any really good modern upsampling DAC or ADC to the performance of the non-upsampling converters that went before it.

Just compare the phase and amplitude response of any really good upsampled digital-filter-based converters to that of their analog-filter predecessors.

Resampling and Fidelity

Reply #18
Mrrrh, you guys really want to argue this. We're all mostly agreeing anyhow. I'll believe that resampling produces a better reconstruction when I see evidence of it, otherwise I'm going to stick with the axiom that less modification is better.


Just compare the performance of any really good modern upsampling DAC or ADC to the performance of the non-upsampling converters that went before it.

Just compare the phase and amplitude response of any really good upsampled digital-filter-based converters to that of their analog-filter predecessors.


I'll save you the trouble:

http://rmaa.elektrokrishna.com/Comparisons...20vs%20H300.htm

The Clip is oversampled, the HiFiMan isn't, so the Clip has high frequencies and the HiFiMan doesn't.

Resampling and Fidelity

Reply #19
The Clip is oversampled, the HiFiMan isn't, so the Clip has high frequencies and the HiFiMan doesn't.


Comparing different pieces of hardware isn't really helpful here. This thread is also not about oversampling vs. not oversampling, but about whether resampling before oversampling can improve results.

To demonstrate that, the same DAC should be measured twice, fed with resampled and original data.

I won't spend my time on this, as I already know the outcome. Think about it this way: you can build a digital lowpass of arbitrary quality by increasing its window size. You can't eliminate ringing, but increasing the window width can spread it over time, thus decreasing its amplitude. So you can always throw enough CPU power at a software resampler to surpass a DAC's digital lowpass by as wide a margin as you want. At the same time this pushes the cutoff of the DAC's own lowpass filter so high that even a cheap filter won't touch your cleanly filtered passband anymore.
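That scaling is easy to check numerically (a sketch; the filter specs are arbitrary illustrations, not any DAC's actual D1):

Code:
import numpy as np
from scipy.signal import firwin, freqz

fs = 88200   # working at a 2x oversampled rate for the demo
# Longer filter plus stronger window: ever deeper stopband, while the
# ringing, though spread over more samples, drops in amplitude.
for numtaps, beta in ((255, 6.0), (1023, 10.0), (4095, 14.0)):
    h = firwin(numtaps, 20000, window=('kaiser', beta), fs=fs)
    w, H = freqz(h, worN=16384, fs=fs)
    worst = 20 * np.log10(np.abs(H[w >= 22050]).max())
    print(f"{numtaps:5d} taps: worst leakage above 22.05 kHz = "
          f"{worst:6.1f} dB")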

Resampling and Fidelity

Reply #20
The Clip is oversampled, the HiFiMan isn't, so the Clip has high frequencies and the HiFiMan doesn't.


Comparing different pieces of hardware isn't really helpful here.


It is obviously not helpful to you. If we make the reasonable presumption that the relatively sloppy frequency response of the HiFiMan is due to its avoidance of oversampling, then my point is exactly made.

Quote
To demonstrate that, the same DAC should be measured twice, fed with resampled and original data.


That would be the ideal, but the HiFiMan-versus-Sansa test data tells a story that is consistent with what we know a priori about the performance of analog versus digital reconstruction filters.

Your comments about the performance of various filters that are commonly being implemented with digital filters are irrelevant, because they are not the only kinds of filters that can be implemented with digital filters. They are simply the filters that people are trying to implement right now.

AFAIK, the amplitude and phase responses of the GIC (Generalized Impedance Converter) filters that the PCM1704 spec sheet talks about could be implemented with digital filters, but doing the inverse could easily be impractical. Active analog filters need precision parts and can involve so many stages that dynamic range and distortion performance suffer badly.

You're the one that is making the exceptional claim, namely that a non-oversampled DAC can have frequency and phase response that is competitive with the best oversampled DACs. Therefore it is up to you to provide evidence that supports your claim. 

Interestingly enough, the PCM1704 does have a mode of operation that is compatible with digital filters.

The HiFiMan RightMark tests look like pretty good evidence to me, but if you want to trash them, well then fine: come up with other evidence. Otherwise, you are sitting there with a questionable argument and no evidence.


Resampling and Fidelity

Reply #21
You're the one that is making the exceptional claim, namely that a non-oversampled DAC can have frequency and phase response that is competitive with the best oversampled DACs. Therefore it is up to you to provide evidence that supports your claim.


???

Please provide a quote where I have claimed anything remotely close to that. Let alone anyone else in this thread.

 

Resampling and Fidelity

Reply #22
Mrrrh, you guys really want to argue this. We're all mostly agreeing anyhow. I'll believe that resampling produces a better reconstruction when I see evidence of it, otherwise I'm going to stick with the axiom that less modification is better.


I don't think anyone ever disagreed that "less is better except for when more is better" is true. It's actually a meaningless tautology, so it can't be wrong.
Way to burn that straw man down. Your "meaningless tautology" does not accurately represent what I said.

The underlying point, which I think you keep missing, is that "modification", "fidelity" and "pristine" are not real things.
You are incorrect here, unless you take "real" to have some bizarre meaning that makes no sense. If I add 1 to 1, I get 2. 2 has been modified from 1 and is no longer 1. You can't say that there's no difference between 1 and 2.

Let's suppose the DAC is a black box. The purpose of the DAC is to produce an analog signal from a digital signal such that the two are as equivalent as possible. I don't care how that is achieved. I'm going to assume they're doing it right. Sure, they can use tricks like oversampling and whatnot here: they are dealing with the translation from the digital domain to the analog. I am not talking about this process. I am talking about the digital domain exclusively.

On the input side, it simply makes the most sense to send exactly the information that you want to read from the output to the DAC, not some alteration. If some alteration improved the behaviour of the DAC, I'd hope that the people engineering the DAC would have sense enough to do that for me. From a systems-engineering standpoint, they're expecting me to trust them enough to simply give them the data I want converted to analog. Second-guessing them with regards to resampling seems completely foolhardy. I have not seen a single point made in this thread that would make me re-evaluate this position.

Resampling and Fidelity

Reply #23
Is this a good time to mention that there are plenty of soundcards that produce audibly better results when certain audio clips are resampled at the input, or was this exception already made?  I'm not sure I'm following this discussion all that well.

Resampling and Fidelity

Reply #24
Let's suppose the DAC is a black box. The purpose of the DAC is to produce an analog signal from a digital signal such that the two are as equivalent as possible. I don't care how that is achieved. I'm going to assume they're doing it right. Sure, they can use tricks like oversampling and whatnot here: they are dealing with the translation from the digital domain to the analog. I am not talking about this process. I am talking about the digital domain exclusively.

On the input side, it simply makes the most sense to send exactly the information that you want to read from the output to the DAC, not some alteration.

Assuming this "black box" of a DAC does a decent job, yes, that makes sense. FYI: This does not apply to my old SoundBlaster Live soundcard. And if I recall correctly, the AC'97 chips were pretty bad as well for anything but 48 kHz. Quoting Wikipedia:
Quote
The DSP had an internal fixed sample rate of 48 kHz, a standard AC'97 clock, meaning that the EMU10K1 always captured external audio sources at 48 kHz, then performed a sample-rate conversion on the 48 kHz waveform to output the requested target rate (such as 44.1 kHz or 32 kHz). This rate-conversion step introduced IM distortion into the downsampled output. The SB Live! had great difficulty resampling audio-CD source material (44.1 kHz) without introducing audible distortion. Creative addressed this concern by recommending that audio recording be performed exclusively at 48 kHz, and that third-party software handle the desired sample-rate conversion, to avoid using the EMU10K1's sample-rate conversion.
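For the curious, that failure mode is easy to reproduce in software (a crude linear-interpolation SRC standing in for the bad hardware resampler; this illustrates the class of problem, not the EMU10K1's actual algorithm):

Code:
import numpy as np

fs_in, fs_out = 44100, 48000
t_in = np.arange(fs_in) / fs_in
x = np.sin(2 * np.pi * 10000 * t_in)   # clean 10 kHz tone

t_out = np.arange(fs_out) / fs_out
y = np.interp(t_out, t_in, x)          # naive linear-interp SRC

spec = np.abs(np.fft.rfft(y * np.hanning(len(y))))
spec_db = 20 * np.log10(spec / spec.max() + 1e-12)
freqs = np.fft.rfftfreq(len(y), 1 / fs_out)

# The tone's image at 44.1 - 10 = 34.1 kHz leaks through linear
# interpolation's weak anti-imaging response and aliases down to
# 48 - 34.1 = 13.9 kHz, tens of dB above a good resampler's floor.
band = (freqs > 13500) & (freqs < 14500)
print("alias level near 13.9 kHz:", spec_db[band].max(), "dBc")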


If some alteration improved the behaviour of the DAC, I'd hope that the people engineering the DAC would have sense enough to do that for me. From a systems-engineering standpoint, they're expecting me to trust them enough to simply give them the data I want converted to analog. Second-guessing them with regards to resampling seems completely foolhardy.


You make "alteration" sound soooo negative. There's virtually no loss using a decent resmapling algorithm. But yes, there's also no advantage assuming your black box does a good job.