Nice explanation of DAC chips and their behavior
Your point is correct Jed, but it doesn't apply to the original poster's question.
If I understand correctly, he is considering purchasing a DAC that accepts a maximum of 16-44 because he'll only be listening to 16-44 content.
My advice was that a more modern, HD DAC would provide better conversion of 16-44 content, and yes, I agree that the software upsampling in the server is usually superior to upsampling in the DAC.

Yep, it applies to the original poster's question.

Yes, Paul, but as I refrained from pointing out in the previous thread where you intervened, suggesting that I was merely expressing opinions, you evidently have no understanding of the meaning of provenance where evidence is concerned.
The last substantive study of this subject was by Meyer and Moran, as I am sure you are aware, and it showed no evidence that a panel of trained listeners could distinguish between original 'hi-res' files and those passed through a 16/44.1k throttle.
If you have something more to offer on this subject than hearsay or induction, I will be glad to examine your evidence, but thus far you have shown me nothing.
wakibaki, intentionally or not you're moving this thread to a place of subjectivity and controversy, and I would much prefer to keep the responses to the OP on solid math and engineering ground.
While trying to be thorough in my research and careful in what I say, I am no expert, so if there is anything in the following that is inaccurate I will appreciate corrections.
Let's go back to the dawn of the CD. The very first CD players relied on steep analog "brick wall" filters to turn the digits back into music. There were audible problems with this, including imaging artifacts (often loosely called aliasing) and high levels of harmonic distortion. Very soon, even before the first separate DACs, what is variously called "upsampling," "oversampling," or more properly "interpolation" was used to avoid these problems. "8x oversampling" quickly became the industry standard, meaning that the DAC chip in the CD player (or in separate DACs, once they started being made) first interpolated the 44.1kHz incoming sample rate to 352.8kHz before doing the digital to analog conversion. (This is why the discussion is relevant to the OP's question, since his DAC is overwhelmingly likely to be doing this internally.) Nearly all DAC chips do this internal oversampling of 44.1 material in three "rounds" of doubling - first to 88.2, then 176.4, and finally 352.8.
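For anyone curious what those three "rounds" of doubling look like in practice, here is a rough Python sketch. The filter is a hypothetical short windowed-sinc, purely for illustration - real DAC chips use their own proprietary filter designs, and the tap count here is arbitrary:

```python
# Sketch of the "three rounds of doubling" a typical DAC chip performs on
# 44.1kHz input. Each round inserts a zero between samples (doubling the
# rate) and then lowpass-filters to remove the resulting spectral image.
import numpy as np

def halfband_lowpass(num_taps=63):
    # Windowed-sinc lowpass with cutoff at 1/4 of the new (doubled) rate.
    n = np.arange(num_taps) - (num_taps - 1) / 2
    h = np.sinc(n / 2) * np.hamming(num_taps)
    return h / h.sum()          # normalize to unity DC gain

def double_rate(x):
    # Zero-stuff: x0, 0, x1, 0, ... then filter; x2 restores amplitude.
    up = np.zeros(2 * len(x))
    up[::2] = x
    return 2 * np.convolve(up, halfband_lowpass(), mode="same")

rate = 44100
signal = np.sin(2 * np.pi * 1000 * np.arange(441) / 44100)  # 1kHz tone
for _ in range(3):              # 44.1 -> 88.2 -> 176.4 -> 352.8 kHz
    signal = double_rate(signal)
    rate *= 2
print(rate)                     # 352800
```

Each pass leaves the audible content alone (the 1kHz tone comes out at the same amplitude) while the sample count doubles - which is exactly why the quality of that lowpass filter matters so much.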
Now let's talk about the two "sides" of the resolution label. When you see something labeled as 16/44.1 or 24/96, the right side has to do with the sample rate. The left side is what's called the "word length." It's how many "bits" are available to denote the loudness of the signal. So 24/96 material coming from a studio theoretically has 8 more "bits" of potential loudness variation (i.e., dynamic range) available. Now there's controversy about how much of a difference these 8 bits can actually make, and I don't propose to get involved in that controversy here because it is irrelevant to the original question. I bring it up only because wakibaki, through either confusion or an imprecise use of terms, brought up the notion of "zero padding." As it is usually used regarding DACs, "zero padding" means simply appending zeros to the word length for purposes of processing 16 or 24 bit material in a 24 or 32 bit internal process. It makes no earthly difference whatever to the sound. It is the equivalent of writing 1 as 1.00000000 - no difference in quantity at all. That sort of process on the left side of the resolution figure is *not* what is interesting or relevant to the OP's question.
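To make the "1 versus 1.00000000" point concrete, here is a trivial sketch showing that zero-padding the word length changes nothing. The sample value is arbitrary:

```python
# "Zero padding" the word length is value-neutral: placing a 16-bit sample
# in a 24-bit word (appending eight zero bits) scales the integer code and
# its full-scale reference by the same factor of 256, so the represented
# fraction of full scale - the actual loudness - is identical.
sample_16 = 12345               # an arbitrary 16-bit PCM sample
sample_24 = sample_16 << 8      # same sample, zero-padded to 24 bits

full_scale_16 = 2 ** 15         # 16-bit signed full scale
full_scale_24 = 2 ** 23         # 24-bit signed full scale

print(sample_16 / full_scale_16)    # fraction of full scale, 16-bit
print(sample_24 / full_scale_24)    # identical fraction, 24-bit
```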
So now let's return to talking about the interesting side of the resolution figure, the right side, which is the sample rate. We were discussing the "8x oversampling" industry standard that was put in place even before separate DACs. What's so hard about multiplying something 8 times in a chip? Well, for one thing, it isn't multiplication. Interpolation, to use the more proper term, is done by means of digital filters, and the mathematics that describes their behavior is the Fourier transform. (The interpolation does use zeros, though not at all in the same unimportant and completely innocuous way as zero padding of the word length.) The thing about Fourier transforms is that they have what are called "conjugate variables." As one of a pair of conjugate variables is optimized or becomes more exactly defined, the other becomes less optimized or less exactly defined. This is sheer mathematics - it's the way Fourier transforms work. In the case of the filters used to do interpolation, time domain properties of the filter (e.g., impulse response) and frequency domain properties (e.g., how good the filter is at removing frequencies above the cutoff point) are conjugate variables. It is thus mathematically impossible to optimize the filters used in virtually all DACs for both time domain and frequency domain response. This necessarily means all interpolation filters are the designer's idea of a good compromise.
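You can see this compromise numerically with a quick sketch. The two tap counts below are arbitrary illustration values; the point is the tradeoff, not the specific numbers:

```python
# The time/frequency compromise in interpolation filters: a short
# windowed-sinc has a compact impulse response but shallow stopband
# rejection, while a long one suppresses images far better at the cost
# of a much longer (more "ringy") impulse response.
import numpy as np

def lowpass(num_taps, cutoff=0.25):
    # Windowed-sinc lowpass, cutoff as a fraction of the sample rate.
    n = np.arange(num_taps) - (num_taps - 1) / 2
    h = 2 * cutoff * np.sinc(2 * cutoff * n) * np.blackman(num_taps)
    return h / h.sum()

def stopband_floor_db(h, cutoff=0.25):
    # Worst-case leakage above 1.2x the cutoff frequency, in dB.
    H = np.abs(np.fft.rfft(h, 8192))
    f = np.fft.rfftfreq(8192)
    return 20 * np.log10(H[f > 1.2 * cutoff].max())

short_filter = lowpass(15)      # compact in time, leaky in frequency
long_filter = lowpass(255)      # sharp in frequency, spread out in time
print(stopband_floor_db(short_filter))   # modest attenuation
print(stopband_floor_db(long_filter))    # far deeper stopband
```

Make the filter longer and the stopband gets deeper, but the impulse response stretches out; shorten it and the reverse happens. No tap count escapes the tradeoff - it only moves the compromise point.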
Most people have no idea how much variability there is between interpolation filters used in different DAC chips. To give you some idea of this, here are impulse response tests of the same software manufacturer's filter using two different settings:
You wouldn't expect to find this level of relative variation in impulse response between two speakers, let alone in your electronics. (These test graphs are from SRC Comparisons, a very nice source of information about the performance of filtering software on various tests. Specifically, they are from the "steep no alias" and "intermediate phase" settings, respectively, of iZotope 64-bit sample rate conversion software, bundled with the Audirvana Plus music player and acknowledged to be among the best such software available.)
Now these graphs are from filtering software, which I've mentioned is considered to do a better job than the filters within DAC chips. There are two reasons for this. One is sheer computing power. Though the chips in the very first external DACs were roughly competitive in computing power with the CPUs in consumer PCs of the time, the situation is now very much reversed. The chip that runs your computer has vastly more computing power to throw at this filtering problem than any DAC chip. More computing power means an ability to run more sophisticated filtering algorithms that can do a better job of the inevitable compromise between time domain and frequency domain performance. The second is that the market for filtering software is very competitive in terms of performance, whereas the chips used in DACs are very much a commodity product, perhaps $5 apiece, and external standalone DACs represent a negligible slice of the market for these chips. One other consideration is that filtering software can do 8x oversampling with a single application of the filtering algorithm, in contrast to the three rounds of doubling used in nearly all internal DAC chips.
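That last point - one application of the filter instead of three cascaded rounds - looks like this in sketch form. Again, the filter here is a generic windowed-sinc with an arbitrary tap count, not anyone's actual product:

```python
# One-shot 8x interpolation, as filtering software can do it: a single
# zero-stuff by 8 followed by a single lowpass at the old Nyquist
# frequency, instead of three cascaded doubling stages each with its
# own filter and its own compromises.
import numpy as np

def upsample_8x(x, num_taps=255):
    up = np.zeros(8 * len(x))
    up[::8] = x                 # insert 7 zeros between samples
    n = np.arange(num_taps) - (num_taps - 1) / 2
    # Cutoff at 1/16 of the new rate = the old Nyquist frequency.
    h = (1 / 8) * np.sinc(n / 8) * np.blackman(num_taps)
    h = h / h.sum()             # unity DC gain
    return 8 * np.convolve(up, h, mode="same")

y = upsample_8x(np.sin(2 * np.pi * np.arange(100) / 100))
print(len(y))                   # 800 samples out for 100 in
```

One filter designed for the full 8x ratio only has to make the time/frequency compromise once, rather than stacking up the side effects of three separate half-band stages.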
We therefore have some very simple, very practical choices in deciding where we will do our oversampling. Let's say our DAC is like most these days, and accepts a 176.4/192kHz input. If it's available, we could purchase a 176.4/192kHz hi-res file, meaning any oversampling would be done at the recording studio on equipment potentially more sophisticated than PC software, and certainly more so than the filtering in the internal DAC chip. Or we could buy the CD and oversample in PC software to 176.4 or 192kHz resolution. The DAC chip would then use one round of its own oversampling filters to raise the sample rate to 352.8 or 384kHz. So we would have substituted either the studio's or the PC software's oversampling filters for two rounds of the filter in the DAC chip. If you have a DAC capable of accepting 352.8 or 384kHz sample rates at input, then you can avoid the internal DAC chip's interpolation filter entirely.
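The arithmetic of those choices can be summed up in a few lines. This little helper is hypothetical, assuming a chip whose internal target rate is 352.8kHz for the 44.1k family:

```python
# How many internal doubling rounds a DAC chip with a 352.8kHz target
# still has to apply, given the sample rate you feed it.
def doubling_rounds_left(input_rate, chip_rate=352800):
    rounds = 0
    while input_rate < chip_rate:
        input_rate *= 2
        rounds += 1
    return rounds

print(doubling_rounds_left(44100))    # 3 - CD fed straight in
print(doubling_rounds_left(176400))   # 1 - after software/studio 4x
print(doubling_rounds_left(352800))   # 0 - chip's filter bypassed
```

Every round you take away from the chip is a round handled instead by the studio's equipment or your PC's software, which is the whole point of the exercise.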
The way this relates to the original poster's question is that he asked whether there is a reason to have a DAC that accepts higher than 44.1kHz sample rates if all he is going to do is play CDs or files from CDs. The answer is yes, because:
1) Your DAC is internally oversampling to 352.8 or 384kHz.
2) This oversampling is not a process capable of mathematical perfection. It involves necessary compromises between time domain and frequency domain performance. These compromises can be done better in software or at the studio than in the $5 commodity chip in your DAC.
3) A newer DAC with available 176.4/192 or 352.8/384kHz input allows you to avoid some or all of the internal oversampling done by your DAC chip.
(There's still the sigma-delta modulator that has been in nearly all DAC chips for a long time, though not quite as long as 8x oversampling has been around; but this comment is way past long enough already.)