fdreed

  • Posts

    7
  • Joined

  • Last visited

  • Member Title
    Newbie

  1. Apparently Ted Smith does not track the source clock at all, nor use ASRC: no edges are ever examined or timed. It is some sort of high-frequency input sampling with pattern matching and data extraction, to a buffer I would presume. Don't forget I2S... opticalRendu to I2S, that's the ticket! You don't even need a clock (the I2S clock signals are ignored in the DS DAC).
  2. According to the engineer of the DS DAC: there are no PLLs, FLLs, etc. that track any input, so jitter doesn't transfer from the inputs to the output clocking. Yes, it is extremely difficult to control all the variables when comparing inputs; I am unaware of any renderer that has all the outputs we are discussing. I used a TDLInk optical ethernet segment, an ultraRendu, an ultraDigital, and an Aries G1 to discern my preference (I2S > AES = Coaxial > Toslink > USB) in my particular system, and I do understand it is just one man's opinion.
  3. I generally try not to discuss specific brands, as the original intent of a post so often spirals out of control after that... however, I'm currently using the PS Audio DirectStream DAC. An interesting and probably little-known feature of this DAC is the ability to send it test files that the DAC confirms were received bit perfect. This feature is a great help when setting up different inputs and testing different cables, control software (Roon), etc.
  4. @barrows: Thank you for taking the time to respond; that is some great information comparing asynchronous USB to synchronous SPDIF. If, however, we focus on only those DACs that treat all inputs asynchronously, I think we agree that having the master clock in the DAC itself is overall quite beneficial. If we can measure that we get bit-perfect data to the I2S bus through all inputs, and are using the DAC as the master clock on that data, then the only difference in SQ between inputs must be due to noise: either noise presented to the DAC through the input cable, or noise generated by the individual input circuits on the way to I2S. The important thing in this situation is to minimize the noise coming into the DAC, and then choose the input that adds the least as the signal is processed on the way to the bus. To move from the philosophical to the practical: when I render an optical ethernet transmission, I hear a difference in SQ depending on the input used on my particular DAC: I2S > AES = Coaxial > Toslink > USB. This is all with three-figure ($$$) cables, not four-figure ($,$$$) or more expensive cables. It could be argued that this particular USB input is poorly engineered, but it's the one I have. So I have no particular axe to grind against USB audio (though I may have over-generalized my own situation); it is just that, in my particular case, I would prefer an optical renderer with any output other than USB, with I2S as my best hope.
  5. @barrows I agree that the master clock should be in the DAC. If we limit the discussion to DACs that internally re-clock the signal from every input, rather than referencing an external clock, the inputs tend to equalize with respect to data transfer and jitter, and mainly differ in the nature and amount of noise transmitted. The USB cable is demonstrably the worst in this regard, hence the frequent pairing of a Rendu product with a USB cable that costs several multiples of the price of the Rendu. Said differently, you can achieve similar (arguably better) SQ with a much less expensive cable to any other input. My own tests have reinforced this opinion. The ultraDigital may indeed be an excellent and affordable product, but it still requires an expensive USB cable and an expensive LPS to perform optimally; there goes the affordable. I think it stands to reason that if you can eliminate one conversion, one power supply, and one cable, you will generate less noise, especially if it is downstream of the optical "filter". We are now at a point where bit-perfect data transfer is essentially a given. As more DACs re-clock the bit-perfect data themselves, the overall contribution of jitter to SQ should become less relevant to how we configure our systems, i.e. jitter becomes DAC dependent. That leaves controlling noise (EMI, RFI, ground loops, etc.) as the final battleground for the end user. Breaking the electrical ethernet connection with an optical segment seems like an excellent idea in this regard. Putting additional data conversions, power supplies, and cables past this "break" seems like going backwards.
  6. There is at least one other USB-to-I2S solution currently in the marketplace that does use USB power, but I don't see sending power down the same cable as the signal as an improvement, even though it does save the price of one high-performance LPS. In addition, this approach still requires converting ethernet to USB first, which to me seems an extra, unneeded, noise-producing step, plus the added expense of yet another "hi-performance" USB cable. Is it that difficult to convert fiber-optic ethernet directly to, say, I2S in one box (or is it optical -> electrical -> I2S)? I know there are products that do ethernet -> I2S, but they all require separate optical conversion. Is it the unavailability of an off-the-shelf conversion chip?
  7. I haven't seen it mentioned in a while, so I would like to reiterate a point others have made: I would love to be able to use an opticalRendu and would purchase one instantly if it had ANY output other than USB. It would be ideal if the USB circuitry were eliminated in favor of this other output. And no, I don't want to add another box and power supply downstream (like the ultraDigital). I know you are very busy fulfilling existing demand for the current version. I am just suggesting that you could make an additional killing with the non-USB version that has been suggested. Thank you.
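To make the idea in post 1 concrete: one well-known way to recover a bitstream without tracking edges or using a PLL is to sample the input at many times the bit rate with a free-running local clock and take a majority vote over each bit cell. I have no knowledge of how the DirectStream actually implements its input stage; this is purely an illustrative Python sketch of the general technique, with made-up data.

```python
def recover_bits(samples, oversample=16):
    """Recover a bitstream from a heavily oversampled input.

    Each bit cell is `oversample` consecutive samples; a majority
    vote inside the cell decides the bit. No edge is ever timed and
    no clock is recovered from the signal itself; only a fixed
    high-rate local sampling clock is assumed (and, for simplicity
    here, a known cell alignment).
    """
    bits = []
    for i in range(0, len(samples) - oversample + 1, oversample):
        cell = samples[i:i + oversample]
        bits.append(1 if 2 * sum(cell) >= len(cell) else 0)
    return bits

# Hypothetical test data: 5 bits sampled at 16x the bit rate,
# with two isolated samples corrupted by noise.
data = [1, 0, 1, 1, 0]
samples = [b for b in data for _ in range(16)]
samples[3] ^= 1    # noise hit inside the first bit cell
samples[70] ^= 1   # noise hit inside the fifth bit cell

print(recover_bits(samples))  # -> [1, 0, 1, 1, 0]
```

The majority vote absorbs the corrupted samples, which is why such a scheme can tolerate input jitter and noise without ever looking at a transition.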
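The bit-perfect test-file feature mentioned in post 3 boils down to verifying that the received stream is identical, bit for bit, to the file that was sent. A minimal sketch of that check in Python, comparing SHA-256 digests (the actual DAC's mechanism is not public to me; the data below is made up):

```python
import hashlib

def is_bit_perfect(sent: bytes, received: bytes) -> bool:
    # A transfer is bit perfect iff every bit matches; comparing
    # cryptographic digests is a cheap, reliable proxy for
    # comparing the full streams.
    return hashlib.sha256(sent).digest() == hashlib.sha256(received).digest()

# Hypothetical stand-ins for a known test file and the stream
# captured at the DAC input.
sent = bytes(range(256)) * 4
received_ok = bytes(sent)
received_bad = bytearray(sent)
received_bad[100] ^= 0x01  # flip a single bit

print(is_bit_perfect(sent, received_ok))          # -> True
print(is_bit_perfect(sent, bytes(received_bad)))  # -> False
```

A single flipped bit changes the digest, which is what makes such a check useful for vetting cables, inputs, and player software.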