dpacella

  1. Anyone know of any state-of-the-art DAC chips that have a digital buffer (RAM) for the input words? I imagine timing errors due to clock skew would drop drastically, since the conversion clock and memory clock are not only on the same clock tree but in the same silicon (drift from temperature variation is the same for both). I suppose I could look at some data sheets, but I'm being lazy this morning. It seems to me the issue of interface jitter would become irrelevant so long as the other flow-control pieces (for instance, between the S/PDIF receiver chip and this DAC with its internal buffer) are engineered correctly for streaming audio applications. Thoughts?
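The buffered-DAC idea above can be sketched in a few lines. This is a toy timing model with assumed numbers (48 kHz frame period, a hypothetical four-frame priming delay), not a model of any real chip: words arrive over a jittery interface into a RAM FIFO, and a clean on-chip clock pulls them out for conversion, so interface jitter only matters for sizing the buffer against underrun.

```python
import random

def conversion_jitter_pp(interface_jitter_ns, n=1000, period_ns=20833, prime=4):
    """Peak-to-peak jitter of the conversion instants, in ns."""
    # Samples arrive over a jittery interface (e.g. S/PDIF or USB)...
    arrivals = [i * period_ns + random.gauss(0, interface_jitter_ns)
                for i in range(n)]
    # ...but conversion instants come only from the clean on-chip clock,
    # delayed a few frames so the RAM buffer is primed first.
    conversions = [(prime + i) * period_ns for i in range(n)]
    # Flow-control sanity check: every word is buffered before it is converted.
    assert all(a <= c for a, c in zip(arrivals, conversions)), "buffer underrun"
    intervals = [b - a for a, b in zip(conversions, conversions[1:])]
    return max(intervals) - min(intervals)

# Even with 500 ns RMS of interface jitter, the conversion intervals are
# perfectly uniform: the buffer decouples them from arrival timing entirely.
print(conversion_jitter_pp(500.0))  # → 0
```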
  2. Cbarrows, I am totally envious. I love that area of the country. I proposed to my wife at the Stanley Hotel near Estes Park... it was spooky awesome. I used to work in Longmont when I was with Maxtor Corporation (we were based out of Shrewsbury, Mass., and would go out for weeks at a time). I was on the read/write design team, optimizing transfer rates in high-end SCSI drives. Regards, Dave
  3. @barrows: wow, you take our OCD-prone hobby to a whole 'nutha level, and it is very interesting that you notice audio changes (for the worse, of course) with ambient RF present. I also care, to a point, about an electrically clean environment for my gear, but I would not bat an eye at efforts to eradicate RF from my house. Whatever the effects, it's something I can happily live with. I would be sad without WiFi! However, I stand by my statement that, from an engineering perspective, canned WiFi radios will cause little, if any, measurable (dare I even go there) disturbance to the conversion timing of the D-to-A process. Any RF signals (from a single transmitter or mixed from various frequencies) riding on PCB traces, speaker cables, interconnects, or voice coils will be so far down in the noise compared to the active signal, not to mention shifted so far above nominal human ear/brain frequency response, that it is a struggle for me to conceive of how it could be an issue whatsoever. Perhaps RF is infiltrating your psychoacoustic experience? There are documented cases of people having physiological reactions to RF fields and waves. No joke.
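The "so far down in the noise" claim can be put in rough numbers. A back-of-envelope check, with both figures assumed purely for illustration (a 2 Vrms line-level signal and tens of microvolts of coupled RF on a cable), shows the kind of margin involved:

```python
import math

signal_vrms = 2.0       # assumed line-level audio signal, 2 Vrms
rf_pickup_vrms = 20e-6  # assumed RF voltage coupled onto a cable, 20 uVrms

# Voltage ratio expressed in decibels: 20 * log10(V1 / V2)
ratio_db = 20 * math.log10(signal_vrms / rf_pickup_vrms)
print(f"RF pickup sits {ratio_db:.0f} dB below the signal")  # → 100 dB
```

With those assumed numbers the pickup is 100 dB down, and that is before considering that the RF energy itself lies far above the audio band.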
  4. This is an excellent website. Thank you (Chris) for putting this together and maintaining it so well! As for PC-based music servers finally being legitimate high-end front ends, I think we are entering a fertile growth period, and async USB seems to be one powerful enabler of this. I also think there is much promise in network-based front ends. Perhaps (near-)future high-end DACs will have only WiFi radios, Ethernet, and USB, with the occasional RCA (!) connector for legacy S/PDIF transports.
  5. Most modern homes have numerous RF transmitters and receivers, and these signals travel through drywall, wood, etc. Therefore, regardless of whether the streaming network audio device in your rig is wireless or copper, your rig is under constant exposure to HF electromagnetic energy. I suppose if one were so inclined, turning off ALL sources of RF in one's home would put to bed any possible image smear resulting from potential RF fratricide. Now, as for devices with built-in WiFi receivers: these have antennas which receive the energy (no harm there), as well as an active local oscillator and other RF LNA front-end components. If the integrated radio receiver were poorly designed, one might be able to measure considerable RF leakage from the active front-end components, but since these devices must be FCC approved, they must NOT radiate appreciable amounts of RF energy. In other words, the manufacturers who design, build, and test these integrated WiFi radios have run tests for EMC compliance. Personally, I have zero concern that using WiFi-equipped network audio devices (Squeezebox, etc.) predisposes the audio data, either pre-DAC or post-DAC, to any distortion. Packet timing, FIFO flow control, DAC clocking, etc. are another story.
  6. Hello... I understand that running the USB interface asynchronously has a major advantage over source-controlled clocking, namely that in async mode the downstream device can precisely control clock skew as it relates to data alignment and D/A conversion. The clock that reads data out of the input buffer ahead of the DAC is the same clock that governs flow control over the USB: a hardware process monitors the buffer and asks for more data, keeping the pipe full as long as audio is streaming. Timing uncertainties are very low, since the skew is well understood and the conversions happen in the right places. In synchronous USB, by contrast, the upstream master (the PC) clocks this process; as I understand it, the problem is that the DAC sample rate and the USB packet timing are then independent, and it is in this clock-domain crossing that conversion timing errors become a factor. Network-based transfer (Squeezebox, Transporter, etc.) seems equally well positioned in the battle against timing uncertainty. In the Squeezebox, for example, there is a 64 MByte RAM monitored by hardware flow control, which ensures that the buffer feeding the DAC is always full (this buffer provides the elasticity between the data rate coming off the server and the data going out for D/A conversion). It seems to me we have here another example of optimal timing: the clock that drives data out of the buffer is the same clock that drives the D/A converter. If the DAC, filters, and analog circuitry were identical between the async USB front end and the network front end, it seems to me we could expect identical audio performance. What do you think?
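The common pattern in both cases, an elastic FIFO whose fill level drives flow control back to the source, can be sketched as follows. This is a minimal toy model with made-up capacity and watermark values, not the actual Squeezebox or any async USB controller logic:

```python
from collections import deque

class ElasticBuffer:
    """FIFO between a bursty source and the steady local conversion clock."""
    def __init__(self, capacity=1024, low_water=256):
        self.fifo = deque()
        self.capacity = capacity
        self.low_water = low_water
        self.underruns = 0

    def need_data(self):
        # Flow-control signal back to the source: "send me more."
        return len(self.fifo) < self.low_water

    def push_burst(self, samples):
        # Source delivers data in bursts; never overfill the buffer.
        room = self.capacity - len(self.fifo)
        self.fifo.extend(list(samples)[:room])

    def pop_sample(self):
        # Called once per tick of the local conversion clock.
        if not self.fifo:
            self.underruns += 1
            return 0  # underrun: output silence
        return self.fifo.popleft()

buf = ElasticBuffer()
buf.push_burst(range(512))          # initial priming from the source
for _ in range(10_000):
    if buf.need_data():
        buf.push_burst(range(128))  # source refills whenever asked
    buf.pop_sample()
print("underruns:", buf.underruns)  # → underruns: 0
```

As long as the source can always answer `need_data()` before the buffer drains, the conversion side never sees the source's timing at all, which is the point made above: with identical DACs and analog stages downstream, the transport becomes timing-irrelevant.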