
Understanding USB



23 minutes ago, Ralf11 said:

"implied" ?? do you mean determined?

 

so is there any jitter or not?

I mean that if the sample rate is fs, then sample number n corresponds to time t = n / fs seconds from the start of the recording. Counting samples is trivial and not subject to jitter.
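For illustration, a minimal sketch of that arithmetic (the sample index and rate here are arbitrary):

    #include <stdio.h>

    int main(void) {
        const double fs = 48000.0; /* sample rate in Hz */
        const long n = 1234567;    /* sample number, simply counted */

        /* Sample n corresponds to t = n / fs seconds from the start
           of the recording; no clock is consulted, so no jitter
           enters into the count. */
        printf("sample %ld -> t = %.6f s\n", n, (double)n / fs);
        return 0;
    }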

 

23 minutes ago, Ralf11 said:

and what about noise & galv. isolation, loop currents?

That's a separate issue. It is absolutely possible to create such problems, as I have demonstrated.

Link to comment
15 minutes ago, unbalanced output said:

Sorry if I oversimplified. The time pace is indeed defined by the master device, whichever side it is placed. This clock dictates the pace of the data stream. Each data package can carry at most 1kb at 8kHz (for XMOS at least).

At 48 kHz sample rate, each data packet carries on average 6 samples. If the host clock is a bit fast, some packets will have 5 samples. If it is a bit slow, some packets will have 7 samples. With 32-bit samples and two channels, this comes out to 48 bytes. At 384 kHz, it's a whopping 384 bytes. Nothing to be concerned about.
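The arithmetic, as a quick sketch (assuming 8000 packets per second, i.e. USB high-speed microframes):

    #include <stdio.h>

    int main(void) {
        const int packet_rate = 8000;   /* microframes per second */
        const int channels = 2;
        const int bytes_per_sample = 4; /* 32-bit samples */
        const int rates[] = { 48000, 384000 };

        for (int i = 0; i < 2; i++) {
            double samples = (double)rates[i] / packet_rate; /* average */
            double bytes = samples * channels * bytes_per_sample;
            printf("%6d Hz: %4.0f samples/packet, %4.0f bytes/packet\n",
                   rates[i], samples, bytes);
        }
        return 0;
    }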

 

15 minutes ago, unbalanced output said:

The data is then broken down into higher frequency/individual samples using the same clock reference! Therefore it doesn't make much of a difference whether the data is sent in packets or not, they're simply broken down further down the line from the buffer - actually the only guarantee is that it doesn't get any better downstream.

That doesn't make any sense.

 

15 minutes ago, unbalanced output said:

Perhaps what you're not considering is the fact that the clock is (typically) not regenerated or checked past the buffer - the data may be used in the correct clock cycle or not, there's no guarantee for that.

The clock is typically downstream of the USB receiver, directly connected to the I2S transmitter and the DAC chip. The USB receiver decides through unspecified means when to request more or fewer samples per frame. One possibility is to trigger this when the FIFO level crosses certain thresholds.
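A sketch of that threshold idea; the watermark values and the fifo_level() helper are hypothetical, not any particular chip's API:

    #define NOMINAL_SAMPLES_PER_FRAME 6 /* 48 kHz at 8000 frames/s */
    #define LOW_WATERMARK 256           /* FIFO fill, in samples */
    #define HIGH_WATERMARK 768

    extern unsigned fifo_level(void);   /* current FIFO fill level */

    /* Decide how many samples to request from the host in the next
       frame, nudging the average rate toward the DAC's local clock. */
    unsigned samples_to_request(void)
    {
        unsigned level = fifo_level();
        if (level < LOW_WATERMARK)
            return NOMINAL_SAMPLES_PER_FRAME + 1; /* ask for more */
        if (level > HIGH_WATERMARK)
            return NOMINAL_SAMPLES_PER_FRAME - 1; /* ask for less */
        return NOMINAL_SAMPLES_PER_FRAME;         /* on track */
    }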

 

15 minutes ago, unbalanced output said:

Agree on the point that latency is not an issue as long as it is constant, however that is not the case in synchronous or adaptive modes. In asynchronous mode it is indeed minimised, however the processing delay is still variable.

You appear to have the different modes confused. Let me clarify:

  • Synchronous: the DAC uses a PLL to recover the clock from the USB (micro)frame arrival times. Nobody uses this any more.
  • Adaptive: the DAC adapts to the rate of the incoming data stream, using a PLL or an ASRC, or simply hopes for the best. Nobody uses this either.
  • Asynchronous: the DAC runs from a local free-running clock and adjusts the requested number of samples per frame to compensate for drift between its local clock and the host clock. Everybody uses this mode.

All three modes of operation use USB isochronous transfers.

Link to comment

People have to break the illusion of 1s and 0s, or they'll never get it. If you accept digital purity dogma, it's simply too counter-intuitive to understand the effects of noise in a playback system.

 

Back during the TOSlink era, no one knew why quartz glass cables sounded better than cheap plastic ones. SPDIF is digital, the bit either arrived or it didn't, so what difference could the conducting material make? Well, we now know the issue with plastic cables is that they didn't have enough bandwidth (plus shortcomings in the SPDIF standard, internal reflections, etc.), which led to jitter issues.

 

USB doesn't lack for bandwidth, and modern DACs have basically standardized a system for high-quality clock regeneration with the proliferation of very high quality VCXOs. Higher-end DACs even use TCXOs. Jitter in the modern era is more or less a solved problem. The issue with USB is a problem that TOSlink was largely immune to -- electrical noise. SPDIF controllers still produce self-noise, but it's less of a concern because SPDIF controllers work at a much lower speed than USB controllers. There is a growing awareness in the industry that DACs MUST have galvanically isolated inputs, and you see a growing number of new products coming out with this feature, but it's still not the norm. Also, galvanic isolation transformers still leak some noise; they attenuate the noise rather than eliminate it.

Link to comment
7 hours ago, beerandmusic said:

I agree with all of this, provided you also agree that the binary file is received with 100% accuracy.

 

Definitely. If data were not received 100% intact, the computer would no longer be reliable. Better to send it to a recycling plant.

Link to comment
4 hours ago, clipper said:

But the accurate timing of the arrival of that data matters when you're playing back music in real time. Accurate timing doesn't matter when you're copying a Microsoft Word file.

 

Accurate timing and clocking is absolutely important when copying a digital data file...but that is all part of the engineered solution.

There is NO difference in transferring a digital file of a spreadsheet or a digital file of music...it is just a binary stream and needs to be transmitted, buffered, and processed to ensure 100% accuracy regardless of what the data file is.  It is just a binary data file until it gets converted by the DAC.

 

Link to comment
18 minutes ago, GUTB said:

People have to break the illusion of 1s and 0s, or they'll never get it. If you accept digital purity dogma, it's simply too counter-intuitive to understand the effects of noise in a playback system.

 

It's not an illusion. It's certainly quite real. What people have to "get" is that other factors affect the conversion process and alter what comes out the analog port.

Link to comment
2 hours ago, unbalanced output said:

 

In many DACs, the timing of this data stream also determines the cycle of the DAC update - the process is greatly improved by using the most accurate clock in the chain as the source (either at the transmitter or at the receiver). If packets are lost or delayed, there may be issues if the data is not buffered and interpolated in the DAC.

 

Ok, so you are saying that the transmission of the digital stream is handled differently between a DAC and a thumb drive.

Why is this so? Why don't DACs buffer and process the digital data the same way as a $5 thumb drive, and ensure 100% accuracy before the D-A processing even begins? Are most DAC engineers idiots? Why wouldn't a DAC be engineered to ensure proper transmission prior to processing if a $5 thumb drive can ensure 100% accuracy? I would think a DAC engineer would have enough intelligence to properly design the circuitry to establish bit-perfect sampling prior to doing anything else.

 

 

Link to comment
1 hour ago, mansr said:

Since latency isn't important for music playback, the buffer size can be set high enough that underruns simply don't occur. 100 ms is usually plenty.

 

It is of course possible for a data packet to be corrupted, e.g. from noise in the cable. If this happens, it is up to the DAC whether to fill in silence, try to interpolate, or simply drop the packet. With proper cables, this is so rare an occurrence that it is of no consequence whatsoever.

 

Finally a real engineer... thank you. This, I believe, is the main point of what I am trying to say, but I just don't know the engineering. It seems to make sense that the buffer is the key, and I would think that in this day and age of DAC engineering, we are way beyond basics, and this should not be an issue.

 

I want to revisit non-music data file transmission. My guess is that if the checksum isn't correct in a data transmission, the processor resends the packet so it is correct. Why can't transmission of a music file do this?

 

Assuming transmission of a music file can't be done in the same manner, what do you suggest is a "proper cable"? I mean, are you going to suggest that one needs to spend $200 on a cable that meets the USB spec???

Link to comment
4 minutes ago, beerandmusic said:

 

Ok, so you are saying that the transmission of the digital stream is handled differently between a DAC and a thumb drive.

Why is this so? Why don't DACs buffer and process the digital data the same way as a $5 thumb drive, and ensure 100% accuracy before the D-A processing even begins?

 

Because you as a user want the music to start playing shortly after you press the play button. And because if music files were copied to the DAC (or streamer) instead of being streamed, you would have no reason to buy reclockers, regenerators, decrapifiers, etc. And because when it all started, data transfer was not as fast as it is today and flash memory was not as cheap.

Link to comment
1 hour ago, mansr said:

 

Reliable transfer of audio data over USB is a non-issue. Apparently some DACs are susceptible to noise somehow coupled over the USB cable. This has nothing to do with the data transfer and its timing.

 

Because USB data arrives in bursts, all DACs, even the old synchronous ones, must have a local buffer.

 

Amen +1

 

Do you agree that even a "noisy PC" will not be an issue provided you are using a decent DAC (e.g. Gungnir w/ USB Gen 5 & AudioQuest Forest cable)?

 

 

Link to comment
15 minutes ago, beerandmusic said:

 

Ok, so you are saying that the transmission of the digital stream is handled differently between a DAC and a thumb drive.

Why is this so? Why don't DACs buffer ...

DACs do buffer, of course. But, under normal conditions, they start converting before the data transfer terminates and they do not store whole files or file collections (albums, playlists) in a local memory. They are designed to consume a stream, not to store files for later conversion.  

Link to comment
10 minutes ago, nbpf said:

Because you as a user want the music to start playing shortly after you press the play button. And because if music files were copied to the DAC (or streamer) instead of being streamed, you would have no reason to buy reclocker, regenerators, decrapifiers, etc. And because when it all started file data transfer was not as fast as today and flash memory was not so cheap.

Checksums, I am sure, are done in nano- or at least microseconds... I can certainly wait a few microseconds... as mansr stated, buffering has to exist... it is part of the design... when you are talking microseconds, no one will know they are waiting, believe me.

Link to comment
3 minutes ago, beerandmusic said:

 

Amen +1

 

Do you agree that even a "noisy PC" will not be an issue provided you are using a decent DAC (e.g. Gungnir w/ USB Gen 5 & AudioQuest Forest cable)?

 

 

 

The trouble with your thinking is that, in terms of what's really important for optimum SQ, there are very few "decent DACs" - yes, they all meet the specs, and measurement-wise they are brilliant ... but that's not the same thing as being 100% robust - note, not using the word "accurate" here - in terms of generating a precise analogue representation of the music's digital data. Dismissing the issue as something that engineers and designers should have under control is not helpful in appreciating the situation ...

Link to comment

1. Different protocols. Thumb drives use bulk transfers; audio uses isochronous transfers. You don't need isochronous transfer if the stream is generated locally, but that's a different thing (and subject to different types of noise, which is what the physical separation of stream generation and DAC is meant to address). I'm not saying that this doesn't work fine, I'm just saying that the two transmission modes are quite different things.

 

2. A decent DAC in this case is one with heavy isolation, preferably at different levels. There are quite a few well-respected and expensive DACs that benefit a lot from a fine source, regardless of what the manufacturer says. Galvanic isolation and optoisolation are examples - Schiit themselves make a USB-to-S/PDIF converter which does the optoisolation job - there must be a reason for it, mustn't there?

Link to comment
1 minute ago, fas42 said:

 

The trouble with your thinking is that, in terms of what's really important for optimum SQ, there are very few "decent DACs" - yes, they all meet the specs, and measurement-wise they are brilliant ... but that's not the same thing as being 100% robust - note, not using the word "accurate" here - in terms of generating a precise analogue representation of the music's digital data. Dismissing the issue as something that engineers and designers should have under control is not helpful in appreciating the situation ...

 

Well, that is my point of the thread: that we are here today with technology (albeit extremely slow out of the gate) where, with proper DAC engineering of buffering and isolation, the hardware source transmitting the binary file should no longer matter.

 

Link to comment
10 minutes ago, nbpf said:

DACs do buffer, of course. But, under normal conditions, they start converting before the data transfer terminates and they do not store whole files or file collections (albums, playlists) in a local memory. They are designed to consume a stream, not to store files for later conversion.  

Not according to what mansr said, and I tend to believe him that whatever size the buffer is, it is sufficient. I would have a very difficult time believing that DAC engineering is so immature that they are not buffering properly.

Link to comment
2 minutes ago, beerandmusic said:

Checksums, I am sure, are done in nano- or at least microseconds... I can certainly wait a few microseconds... as mansr stated, buffering has to exist... it is part of the design... when you are talking microseconds, no one will know they are waiting, believe me.

No. It takes significantly more than microseconds to transfer and checksum a whole hi-res album over USB 3 or over a Gbit LAN connection. Besides which, as I suggested, there are economic reasons that speak against DACs with massive local memory and replay from memory. Not to mention the problem of internet streaming of DRMed data! Also, we like to play around with converters and reclockers, don't we?
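Rough numbers to back that up (the album size and effective link speeds are assumptions, just to show the order of magnitude):

    #include <stdio.h>

    int main(void) {
        const double album_bytes = 2e9;   /* ~2 GB hi-res album, assumed */
        const double gbit_lan = 100e6;    /* ~100 MB/s effective, assumed */
        const double usb3 = 400e6;        /* ~400 MB/s effective, assumed */

        /* Transfer alone takes seconds, not microseconds; checksumming
           adds more on top. */
        printf("Gbit LAN: ~%.0f s\n", album_bytes / gbit_lan); /* ~20 s */
        printf("USB 3:    ~%.0f s\n", album_bytes / usb3);     /* ~5 s */
        return 0;
    }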

Link to comment
6 minutes ago, unbalanced output said:

1. Different protocols. Thumb drives use bulk transfers; audio uses isochronous transfers. You don't need isochronous transfer if the stream is generated locally, but that's a different thing (and subject to different types of noise, which is what the physical separation of stream generation and DAC is meant to address). I'm not saying that this doesn't work fine, I'm just saying that the two transmission modes are quite different things.

 

2. A decent DAC in this case is one with heavy isolation, preferably at different levels. There are quite a few well-respected and expensive DACs that benefit a lot from a fine source, regardless of what the manufacturer says. Galvanic isolation and optoisolation are examples - Schiit themselves make a USB-to-S/PDIF converter which does the optoisolation job - there must be a reason for it, mustn't there?

Ok, thank you for this... this exactly matches my point.

 

Assuming a DAC using optoisolation, the source hardware transmitting the digital file really is of no consequence.

 

Link to comment

^^^^ To clarify my position and the purpose of the thread: I am of the opinion that if I use a standard i5 PC running Windows 10 with a "modern DAC" with "optoisolation", then the brand of PC, the "noise" of my PC, and the operating system are of no consequence. Specifically, my ASUS VivoPC with a standard power supply running Windows can transmit the binary file perfectly, and whatever noise is on the USB bus, the modern DAC can process it out such that it is of no concern.

 

 

Link to comment
8 minutes ago, nbpf said:

No. It takes significantly more than microseconds to transfer and checksum a whole hires album through a USB 3 or through a Gbit LAN connection. Beside which, as I suggested, there are economical reasons that speak against DACs with massive local memory and replay from memory. Not to mention the problem of internet streaming of DRMed data! Also we like to play around with convertors and reclockers, don't we?    

Who said anything about a whole album... there is no need for that... you can just buffer a few packets, or whatever is necessary to ensure 100% accuracy... Whatever method is used for error correction need not wait for an entire album... that would be ridiculous.

Link to comment
8 minutes ago, beerandmusic said:

 

Well, that is my point of the thread: that we are here today with technology (albeit extremely slow out of the gate) where, with proper DAC engineering of buffering and isolation, the hardware source transmitting the binary file should no longer matter.

 

 

Yes, I agree entirely. So, we have 2 approaches:

 

1) Treat the DAC as being extremely fragile - the slightest flapping of butterfly wings on the other side of the world causes palpitations in the circuitry, affecting the sound.

 

2) Make the DAC incredibly robust - plug a working, heavy duty arc welder into the same mains circuit and it should have zero impact on the SQ.

 

I favour approach 2), but the world is largely running with 1) ... ^_^

Link to comment
2 minutes ago, beerandmusic said:

Ok, thank you for this... this exactly matches my point.

 

Assuming a DAC using optoisolation, the source hardware transmitting the digital file really is of no consequence.

 

 

True, but only if the DAC works in asynchronous mode, per mansr. Such DACs communicate with the source driver, which adapts the rate of transmission into the DAC's buffers rather than the rate of D-to-A conversion at the DAC's outputs. Not to worry, though. Almost all DACs designed for computer audio work that way these days. If they do not, run the other way.

 

Link to comment
16 minutes ago, beerandmusic said:

^^^^ To clarify my position and the purpose of the thread: I am of the opinion that if I use a standard i5 PC running Windows 10 with a "modern DAC" with "optoisolation", then the brand of PC, the "noise" of my PC, and the operating system are of no consequence. Specifically, my ASUS VivoPC with a standard power supply running Windows can transmit the binary file perfectly, and whatever noise is on the USB bus, the modern DAC can process it out such that it is of no concern.

 

 

If you have a little extra budget, put a Regen or isolator-type device between the computer's USB port and the DAC and try it for yourself. You may be surprised by the difference.

Link to comment
12 minutes ago, beerandmusic said:

Not according to what mansr said, and I tend to believe him that whatever size the buffer is, it is sufficient. I would have a very difficult time believing that DAC engineering is so immature that they are not buffering properly.

The DAC buffer is quite small, no more than 100 ms. Pro gear used for live effects has a latency of only a few milliseconds. The buffer only needs to hold a few packets to do its job.
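For a sense of scale, a 100 ms buffer is tiny in memory terms (stereo 32-bit at 48 kHz assumed here):

    #include <stdio.h>

    int main(void) {
        const int fs = 48000;           /* sample rate */
        const int channels = 2;
        const int bytes_per_sample = 4; /* 32-bit samples */
        const double buffer_s = 0.1;    /* 100 ms, per the post above */

        double bytes = fs * buffer_s * channels * bytes_per_sample;
        printf("100 ms buffer: %.0f bytes (~38 kB)\n", bytes); /* 38400 */
        return 0;
    }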

 

Because isochronous mode is intended to provide constant, low latency, there is no retransmission on error. Bulk mode has retransmission but no guaranteed latency or bandwidth.

 

DACs use isochronous mode since guaranteed throughput and latency are more important than perfect delivery. Remember, actual packet errors are very rare, and the only consequence, should one occur, is a minor annoyance. Under normal usage, you're unlikely to encounter one during a year of listening. A storage device has the opposite requirements: even a single bit in error can have dire consequences, but nobody notices if copying a file takes a few milliseconds longer. That is the situation USB bulk transfers are intended for.
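As a sketch of the options mentioned earlier for a corrupted packet (fill in silence, try to interpolate, or drop it), here is what the first two might look like; the types and the linear interpolation are illustrative, not any specific DAC's firmware:

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    /* Replace a lost packet's samples with a linear ramp between the
       last good sample and the next good one (one channel shown). */
    void conceal_interpolate(int32_t *dst, size_t n,
                             int32_t last_good, int32_t next_good)
    {
        for (size_t i = 0; i < n; i++) {
            double t = (double)(i + 1) / (double)(n + 1);
            dst[i] = (int32_t)((1.0 - t) * last_good + t * next_good);
        }
    }

    /* Simpler alternative: fill the gap with silence. */
    void conceal_silence(int32_t *dst, size_t n)
    {
        memset(dst, 0, n * sizeof *dst);
    }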

Link to comment
