
macbrush

  • Posts

    22
  • Joined

  • Last visited

  • Country

    Hong Kong


  • Member Title
    Newbie
  1. Mount the NTFS volume on the Mac using SMB, then use arRsync. The interface and options are quite self-explanatory.
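arRsync is a GUI front end for rsync, so the one-way mirror it performs can be sketched in miniature. This is a toy stand-in under stated assumptions: the share, server name, and mount point in the comment are placeholders, and this simplified version copies whole files rather than doing rsync's delta transfer.

```python
import shutil, tempfile
from pathlib import Path

# In the real setup the destination would be the SMB mount of the NTFS share,
# mounted first with something like:
#   mount_smbfs //user@server/ntfs_share /Volumes/ntfs_backup   (placeholders)
def mirror(src: Path, dst: Path):
    """Toy one-way mirror, like `rsync -a --delete src/ dst/`: copy everything
    from src, then delete anything in dst that no longer exists in src."""
    for p in src.rglob("*"):
        target = dst / p.relative_to(src)
        if p.is_dir():
            target.mkdir(parents=True, exist_ok=True)
        else:
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(p, target)  # copy2 keeps timestamps where supported
    # reverse sort visits children before their parent directories
    for p in sorted(dst.rglob("*"), reverse=True):
        if not (src / p.relative_to(dst)).exists():
            p.rmdir() if p.is_dir() else p.unlink()

# two local directories standing in for the library and the SMB mount
src, dst = Path(tempfile.mkdtemp()), Path(tempfile.mkdtemp())
(src / "track01.aiff").write_text("demo")
(dst / "stale.txt").write_text("removed by the mirror pass")
mirror(src, dst)
print(sorted(p.name for p in dst.iterdir()))  # ['track01.aiff']
```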
  2. Here is an example of a driver monitoring page of a real-time Audio over Ethernet broadcasting system. You can see that the clock is constantly being adjusted; jitter, Tx/Rx errors, Tx/Rx sequencing errors, underruns and overruns are also constantly monitored. These values matter in a real-time system because they contribute directly to the sound quality. The buffer values at the end represent the latency of this node for each channel.
  3. True, even in a real-time system there will be some buffering, due to the nature of modern switch design and how computers work. But in a real-time system jitter is a problem, because there is no error correction whatsoever and, most importantly, any overrun or underrun packet is discarded. And since the jitter also affects the word clock running on the same network, sound quality suffers further, because the internal clock of each node is constantly being adjusted to the master clock, down to 10µs accuracy in some systems.
  4. I've bought a 3TB version and am using it as video/audio file backup. It's adequate as long as your computer is wired, or on 802.11ac and capable of a 1300Mbps connection. Don't even think about it if you're still on 802.11n. Also, it gets quite hot when the HDD is in operation, so while occasional backups should be fine, I really wouldn't put it to heavy use.
  5. What confused me the most on this topic is what kind of jitter we are talking about. Digital jitter will always be a problem if a master word clock is maintained throughout a system; on professional AoE systems, digital jitter is constantly measured and monitored, so it can be quantified: for real-time monitoring 1000µs to about 10000µs is acceptable, and for real-time compressed recording up to about 15000µs is doable. But for latency-tolerant applications, everything is re-clocked and jitter really doesn't play any part in the sound quality, unless it underruns so badly that it goes beyond the application's ability to adjust.
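One simple way such a figure can be quantified is as the deviation of packet inter-arrival times from the nominal packet interval. This is a minimal sketch of that idea, not any vendor's actual metric; the timestamps and the 1 ms nominal interval are made-up illustration values:

```python
from statistics import mean

def interarrival_jitter_us(arrival_times_us, nominal_interval_us):
    """Mean absolute deviation of packet inter-arrival times from the
    nominal packet interval, in microseconds: a simple jitter estimate."""
    gaps = [b - a for a, b in zip(arrival_times_us, arrival_times_us[1:])]
    return mean(abs(g - nominal_interval_us) for g in gaps)

# packets nominally 1 ms apart, but arrivals wobble by up to 0.4 ms
arrivals = [0, 1000, 2400, 3300, 4600, 5500]
j = interarrival_jitter_us(arrivals, 1000)
print(j)  # mean |gap - 1000| over the five gaps
print(j <= 10000)  # inside the real-time monitoring tolerance quoted above
```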
  6. I guess I should've made the last try-it-at-home part clearer. An IP Audio driver means a driver capable of multicasting audio channel(s) to your network, while an audio node means a device capable of subscribing to that multicast group. Both ends should be capable of acting as the master/slave clock, and usually do the selection automatically. With a very simple setup, i.e. one channel with one receiving node, one can probably do away with QoS and IGMP without pulling too many resources on the switch. But such a setup usually also requires the switch to be 802.1p capable, otherwise everything would seem to work, but you get no sound in the end.
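The two ends described above can be sketched with Python's standard socket API. The group address and port here are arbitrary placeholders, not those of any real product, and a real AoE driver adds clocking, 802.1p tagging, and audio framing on top; the point is only that "subscribing" is an IGMP group join, which the receiver triggers with `IP_ADD_MEMBERSHIP`:

```python
import socket
import struct

GROUP, PORT = "239.1.1.1", 5004  # illustrative multicast group and port

def make_receiver(group=GROUP, port=PORT):
    """The 'audio node' end: joining the group is what emits the IGMP
    membership report the switch uses to build the multicast group."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    mreq = struct.pack("4s4s", socket.inet_aton(group),
                       socket.inet_aton("0.0.0.0"))  # any interface
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock

def make_sender(ttl=1):
    """The 'IP Audio driver' end: multicasts frames to the group; TTL 1
    keeps the stream on the local network."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, ttl)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP, 1)
    return sock
```

Usage would be `rx = make_receiver()` on the node, then `make_sender().sendto(frame, (GROUP, PORT))` on the driver side, with `rx.recv(...)` delivering each frame.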
  7. There are AoE systems such as Axia's Livewire and Wheatstone's WheatNet-IP. Both rely on IGMP to establish multicast groups, and a single master clock on each network. With a powerful core switch and proper configuration, latency can be down to 1000µs, and typically under 10000µs for real-time applications such as on-air monitoring. Packet loss at such rates depends on the receiving end: with proper configuration on a good NIC in a powerful system it can be zero, but with the default driver options, which I found most manufacturers use, it can sometimes go as high as thousands per hour. However, when applying the same technology to the audiophile market, larger packets can be used and we won't have to worry about packet loss; latency will jump sky high, but that doesn't really matter since we're not doing real-time production monitoring. What I am trying to say is that even with today's state-of-the-art AoE technology, it still requires professional implementation and premium investment, and I guess that's why it would be a hard sell for consumers. Not to mention that most professional AoE technologies are designed for broadcasting applications, and so are limited to 24/48. It's easy to try it out at home, if you know how: get an IP Audio driver for your computer, stream a channel to your Ethernet network, and get an audio node as the receiving end. You also have options for the audio node to output analogue signals, or AES for your favourite DAC.
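The packet-size/latency trade-off above follows from simple arithmetic: each packet must carry a whole number of sample frames, so bigger packets mean more audio buffered per hop. A quick sketch at the 48 kHz rate mentioned above (the frames-per-packet values are illustrative, not any vendor's documented settings):

```python
RATE_HZ = 48_000  # the 24/48 broadcast format mentioned above

def packet_latency_us(frames_per_packet, rate_hz=RATE_HZ):
    """Duration of audio carried by one packet, in microseconds: the
    floor on per-hop buffering latency for that packet size."""
    return frames_per_packet / rate_hz * 1_000_000

print(packet_latency_us(12))    # tiny packets: 250.0 µs of audio each
print(packet_latency_us(48))    # 1000.0 µs, the low end quoted above
print(packet_latency_us(240))   # 5000.0 µs: bigger packets, more latency
```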
  8. I was an unbeliever, and thought a USB cable didn't make any difference at all; of course I didn't bother to really experiment. Mind you, I am a network infrastructure specialist, and I really thought I knew everything about computers and networking. It turned out that I didn't, not even close. There are many factors which can make a USB cable sound different. The first is obviously the basic quality of the cable. We have many systems in the company that are very sensitive to cable quality; their driver design was much less tolerant than, say, an ordinary Windows 7 OS, even running on the same hardware. A lot of cheap cables, OEM cables, or whatever cables that work perfectly (on the surface) on a Windows machine will not run smoothly on those systems: we got a lot of external hard drive disconnects, equipment resets, or soft resets in the logs, to say the least. So that's the basic quality of a cable, and I assure you that most cheap $5 cables and OEM cables aren't up to that basic quality. I am not saying that you must get some $500 cable in this regard, but I am saying that you can't just randomly buy a cheap $5 cable and say it would sound the same. Not that an expensive cable must sound better, but you stand a better chance of getting a good-quality cable if you spend just a few more bucks to get one from a reputable source such as AQ, in my experience. Now the basic quality of a cable is done. Another point is how your equipment was designed, and things like how much interference you get from your system. I didn't quite understand this until I met up with some electronics guys from where I work. For example, I have an Apogee Duet. If you just listen to the Duet's built-in headphone output, you only require a good-quality cable that passes the basic requirement, because the design was so good that the headphone output was isolated from any ground loop and interference from the system; as long as the cable meets basic quality, it doesn't really matter.
But if you use the line-out at the back, all sorts of ground noise and system interference will sneak through and affect the analogue components of the next piece of equipment in the chain, in my case the headphone amp or the studio monitors. Here I did try everything to eliminate noise and interference from my system, but I found that even when I managed to minimise interference and noise from the system, the USB cable would somehow still affect the sound signature. My guess would be that there was always some inaudible interference sneaking through, affecting the analogue components by means of harmonics. Since my native language (obviously) isn't English, please excuse me if I didn't express myself clearly. Please also correct me if you think I am wrong; I am always looking to learn more. Cheers, Kenneth
  9. As a network professional, I can't say whether Ethernet cables make any sonic difference, but they do come in different qualities, especially their RJ45 connectors. Normally a properly wired connection shouldn't produce any errors, but once in a while there would be random I/O errors, CRC errors, or resets, very occasionally even to the point of triggering a spanning-tree path change, or dropping a link from an EtherChannel. Although we do test all of our cables with a simple cable tester before deploying, we also put our cables under load test for a week to make sure there are absolutely no errors before placing them at any critical points. Maybe you should test the cables thoroughly before auditioning them, to avoid any technical issues affecting the results. Just a suggestion. Cheers, Kenneth
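On a Linux machine, the error counters this kind of check looks at are exposed per interface under `/sys/class/net`. A minimal sketch of reading them (the interface name is whatever your system uses; after a week of load, these should still be at or near zero on a healthy link):

```python
from pathlib import Path

def link_error_counters(ifname):
    """Read per-interface error/drop counters from Linux's
    /sys/class/net/<if>/statistics; returns only the counters present."""
    stats = Path("/sys/class/net") / ifname / "statistics"
    names = ["rx_errors", "tx_errors", "rx_crc_errors",
             "rx_dropped", "tx_dropped"]
    return {n: int((stats / n).read_text())
            for n in names if (stats / n).exists()}

# loopback as a demo interface; on a real check, use the cabled NIC's name
print(link_error_counters("lo"))
```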
  10. Music is music. To be honest, even live music is often "not that good" due to acoustics, equipment, or less-than-ideal seats, etc. I only download high resolution (more expensive!) if it's a high-quality product in terms of recording, mixing, and mastering. But if a good piece of music is only available on the iTunes Store, I wouldn't hesitate to download it.
  11. Not only is it a bad idea from a technical point of view to run low-voltage cable parallel to a power cable, it is also illegal in some countries. You should check the local regulations.
  12. If I understand this correctly, you have stored all your files on your NAS, and it is serving files to your CAPS over the network? If that's the case, just use a PCIe wireless card, or anything you like. The NAS is only serving files through a wireless data network; once the packets arrive at the AP, they get switched, meaning buffering and such, then passed on to your CAPS via Ethernet, so whatever noise is produced on the NAS end won't affect your CAPS at all.
  13. I use near-field monitors, so my setup is an equilateral triangle, 2m each side.
  14. Frankly, unless you manage to build yourself a Pentium III box or something like that, USB loading on the CPU nowadays is really minimal, if noticeable at all. The same goes for a WiFi adaptor.
  15. I guess USB will be the future. It's cheap, easy to implement, and fast. Thunderbolt is a no-go unless Intel changes its conditions for licensing, which require a video signal. FireWire is fading out; the strongest support for FireWire had been cameras and camcorders, for its speed and reliability, but the market has shifted and new models only support USB (which is faster and cheaper!) nowadays, with very few exceptions. As for Ethernet or other options like a fibre-optic networked DAC, they are possible, but would be extremely expensive: one would either need a specialised network port on the computer, dedicated to the DAC and running some special protocol, or the DAC would need to have all the networking capability of a computer built in, which also defeats the purpose of having an external DAC anyway. Just my 2¢