matan

  • Posts: 9
  • Member Title: Newbie
  1. Is your first box a dedicated ripper? Is it then connected to the rest of your network?

     The first box has the hard drives with the audio files, the database, and the network I/O for both your home LAN and the remote controls. It also has a fiber connection to the 2nd box. The 2nd box has a custom operating system in firmware that streams data from the fiber network connection and sends it to the DAC as AES/EBU with minimal jitter and interference. CDs can be ripped (or music otherwise transferred) from a Mac/PC or from a dedicated ripping device over the home LAN to the first box.

     Is that because there are simply not many networked high-end products yet, or because a direct connection is thought of as a better way to do digital?

     It's probably a combination of both; there are fewer streamers and less experience with them. I'm not of the ilk that thinks one approach is better than the others, so I can't say that a direct connection is always better. The hybrid stream/direct connection worked best for me; that doesn't mean it's universally so.

     Matan
  2. Having a battery PSU for a typical computer isn't going to help very much. While in cases like the one George describes it can reduce grounding problems and maybe some RF/EMI that comes from the power line, we have to remember that (virtually) all motherboards have their own set of switching PSUs on board - the power regulators that feed the CPU and chipsets all take the voltage(s) output by the mains PSU and further convert them to a myriad of very precise voltages. So, while we can try to power a Mac Mini from a battery, the end result wouldn't be much different from any common laptop running on its battery. There are ways around this of course, but they get very complex and very expensive VERY quickly.

     Matan
  3. Hi,

     I concur with Chris on this one. The differences between implementations far outweigh the differences between topologies. I really wouldn't get hung up on one approach being the panacea that trumps all others; rather, I'd diligently seek something that works well for you and in your system.

     Perhaps designers are less experienced (or under-budgeted) when it comes to exploring ways to improve Ethernet streaming. Ethernet poses a different set of challenges beyond the 'traditional' ones of a direct (synchronous) connection, for which there is a large body of experience and documentation. For example, in addition to the physical interface, Ethernet streaming is susceptible to delays introduced by common network switches, as they typically use a technique called "store and forward", which means the switch adds a layer of data buffering between source and destination endpoints. There are ways around this, of course, but that's a whole new can of worms and is not universally compatible.

     There are, of course, aspects that are universally important: clean stable power, isolation, RF immunity, and jitter minimization. Those should really be your starting points when you evaluate one option over another. Your ears should be the arbiter. For example, I evaluated a few approaches when designing my music server. Having the Ethernet stack chipset very close to the DAC didn't end up as a favorable option for sound quality, so direct streaming was out. The topology we eventually settled on was a hybrid approach where one box streams (over a dedicated fiber-optic link and protocol) to another, with the second box then sending AES/EBU to the DAC. In general, isolating the audio storage and metadata database from the DAC is helpful, as is keeping computational power to a minimum near the DAC and clock (with the heavy lifting as far away as possible). Of course, these two goals are somewhat mutually opposed - welcome to the world of audio, my friend, where everything matters and everything is a compromise :-).

     Best, Matan

     p.s. I hope to see you at CES; who is going?
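To put the store-and-forward remark above into numbers, here is a back-of-the-envelope sketch of the buffering delay such a switch adds per hop. The frame size and link rates are assumed, illustrative values, not figures from the post:

```python
# Back-of-the-envelope estimate of the serialization delay a store-and-forward
# switch adds per hop: the switch must buffer the entire frame before it can
# begin retransmitting it. All figures below are illustrative assumptions.

FRAME_BITS = 1518 * 8          # a maximum-size standard Ethernet frame

def store_and_forward_delay_us(link_rate_bps: float, hops: int = 1) -> float:
    """Added buffering delay in microseconds across `hops` switches."""
    return hops * FRAME_BITS / link_rate_bps * 1e6

for rate, label in [(100e6, "100 Mbit/s"), (1e9, "1 Gbit/s")]:
    print(f"{label}: {store_and_forward_delay_us(rate, hops=2):.1f} us over 2 hops")
```

At gigabit speeds the added delay is tens of microseconds; the point is not its size but that it is another buffering stage between source and destination that a direct (synchronous) connection simply does not have.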
  4. Hi wgscott,

     You'd be surprised, but Apple actually use Intel's reference designs for all their Mac computers, and whatever optimizations they put in are typically geared towards the user experience (GUI, disk performance, etc). This doesn't equate to better sound quality, and usually actually compromises it, because all the other computer subsystems get higher priority than the sound drivers and hardware. To their credit (and I'm hardly the guy to praise Microsoft), Microsoft have gone to great lengths to improve the sound quality in Vista and even more so in Windows 7. Unlike Apple, they're actually trying. Admittedly, Apple's use of a Unix-based OS carried a significant advantage in SQ over early Windows; however, parity was achieved somewhere around Vista SP1.

     Both Windows and OSX use the microkernel system architecture, where all the hardware drivers sit outside the core of the operating system. This certainly has many benefits as far as system stability and compatibility, but it isn't ideal for sound quality because the computer circuitry has to perform many context switches between things inside and outside the kernel. Linux, on the other hand, uses a monolithic kernel architecture. Carefully tuned and executed, this architecture allows for better realtime processing and fewer interruptions to the flow of the bits – and because it's Linux, you get to choose what (and how) you optimize. It also means the entire kernel can blow up in your face, but that's what testing is for, and music servers should operate like toasters – with minimum interference as they go about their processing.

     Apologies for the geek-speak :-).

     Enjoy the music, Matan
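The realtime angle above can be made concrete. On Linux, for instance, an audio process can be moved into the SCHED_FIFO realtime scheduling class so that it preempts ordinary time-shared tasks; a minimal sketch (Linux-only, needs root or CAP_SYS_NICE, and the priority value is an arbitrary choice):

```python
import os

# Minimal sketch: promote this process to the SCHED_FIFO realtime class so an
# audio thread preempts ordinary time-shared tasks. Linux-only; requires
# root or CAP_SYS_NICE. Priority 50 is an arbitrary mid-range choice.
def go_realtime(priority: int = 50) -> None:
    param = os.sched_param(priority)
    os.sched_setscheduler(0, os.SCHED_FIFO, param)  # pid 0 = calling process

if __name__ == "__main__":
    go_realtime()
    print("now running under SCHED_FIFO, priority",
          os.sched_getparam(0).sched_priority)
```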
  5. ldolse wrote: [...] an Ethernet DAC would require several things AES, SPDIF, Firewire, and USB will never require:
     - A TCP Stack
     - An 'operating system' to participate on the network
     - 'Applications' running on that operating system

     Well, yes and no. There are many SoC implementations of an Ethernet PHY + stack available today in a very small, highly integrated package, really no different from the USB/Firewire chipsets. I really think that we've reached equilibrium there, with the differences (again) boiling down to implementation and engineering.

     I do not believe that any one interface is always inherently better than the others (and I've spent a lot of resources testing many different interfaces / implementations / topologies in my manic quest for 'the best'). Each has its advantages and disadvantages, and is more or less suitable for specific design requirements, but at the end of the day a well-engineered system using an 'inferior' interface will sound better than a poorly-engineered one that boasts a 'superior' one.

     Quoth the old analog engineering adage, "everything matters" - and so it is in the digital domain. Our only hope is to corrupt the signal as minimally as possible.

     Matan
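For a rough sense of scale on the "stack plus operating system plus applications" objection, the receive side of a toy Ethernet audio endpoint fits in a few lines. This is purely illustrative - the port, packet size, and bare UDP transport are assumptions, not any product's actual protocol:

```python
import socket

# Toy sketch of the receive side of an Ethernet audio endpoint: a bare UDP
# socket feeding raw PCM packets onward. The port and buffer size are
# arbitrary assumptions; a real streamer adds sequencing, clock recovery,
# error handling, and an actual hand-off to the DAC.
PORT = 46000
PACKET_BYTES = 1472  # fits one standard Ethernet frame after IP/UDP headers

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", PORT))

while True:
    pcm, addr = sock.recvfrom(PACKET_BYTES)
    # hand `pcm` to the DAC driver here; in this sketch we just count bytes
    print(f"got {len(pcm)} bytes from {addr}")
```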
  6. Hi Clay et al,

     Clay wrote: "For the one-box proponents, one is perhaps better served with something like the super isolated boxes, e.g., the Matan server."

     I totally second John Swenson's philosophy and arguments in favor of the three-box design. In fact, my server is indeed a three-box gig: one box for the media/GUI/processing, one for the DAC, and one super lightweight, realtime, no-moving-parts box that streams the Ethernet data to the DAC at maximum quality while being locked to the DAC's clock. I truly believe, based on a long series of experiments, that this is the optimal path to the best sound quality. The topology that I've chosen galvanically isolates the DAC/streamer from the storage/processing components (via a non-Toslink optical connection), and attempts to maximally isolate everything mechanically, electrically, and thermally.

     The one-box system that Magico showed at CES was not the top-of-the-line version of the technology I've been working on, but rather a prototype that was meant to accommodate the Mykernios card running on Windows. The "proper" version of my server (as shown during the CA Symposium exactly a year ago) is the three-box design running Linux.

     Eloise, I totally agree with you that having the CPU circuitry that decodes the Ethernet stack in close proximity to the DAC chips can be rather undesirable as far as audio quality is concerned. I also want to stress that not all "bit perfect" output is created equal - even if the DAC turns on the HDCD LED, it doesn't mean that no additional noise has entered the circuit via the digital audio interface, or that the PLL circuitry inside the DAC didn't miss a sample. Looking at the HDCD spec, one can see that it is quite tolerant of signal corruption, and even leaves the HDCD light on for at least 1/10th of a second (almost 4,500 samples at Redbook!) after losing the HDCD bitstream in the input signal.

     As mentioned during the CA Symposium, bit-perfect output is a stepping stone towards audio nirvana rather than the goal itself. We should certainly appreciate its necessity, but we shouldn't rest on our laurels once we achieve it. There is still a ways to go.

     Matan
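The "almost 4,500 samples" figure above is a simple rate-times-time calculation, easy to verify:

```python
# Quick check of the figure in the post: how many samples pass in the
# ~0.1 s that the HDCD indicator stays lit after the bitstream disappears.
REDBOOK_RATE_HZ = 44_100   # CD sample rate
HOLD_TIME_S = 0.1          # minimum indicator hold time cited in the post

samples = REDBOOK_RATE_HZ * HOLD_TIME_S
print(f"{samples:.0f} samples")   # 4410, i.e. roughly the "almost 4,500" cited
```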
  7. Thanks, Chris, for echoing my sentiments. I couldn't agree more, and most of the Ethernet DAC designs today are based around the traditional DAC model with an Ethernet front-end. It certainly works, but as you said there is still a ways to go regarding SOTA.

     To you, ZiggyZack: [...] but how can you be sure it's just cpu power which makes a difference? And what aspect of cpu 'power' is problematic - raw gigahertz? Operations per second? Some other measure?

     It's not the CPU's computational power that makes the difference - it's the stuff around it. Faster CPUs have more transistors which draw more current, and they need faster chipsets, which means more frequency dividers, more current for those chipsets, and higher EMI. The higher frequencies also mean that more energy is radiated from the copper traces on the motherboard, because the copper ends up being somewhat of an antenna at gigahertz frequencies. The higher TDP (heat) raises the overall temperature of the system, and that increases resistance and entropy (noise) - so you need even more current. More power = more noise = more heat = more cooling. Noise = bad. Heat = bad. Cooling = bad. It's a vicious cycle, and as you said I've only touched the tip of the iceberg, but the point is that decoding and displaying 1080p video requires MUCH more computing power than any 2-, 5.1- or 7.1-channel audio, and, by analogy, you don't fit a Ferrari engine to a scooter just because you can.

     Your comment about mechanical HDDs is somewhat irrelevant - all mechanical hard drives spin at a constant angular velocity. Some designs (WD in particular) do change the rotation speed, but only among a few discrete speeds and to accommodate different loads. Besides heat and vibration, the main problem with mechanical hard disks is the current transients that happen when the armature moves across the platters - the mechanism is quite similar to a speaker transducer (it is even called a "voice coil") - and that can induce undesired electrical noise. Hence, SSDs tend to introduce less noise, although their memory access and control logic can also harm the sound. That said, any decent design should be able to decouple the storage from the playback, as is done by a variety of products, both current and pending.

     As you mention, current consumer operating systems are not ideal for audio reproduction because they are non-deterministic. The multiprocessing architecture allows (for example) the defragmenter to 'wake up' and start toiling, possibly in the middle of your listening session. Highly undesirable. The aerospace and medical industries have this down pat - by designing preemptive, real-time operating systems... and even then they screw up sometimes!

     Finally, I totally agree with Chris about a computer-based system being able to smoke any other design. Put it to the test, you won't look back!

     Enjoy the music, Matan
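One simple way to "decouple the storage from the playback", as suggested above, is to pull an entire track into RAM before playback starts, so the drive sits idle while the music plays. A minimal sketch; the file path is a placeholder and the hand-off to the DAC is left out:

```python
from pathlib import Path

# Minimal sketch of decoupling storage from playback: read the entire track
# into RAM up front, so the disk sees one burst of activity (and can then
# idle or spin down) while the buffered audio is handed to the playback chain.
def preload_track(path: str) -> bytes:
    data = Path(path).read_bytes()   # one burst of disk activity, then silence
    return data                      # playback now runs entirely from RAM

audio = preload_track("/music/track01.wav")  # hypothetical file path
print(f"buffered {len(audio)} bytes; disk can now idle during playback")
```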
  8. Chris wrote: I could not get the 9632 to change sample rates on the fly.

     Under ALSA (on Linux), the RME cards don't switch sample rates on the fly. You can try to use an external clock to change sample rates, but then again that isn't 'on the fly' either, because your clock source must change over somehow. IMHO, there is good reason for this - if there are any samples left in the card's buffer while the sample rate is changing, you might get some undesired output from the DAC. These cards are designed for professional use, where sample rate changes would be few and far between - and never on the fly.
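The buffer concern described above implies a safe pattern for rate changes: drain whatever samples remain, close the device, and reopen it at the new rate. The sketch below shows that pattern with hypothetical pcm_* stubs standing in for a real audio API (e.g. alsa-lib's snd_pcm_drain / snd_pcm_close / snd_pcm_open):

```python
# Sketch of the safe sample-rate-change pattern: drain any samples still
# buffered, close the device, reopen at the new rate. The pcm_* functions are
# hypothetical stand-ins for a real audio API, not actual library calls.

def pcm_open(rate_hz: int) -> dict:           # hypothetical stub
    print(f"device opened at {rate_hz} Hz")
    return {"rate": rate_hz}

def pcm_drain(dev: dict) -> None:             # hypothetical stub
    print("buffered samples drained")

def pcm_close(dev: dict) -> None:             # hypothetical stub
    print("device closed")

def change_rate(dev: dict, new_rate_hz: int) -> dict:
    pcm_drain(dev)          # never switch with samples still in the buffer
    pcm_close(dev)
    return pcm_open(new_rate_hz)

dev = pcm_open(44_100)
dev = change_rate(dev, 96_000)   # e.g. moving from Redbook to 24/96 material
```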
  9. ggking7 wrote: Have you heard of anyone testing different software or hardware configurations to determine if low power consumption and high sound quality correlate?

     I tested this rather extensively about 18 months ago. Based on my experiments, low power wins over high CPU speeds every time when it comes to audio. The average contemporary CPU can handle all of the processing required for audio playback in its sleep :-). Having lots of GHz handy does nothing good for you, as it draws lots of current and generates lots of heat. Both are bad news: designing a high-current, low-noise power supply is much harder than designing a low-current, low-noise one. Heat isn't our friend either - you need to get rid of it, and fans are bad for audio and electrical quality (just ask the listeners of the 2nd Computer Audiophile session last year; we demonstrated the same server with and without a spinning fan, and it made a huge difference). The CPU power-scaling features are a mixed blessing for audio applications, as they only partially reduce TDP.

     Power over Ethernet is a nice concept, but it's not a good idea for pristine audio quality, because the power supplies inside network switches are typically cheap (read: not very quiet) and the Ethernet cable used isn't typically shielded. I would go with a quiet linear or switching power supply (one with a high switching frequency would be better).
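On Linux, the power-scaling behaviour mentioned above can be taken into one's own hands through the standard cpufreq sysfs interface, pinning every core to its low-power governor. A minimal sketch (needs root; which governors are available depends on the kernel and cpufreq driver):

```python
from pathlib import Path

# Minimal sketch: pin every CPU to the 'powersave' cpufreq governor via the
# standard Linux sysfs interface, trading clock speed for lower current draw
# and heat. Requires root; governor availability varies by kernel and driver.
CPUFREQ = Path("/sys/devices/system/cpu")

for gov_file in CPUFREQ.glob("cpu[0-9]*/cpufreq/scaling_governor"):
    gov_file.write_text("powersave\n")
    print(f"{gov_file.parent.parent.name}: governor -> powersave")
```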