
gregs1104

  • Posts: 11
  • Joined
  • Last visited
  • Country: United States


  • Member Title: Newbie


  1. Yes. I've done that. Every comment I've made was with the entirety of that context. I only mentioned the SQL stuff I do to explain why I know all the tools for Linux to look at the problem. I'm also using Linux for playback. This week I'm playing on the i3-6100U of my Gigabyte NUC-clone with JRiver MC21 on Ubuntu 16.04. Right now I'm seeing about 13% CPU usage decoding 24/192 files. I'm also transcoding single rate DSD files to send to the DAC at 24/96. That spikes CPU usage to about 33%. (Note that the latest MC22 from them switches to the SoX library for upsampling, so these numbers aren't reflective of the current generation product) The numbers aren't high enough that I'm personally worried about network overhead, but I wouldn't call them trivial either; only 16/44.1 playback is trivial. With some people using Raspberry Pi systems for playback and others trying to do double rate or higher DSD, they're easily going to hit CPU levels where memory and CPU cache contention becomes an issue. I personally am heading toward real-time speaker correction at 24/96, and one reason I dug into this is to handicap the odds I will be able to do it with just this mini i3-6100U system. Here's a picture of the DSD transcoding example to show I'm not just pulling numbers out of the air here. (My computers are named after favorite albums with neat covers; this one's inspired by "Time Loves a Hero")
  2. That's the theory. If you actually test it, when the buffer empties significantly the catch-up work puts some extra strain you can measure on the CPU and on memory. And the application will feel that disruption in the timing sensitive part that's feeding data to a DAC. (I work on latency instrumentation for database engines, where a big CPU/disk monster called a checkpoint is a regular source of "playback" threads starving) Now, I'm with you that this shouldn't ever become actually audible in the trivial cases. But I'm the kind of skeptic who is skeptical of the other skeptics. There's a real cause and effect here that I know exists in this type of network application. There are reports that it's audible. If that's wrong, I want to squeeze the claims on all sides until the truth pops out.
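The catch-up burst described above can be shown with a toy model (all numbers here are hypothetical, not measurements): a playback buffer drained at a constant rate, with the producer topping it back up each step. In the steady state each refill is one chunk; after a stall empties the buffer, the refill is the whole buffer at once, and that burst is where the measurable CPU/memory strain comes from.

```python
def simulate(buffer_target=4096, chunk=256, stall_at=50, steps=100):
    """Toy playback model: the DAC-feeding side drains `chunk` frames
    per step; the producer refills the buffer back to its target.
    Returns the refill size at each step."""
    buf = buffer_target
    refills = []
    for step in range(steps):
        buf -= chunk                 # consumer side drains audio
        if step == stall_at:
            buf = 0                  # e.g. a checkpoint starves the feeder
        need = buffer_target - buf   # producer catches back up
        refills.append(need)
        buf += need
    return refills

refills = simulate()
# steady-state refills are one chunk; the post-stall refill is the
# entire buffer in a single step, a 16x burst in this toy setup
```

The average data rate is unchanged either way, which is why the effect only shows up if you instrument the per-step work rather than the totals.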
  3. QoS and features like VLANs add extra processing complexity and latency inside the switch's processing logic. If you must have those things to keep network congestion under control, they're a miracle. When you don't need them, you can just as easily detune your system by turning them on. That's why the Dante guys say not to use QoS on small systems. I think a chunk of the audiophile community that's chasing after low latency is staring at half of the problem. When audio is flowing over a packet based network, latency by itself doesn't mean anything. 1ms, 10ms: as long as it's a constant number, everything will be received with the same timing as it was transmitted. The problem that impacts audio is variation in latency. That's usually correlated with total latency, but they're not guaranteed to line up. You can have a low latency magnitude but a lot of latency variation; that's technically a worse switch for audio than a high latency one with low latency variation. The advanced features like QoS and VLANs all drive latency variation up because now the switch has more decisions to make, and that means more code paths with different lengths to each one. The ideal switch for audio takes exactly the same amount of time to deliver every packet because there are no decisions to be made.
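The latency-versus-variation distinction is easy to put numbers on. A sketch with made-up per-packet latencies (not measurements): switch A has one tenth the mean latency of switch B but far more variation, so by the argument above A is the worse switch for audio.

```python
import statistics

# illustrative per-packet latencies in milliseconds (invented numbers)
switch_a = [0.9, 1.2, 0.7, 1.4, 0.8]           # low latency, lots of variation
switch_b = [10.00, 10.01, 9.99, 10.00, 10.01]  # high latency, nearly constant

def jitter(samples):
    """Latency variation (population standard deviation); for audio over
    a packet network this is the number that matters, not the mean."""
    return statistics.pstdev(samples)

# statistics.mean(switch_a) is 10x lower than switch_b's,
# yet jitter(switch_a) is far higher than jitter(switch_b)
```

A constant 10ms just shifts playback by 10ms; a wandering 1ms is what the buffer-feeding thread actually has to absorb.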
  4. And going through the LAG processing makes latency less predictable. Does it slow down delivering to two ports instead of one? If both are active, what if one of them has an error but not both? I would not want to touch that feature unless I have a lot of time on my hands to experiment with it.
  5. I ended up buying 3 GS108T switches around 2011 or 2012 because they were one of the cheapest managed switches around, and I was already a long time user of their unmanaged switches. They have been very hit or miss for me. Never had any issues with performance; they've always been fast and easy to manage when they're working. First: GS108T V1H1, Software 3.0.4.7. Works perfectly. Second: GS108T V1H1, Software 3.0.4.10. Each time there's a power spike in the house, this switch has to be rebooted. The switch itself is on a UPS, but not every single device is. It loses its mind from that pretty routine situation. And, before someone asks, it did not go away downgrading back to 3.0.4.7. Third: GS108T V2H1. This one gave me so many weird problems just during installation and testing that it lives in the spare parts bin instead of the live network. That was many years ago so the later firmware may have improved things. I suspect though that I just got a lemon because the V2 hardware was brand new. My take: if your switch works for you, try some testing that involves nasty power cycling of the unit or devices attached to it. Flipping a breaker is how I stress test; I'm pretty hardcore about this. If your switch works through the testing, I wouldn't stress about it further. Every manufacturer has bad models and you may not see one the way I did.
  6. Intel tends to put the most L3 Unified cache into its most powerful processors. Picking that one thing as the important one out of the latency processing pipeline has to be a correlation-only error for the subset they looked at. That by itself certainly doesn't cause the difference. Some of AMD's chips with a positively giant L3 cache have terrible latency, because they added it trying to make up for deficiencies in the RAM<=>L3 interface of the hardware. I look at the max transfer rate between RAM and the caches before I care about the L3 size itself. The cache misses are where the real latency spikes come from, and those are served from RAM. I suspect a good bit of the subjective sound quality differences people hear between computer hardware come from the vast quality difference in the output circuits. If you're taking a USB output from your PC, you already have a latency issue so large your CPU cache one is barely worth noting. There's a lot of variation in USB chipset quality.
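The "misses are served by RAM" point is the classic average memory access time formula. With illustrative numbers (assumed for the sketch, not measured from any particular CPU), even a small L3 miss rate lets the trip to RAM dominate the variation:

```python
def amat(hit_ns=12.0, miss_rate=0.05, ram_ns=80.0):
    """Average memory access time seen at the L3 level (illustrative
    numbers): hits cost hit_ns; the miss fraction pays the full RAM trip."""
    return hit_ns + miss_rate * ram_ns

# a bigger L3 only helps if it actually lowers miss_rate; a faster
# RAM<=>L3 path lowers ram_ns, which scales the entire miss term
```

That is why cache size alone is a poor predictor: a chip with a huge L3 but a slow RAM interface can still have worse worst-case access times than a smaller-cache chip with fast memory.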
  7. These are all advanced network features that you're never likely to use in a single home install. If you had a lot of people using the network you might create a distinct VLAN dedicated for audio, or you could use QoS to limit the rest of the house from gobbling too much bandwidth with Netflix or something. Putting a lot of people on the network seems the opposite of your goals though. It's possible to drop the CPU usage on your hardware by turning on Jumbo Frames, which used to be necessary to get full performance from 1000Mbps links. But everything has to support that and be configured for it, which is a pain even when it's needed. There's very little reason to consider it at all nowadays, and no reason I can think of for it in an audio environment. The latency of this grade of network switch is so low you might as well worry about sunspots as about it impacting your sound quality. The one thing that can ripple out to audible blips is automatic rate negotiation. Your network devices and the switch are going to talk to determine the highest speed they can both handle. Rarely, that negotiating can happen more than it's supposed to, and the switch will bounce between two speeds--"flapping" is the usual word used. I wouldn't worry about that either, not unless you notice the link is dropping regularly or there are a lot of messages about rate changes. If you really want to be paranoid or you run into a problem, what you do is fix the rate of the equipment so it will only run at one speed. Instead of letting it pick 100Mbps or 1000Mbps, you only let it run at the speed you know works. This also used to be a regular thing to tune, but you'd have to be pretty unlucky to see flapping pop up at home here in 2016.
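To put a number on why Jumbo Frames buy nothing in an audio environment, here's the frame rate needed to carry a raw 24/192 stereo stream (assuming roughly 1460 bytes of payload per standard Ethernet frame; the exact figure varies with the transport protocol's headers):

```python
def frames_per_second(bits=24, channels=2, rate=192000, payload=1460):
    """Ethernet frames per second needed to carry a raw PCM stream
    of the given bit depth, channel count, and sample rate."""
    bytes_per_second = (bits // 8) * channels * rate
    return bytes_per_second / payload

standard = frames_per_second()            # under 800 frames/s
jumbo = frames_per_second(payload=8960)   # roughly a sixth of that
```

Either rate is a rounding error for a gigabit switch, which is why the per-packet overhead Jumbo Frames reduce never matters at audio bandwidths.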
  8. The remasters on this hit a wall some time ago, where they're bumping against the age of the tape as much as the transfer. I have most versions of this around but stopped being really fanatical about it because the remasters were not very rewarding. 2003 "30th anniversary" SACD. Sound isn't very exciting and I wouldn't bother trying to chase it down at this point. 2010? SHM-SACD from Japan. This is the one I like best, replacing the MoFi CD. Comparisons are tricky because as it sits on my computer today, the level is slightly lower than the other high-res versions. Acoustic Sounds superhighrez DSD: this seemed similar enough to the 2010-ish SACD that I don't even have a copy on this computer for direct comparison. 2013: "40th anniversary" / "Super Deluxe". Available at HDTracks (what I tested) and I suspect the same audio is on the Blu-Ray disc. This version is a good bit brighter than the earlier remasters. Sounds like they EQ'd the top up to try and reverse some of the tape degradation. I don't like it; too much cymbal sizzle for me after listening to a softer transfer all these years. I'm sure some people prefer that tilt.
  9. All of the versions of this album sound pretty bad. The MoFi version was a little brighter and (probably correspondingly) noisier than the 1986 CD. There was a bit of a hard edge to that Sony/Legacy series remaster; I wasn't a fan. Other people liked it. There is a 24/192 version at HDTracks, and it's really nothing special. I'd just get it or the remaster and be done with it. It's not like there's a good sounding version you can chase down somewhere. I wouldn't miss the MoFi one if I got rid of it.
  10. You have saved my sanity, that's exactly it. If I remove the associated cue file all the problem albums are fine. I've probably had this issue for a while then, just didn't notice because I rarely play regular CD rips through the player. (Yesterday I started playing with Audio Units and wanted to test CD playback too) These tracks are already separated. All my .cue files came from either EAC or XLD ripping into individual FLAC files, but saving the cue file so I might recreate the disc TOC more accurately on a future burn. I listen to prog rock; there's even some indexes I'd want to preserve if I made another copy of some discs. I'd rather not trash the ~500 cue files I have around for this purpose already. I'll look at a few of the problem ones and see if I can discover some pattern to why Audirvana is confused about them.
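Hunting for a pattern across those cue files could start with a minimal parser. This is a sketch against a made-up two-track cue sheet in the one-FILE-per-track style EAC and XLD write; it just collects each TRACK number with its INDEX numbers (INDEX 00 being the pregap worth preserving for a future burn):

```python
import re

# hypothetical cue sheet, invented for the example
CUE = '''\
FILE "01 - First Song.flac" WAVE
  TRACK 01 AUDIO
    INDEX 01 00:00:00
FILE "02 - Second Song.flac" WAVE
  TRACK 02 AUDIO
    INDEX 00 00:00:00
    INDEX 01 00:01:32
'''

def tracks(cue_text):
    """Return (track number, [index numbers]) for each TRACK entry."""
    found = []
    for line in cue_text.splitlines():
        if m := re.match(r"\s*TRACK (\d+) AUDIO", line):
            found.append((int(m.group(1)), []))
        elif m := re.match(r"\s*INDEX (\d+) ", line):
            if found:
                found[-1][1].append(int(m.group(1)))
    return found
```

Running something like this over the ~500 cue files and flagging any whose TRACK numbers aren't contiguous from 1, or that carry extra INDEX entries, might narrow down what confuses Audirvana without having to trash the files.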
  11. Audirvana still works great for all the hi-res sources I play--SACD ISOs are so great--but this week it's driving me crazy with some regular CDs. I'll document my descent into madness for others who might follow. Most CDs are fine. There's 5 directories I've found so far where I import tracks that are labeled wrong; not everything, but I keep finding more. Some tracks are just skipped. Having track 1 labeled as track 2 seems the most common problem. Here's the most dysfunctional one I've found so far as proof I'm not making this up: Track 11 is highlighted, but that's the track details for track 10. Tracks 2, 4, and 10 are missing altogether. Not obvious from this shot: track 3 actually plays track 2; track 5 plays track 4; track 6 plays track 5. Running on Mavericks. Noticed it initially with 2.3.3; applied the update in the thread here to 2.3.3.8 since I was also having the problem with badly rendered titles; still there. Already re-installed, destroyed the database, re-installed again, and only added some specific directories. There's no iTunes integration; duh. Everything still plays with Clementine so I know the files aren't broken. Still hoping to isolate the problem with a second Mac and trying to find earlier versions, but of course this is happening while I'm away too. Any suggestions will help preserve the desk I'm banging my head into.