Computer Audiophile



About ednaz



  1. ednaz

    Article: Naim Uniti Atom Review

    GRONK! That's the sound us dinosaurs make... I'm also a written review guy. I read exceptionally fast, but I can only watch video at the speed of the video. Life's short enough already...
  2. Interesting set of ideas. I worked on similar ideas around corporate knowledge assets and knowledge management. In a big corporation, any given presentation may be stored thousands if not hundreds of thousands of times. Some of those versions deviate from the original - some minor and harmless, some totally unauthorized and inappropriate. Using asset discovery applications, we found that well over half of corporate storage footprints (can't share actual numbers... project is ongoing) were consumed by those replicated assets. We found that with a "customization veneer" - which stores my personal speaker's notes for a presentation, and records that certain pages should be removed when it's served to me, since I deleted them in my version - we could get even more efficient in storage footprint.

So, I get what you're doing on the back end with a single stored asset and key-based access. A couple of requirements from me, based on having had to think this through in another context. We weren't just trying to shrink centralized storage footprints (storage is cheap... but corporations are cost-cutting focused). A bigger goal was to shrink edge devices - why should I need a 1TB hard drive on my laptop? Why a laptop at all? That was part of an even bigger goal: the ability to use a wide range of endpoint devices. Why should I carry anything other than my mobile phone? Or, I may LOVE having two phones, two laptops, and a tablet. (Yes, we found several people who liked living that way.) So, think about a variety of endpoints. I should be able to get my music to and from any device.

I share this part of the goals because it's turned out to be a hard one for us in today's world. Network quality is not evenly distributed, nor are networks ubiquitous enough. Closer to your project: I listen to Radio Paradise on my mobile by caching, since I can't count on being able to stream while I'm in my car, on an airplane, or in the subway. My concern with the model of Tonal is that an awful lot of my listening is done in offline mode, not by choice. Key requirement for me: stream AND store.

As to the customization veneer - I've got experience with Roon now, and one of the things I dislike is metadata standardization that doesn't match my mental model for organizing music. "World music" is a meaninglessly vague high-level category to me, and assigning an artist to pop, rock, alternative rock, R&B, blues, etc. in a standard way sounds nice, but there are New Orleans artists who could be put in every one of those... yet in my mental model aren't any of them - they're in one of four or five categories of New Orleans music. I divide "classical" into six categories. My metadata categorization when I use JRiver is very personal (maybe even quirky), but it allows me to quickly find what I want to listen to, and quickly assemble playlists. So, I want to be able to selectively override, or add my own layer to, the metadata.

A thought about back-end security and data integrity, which people (rightfully) brought up. The approach we landed on was secure slice object storage. Extremely efficient (no replicated copies required), and insanely resilient, because it stores object slices across an array of clouds, internal or external or both. The minimum number of disks/servers is quite high for your current work, but at scale I think you'd be in a "mid-size" environment. Unfortunately my two Mac laptops are both in repair for the second time each... but once I get one of them back, I'll give Tonal a try.
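The "secure slice object storage" idea above can be illustrated with the simplest possible erasure scheme: k data slices plus one XOR parity slice, so any single lost slice (one cloud or disk down) is rebuilt from the survivors without ever storing a full replica. This is a minimal sketch under that assumption; real systems use stronger erasure codes (Reed-Solomon and friends) that tolerate multiple simultaneous failures, and nothing here reflects any actual product's design:

```python
import functools
import operator

def make_slices(data: bytes, k: int) -> list:
    """Split data into k equal-length slices plus one XOR parity slice."""
    pad = (-len(data)) % k          # zero-pad so the data divides evenly
    padded = data + b"\x00" * pad
    size = len(padded) // k
    slices = [padded[i * size:(i + 1) * size] for i in range(k)]
    # Parity byte j is the XOR of byte j across all k data slices.
    parity = bytes(functools.reduce(operator.xor, col) for col in zip(*slices))
    return slices + [parity]

def recover_slice(slices: list) -> list:
    """Rebuild the single missing slice (marked None) by XOR-ing survivors."""
    missing = slices.index(None)
    survivors = [s for s in slices if s is not None]
    # XOR of all k+1 slices is zero byte-wise, so XOR of any k of them
    # reproduces the missing one (whether it was data or parity).
    rebuilt = bytes(functools.reduce(operator.xor, col) for col in zip(*survivors))
    slices[missing] = rebuilt
    return slices
```

Each slice can then live on a different cloud; losing any one of the k+1 stores costs nothing, at a storage overhead of 1/k instead of the 2x or 3x that full replication would need.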
  3. But the real question is: how do they SOUND?
  4. Exactly. The whole HDR trend in photography is the visual equivalent of raising the pitch and volume and emphasizing the midrange in audio. But, all of that done subtly is there in every professionally printed image you see, the same way that production of music tweaks sound to be more appealing. Even guys like Jay Maisel, who shoots JPG and claims he doesn't do any post processing... well, HE doesn't do his printing, and that's where it happens. Other than bootleg concert tapes, what we hear can be very much the result of artful post processing.
  5. You do realize that most studio albums where some performers were in soundproof booths have reverb added? I've heard the raw tapes (a big part of my photography was for jazz and blues musicians for CDs and PR shots) in the studio. "Dry" sax or trumpet doesn't sound pretty. I've watched the sound engineer add slightly different reverb to different instruments because just adding it overall sounds artificial. And that's just one of the normal things done in the recording and production process. I think the only place you'll actually get raw music is in the performance itself. And, most of what's done in production is to make it feel more real and more alive. Psychoacoustics is a real thing.
  6. I didn't invent this. It's standard practice among fine art photography printers, like Duggal Imaging in NYC and Nash Editions in CA. Has been for years (I apprenticed for a couple months at a couple of printers back in 2001 just to learn technique, and those guys had been doing it since professional digital cameras were 3 megapixels.) The whole process of sharpening is completely about adding artifacts, and that's been done ever since there was photography and a desire to create the impression of sharpness. The brain sees things, not the eyes. Every one of those techniques is about changing how the brain perceives things. The same is true about sound - the ears capture but the brain hears. Hence psychoacoustics.
  7. I'm a process guy. I can get better performance out of people than I could ever give myself. There's a whole profession of that. It also helps to be good at exotic math. Your last paragraph is exactly my point when I compared this to photography. I make photographs feel more real, and more natural, by adding distortions that aren't there in the original image. Some of it is adding artifacts and noise. Some of it is excluding information. The image is qualitatively improved by quantitatively degrading it. When I read comments from people who like MQA and think it improves things, and then I see the data you presented, that's what comes to mind. Your examples (or maybe it was in one of the articles linked) - a DAC that takes the pitch of A up a bit higher (welcome to the 1960s pitch wars), files that are a bit louder, noise showing up quantitatively that isn't there in a non-MQA file, and the filter being limited to post-ringing - all felt to me like engineering for psychoacoustics and not accuracy. That's why the McGill study will be interesting.
  8. I certainly wouldn't go out of my way for MQA. The little bit of testing I tried (low patience level) led me to think it did seem to make a difference with some music - at least in what I heard. Music where I thought, OK, that's a little better, was music with a lot of spatial information, like live albums where you can hear the venue, music with a lot of dynamic range, combos and not orchestras. All that on headphones. But not a lot, and not consistently. It made me think that part of what it does is upsampling with a little tweak of spatial reverb. Interesting that the filter post-rings. The difference I thought I heard was less than what a slight upgrade in DAC would deliver - if I had an incremental dollar, I'd buy a dollar of incremental DAC performance.
  9. "Watch your step." What kind of comment is that? Lovely. Not an insult at all, but a statement of fact. Most people don't do cross-analogy thinking very well. After 35 years of leading teams in neuroscience, computer code reverse engineering, photography, drug discovery, entity analytics, cryptography, human systems analysis, and more, I think I have a good perspective on that. There's a well-documented phenomenon of "expert blind spots" - the deeper an expert someone becomes, the more certain they are about every aspect of their discipline. I'm a professional "dumb question" asker. Fun way to make a living, never bored. Particularly for a musician and theater major. People claiming they can detect JPG versus other file captures remind me of people who say they can always tell lossy compressed music files from non-compressed. I can with a lot of music, but when you get Alabama Shakes running a DR of 3... I can't any more. I think that may be part of what's going on in the MQA arguments, and it's a variable I can't remember being explored. People say that sometimes they sound better, sometimes just not worse, sometimes worse. Wouldn't it be interesting if, instead of challenging their hearing or expertise, someone tried to understand it? It took a while before someone noticed that the volume levels of files, and pitch, influenced how people heard them. I've long wondered if DR=3 versus DR=15 might be part of that. Alabama Shakes at high-bitrate MP3 doesn't sound much different to me than 24/96. Are people arguing about MQA missing a GIGO problem?
  11. All that scientific fact about moving images (I was talking about stills) ignores what I see about MQA. People with great ears (I was a union card holding musician until I was 30) talk about how MQA has more reality in the sense of environment. Which, given the de-blurring techniques, and noise feedback... makes a lot of sense. Detecting distortions... all those image print techniques are based on taking advantage of our perceptions. I think MQA takes advantage of our perceptions. Not reality. Brains hear. They function at a pretty damn low sample rate. Incidentally, I'm not talking about JPEG. Not even 100% JPG (although if you're willing to put up a big bunch of money I'll let you try to prove to me that you can ID JPG distortions.) I'm talking about raw images, or 16 bit TIFFS, in color spaces way beyond Adobe RGB. Really, assumptions make... you finish it. I'm also not talking about Joe from the Street looking at images, I'm talking about high name recognition photographers. BTW, I've spent my last few years building real time environments for visualizing brain activity in multiple types of visualization (fPET, fMRI, ERP) technologies overlaid... We may sample sound frequently but we don't use it. Your brain's activity sets early in a listening (or viewing) session and rolls it forward. Brains are lazy. Like humans. And I suppose I'm assuming an ability to grasp analogies. I could well be wrong. Most of my work is based on cross domain analogies, but not everyone can do that.
  12. Watching the arguments over what's kept versus thrown away, what's real and what's invented, is it de-blurring or blurring, is it just upsampling, wait is that noise, brings to mind something from another domain - photography. When printing digital photographs at display sizes - 16x20 inches, 20x30 inches and larger - professional printers, the type that would print images for a gallery or museum show, do a couple of tricks to every image, just before printing. (I learned these working for a famous NYC fine art printer.)

First, they apply an unsharp mask to increase the apparent sharpness (de-blurring), which paradoxically works by applying a mildly blurred image as a mask onto the original image. De-blurring by adding a mask of blurring. It raises the apparent sharpness of the image. Done well, it's not noticeable. Done poorly, you get visible halos in the image. Note that even done well, there are halos - the unsharp mask absolutely makes them, but they're below your ability to see them - a pixel or two wide. (I learned to do this in film days. Much easier in digital.)

Second, they add noise to the image. Everyone evaluates digital sensors on their ability to produce an image free of digital noise... but completely noise-free images look odd. In large areas with no detail - sky, a car fender, still water - they look artificial and plastic. The printer uses one of a number of techniques to generate digital noise that's similar to film grain, and blends it into the image. The size and frequencies of the noise are based on the size of the final print. Again, the goal is to have it be there and effective but not noticeable. (When I show people prints where I've done this, they have a hard time detecting it, even after being told what to look for.) That added noise does three things. It makes the image seem more real, and less digital. It reduces the visibility of actual digital noise from the sensor. And it also increases the apparent sharpness of the image.
In photography - which is about capturing the most accurate renditions of light and color with a recording device and then reproducing them for viewing - adding information that was never there to begin with increases the perception of it being a more accurate and sharply rendered image of the real world. I imagine that the same types of tricks, applied to audio files, may improve the apparent accuracy and crispness of the rendering of recorded sounds. After all, we see and hear with our brains, not our eyes and ears.
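The two printing tricks described above translate directly into a few lines of code. This is a minimal sketch under stated assumptions (a grayscale image as a float array in [0, 1], a box blur standing in for the Gaussian blur a real tool uses, plain Gaussian noise standing in for shaped film grain), not any printer's actual workflow:

```python
import numpy as np

def box_blur(img: np.ndarray, radius: int = 1) -> np.ndarray:
    # Separable box blur; a stand-in for a proper Gaussian blur.
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, out)

def unsharp_mask(img: np.ndarray, amount: float = 0.6, radius: int = 1) -> np.ndarray:
    # "De-blur by adding blur": out = img + amount * (img - blurred).
    # The (img - blurred) term peaks at edges, which is where the halos
    # come from - a pixel or two wide, ideally below visibility.
    return np.clip(img + amount * (img - box_blur(img, radius)), 0.0, 1.0)

def add_grain(img: np.ndarray, sigma: float = 0.02, seed: int = 0) -> np.ndarray:
    # Blend in mild noise; a printer would shape it like film grain and
    # scale it to the final print size.
    rng = np.random.default_rng(seed)
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)
```

Run on a soft edge, the mask pushes the bright side slightly brighter and the dark side slightly darker - exactly the halo pair described above, and exactly why overdoing `amount` makes the halos visible.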
  13. ednaz

    Article: Schiit Audio Reference System Review Part 2

    Good review. The ring of truth comes in how you looked at the performance as a system. In any technology, a system isn't just the sum of its parts; it's some kind of function of them. I've heard really inexpensive systems - inexpensive end to end - that sounded better than some poorly thought through (or poorly matched) really expensive systems.

I've experienced this "systems" issue myself. I've got a set of Gradient Revolution speakers that blows away everyone who hears them. Then my Krell integrated died, and I got an Anthem integrated, similarly rated on power and other characteristics to the old Krell, to hold me over. I was SO disappointed in the sound. Flat, lifeless. Was it the Anthem integrated? I tried it with other speakers... it sounded wonderful with the other three sets of speakers I tried it with - two much cheaper than the Gradients, one significantly more expensive. I ended up moving a Peachtree integrated to the room with the Gradients. It's much less expensive than the Krell, lower power than you'd usually match up with them, but I think it sounds way better. BTW, that Peachtree integrated was unimpressive with two other sets of speakers.

I've also experienced cable differences. The GoldenEar Reference in my AV system hated my ultra-high-end cables that had been in that system for a long time, but had a love fest with some middling ones. Those ultra-high-end cables got transferred to the Peachtree/Gradient system, and have found a home. For some reason, they match beautifully, and raised the whole system. So the speaker differences, cable differences? If you had found all the swaps to be linear in performance based simply on price and individual component performance, I'd have been skeptical. I can't explain all of it. But I sure can hear it.
  14. ednaz

    Article: Readers Choice Awards 2017

    On the cost of Roon... I admit that was one issue for me - one among many initially, and the only one left when I was listening to the free trial and thinking, damn this is good. My extremely significant other leaned hard on me about that by running me through other things we "subscribe" to... Adobe takes $20 a month out of my pocket for use of their photo editing software. I pay another imaging software vendor about $140 a year. LinkedIn hits me each month. Netflix takes their monthly bite, Amazon Prime their annual bite. My Roku is full of ways to hand over $10-20 a month to someone. I send money to NPR, PBS, and Radio Paradise a couple of times a year. Then there's the backup service for my music library, and another for my phone. When she then started to pull up the bank accounts to look at what I spend on HD downloads... I ran screaming from the room with my fingers in my ears... and bought Roon for the year, along with Tidal. If that combo cuts my HD downloads in half, it's a hell of a deal. I'm still keeping JRiver, and keeping it current. It's just so convenient for my laptop traveling (four different laptops), and it's my fallback if some day I have to choose between Roon and dog food for dinner.
  15. ednaz

    Article: Readers Choice Awards 2017

    I'm completely with you on that. I tried Roon twice before - once when they were brand new, once a year or so later. Underwhelmed, but more than that, annoyed by this or that weirdness in their implementation, and the sound quality... well, JRiver just trounced them. At least to my ears. I signed up for a trial again a couple of weeks ago. Holy moly. The implementation is smooth and consistent. The UI is quite good, and consistent enough across devices to make it easy to use. One thing that floored me: I saw that they did Logitech support. OK, cool, let's see. It blew me away when I wandered past my Transporter and saw the Roon logo on the LCD display. They didn't just accommodate the Logitech legacy devices, they embraced and integrated them. Those Transporters (I have two, plus one Touch) sound better than they did with the Logitech server software. And then the sound quality. I wasn't the first to weigh in. The first time I fired up the system with Roon with family here, I played some music that everyone likes (tough in families with a wide age range). EVERYONE said, wait... we've heard this before, but why does it sound so different? When I said "Different?", every one of them said "Better." Comparing it to JRiver on multiple output devices - it's better on most, equal on one or two. It's the product of the year here... a no-brainer buy. Impressed.