• JRiver Mac vs JRiver Windows Sound Quality Comparison

    I have been listening to JRiver Media Center on Windows for almost two years and have been a happy customer. JRiver on Windows has been extensively reviewed by Chris.
    Now that an early release of JRiver is available on the Mac, I thought I would take the opportunity to compare the sound quality of the two JRiver music players.
    Similar to how I compared JRiver to JPlay, I am using the following test methods and tools to compare SQ:

    • Using Audacity (or any digital audio editing software) to digitally record the output from JRiver on both Mac and Windows. Then by editing and lining up the track samples, inverting one of the tracks, and mixing them together, we will see what audio signal is left over (i.e. the difference file) and whether it is subjectively audible.
    • Using Audio DiffMaker, purpose-built software for audio differencing tests, to analyze the two recordings; it also produces a difference file that can be listened to and subjectively evaluated.
    • Using Foobar’s ABX Comparator to listen to each recorded track and determine which one sounds different or subjectively better.

    The Audacity recordings of the JRiver music players on both Mac and Windows are included in this article so people can download them, listen to them subjectively, and inspect them objectively. Given that the test software is freeware, I designed the article to follow a step-by-step process so that, if inclined, one can repeat the test procedures and see whether the results are repeatable.









    Test Configuration and Recording Process

    The Windows computer is an Intel 3.30 GHz i5-2500 quad core with 8 GB of RAM running the 64-bit version of Windows 7. The MacBook Pro is an Intel 2.26 GHz Core 2 Duo with 8 GB of RAM running OS X version 10.8.2. On Windows I am using the ASIO version of Audacity, and on the Mac, Audacity version 2.0.3, to record the audio bitstream from JRiver. For a DAC I am using a Lynx Hilo, which, by one objective measure, rates as one of the most transparent A/D and D/A converters on the market today. The Hilo has the capability to patch (sometimes called digital loopback or routing) any input to any output. As confirmed on the Lynx support forum, the audio bitstream goes from JRiver, through the ASIO driver, through the USB cable, into the Hilo, and is then clocked back out of the Hilo, through the USB cable, through the ASIO driver, and into Audacity. I am routing the output of JRiver to the USB Play 1&2 input on the Hilo and patching it to the USB Record 1&2 output, which is the input to Audacity.

    Here is how this looks configured on the Hilo’s touch screen:






    With the Hilo I can simultaneously play audio from one software application (e.g. JRiver) and record the same audio in another application (e.g. Audacity). On Windows, it looks like this:




    On the Mac, it looks like this:




    I am using Tom Petty’s song Refugee, downloaded directly from Tom Petty’s site, which is recorded at 24/96. The producer/engineer provided a note of provenance (PDF) to go with the download, so I feel reasonably comfortable that this is as close to the master as one can get: “We made the FLAC files from high-resolution uncompressed 24-bit 96K master stereo files. When we compared those files to the FLAC’s, the waveforms tested out to be virtually identical.”

    In Audacity, the only change I made was to set the project sample rate to 96 kHz and bit-depth to 24 under the Edit menu->Preferences->Quality. Dither will be discussed later.

    Note that the “bit-perfect” light was on in both versions of JRiver while I was recording, indicating that the output of the player is streaming bit-perfect audio at 24/96 to the DAC. There is nothing else in the signal path, all DSP functions were turned off, and with ASIO, any intermediate audio layers in Windows are bypassed. All levels were set at 0 dBFS and I used the stock USB cable that came with the Hilo.

    Here is what the Windows recording looks like in Audacity:



    Here is what the Mac recording looks like:




    I used Audacity’s Amplify effect to validate that both recordings were recorded at the same level. Note that I did not apply the amplification; this is for viewing only. On my first Windows recording, I accidentally moved the JRiver internal volume control down 0.5 dB, so the levels did not match. I did not find that out until the end and had to re-record the Windows version. With everything set at 0 dBFS in JRiver, Audacity, and the Hilo, on both PC and Mac, the recorded levels should be identical, as depicted above. Use the Amplify window to validate that the recorded levels are the same before moving on to the next step.
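
    For anyone who would rather check the levels with a script instead of the Amplify window, here is a minimal sketch in Python (not part of my procedure; the file names and the numpy and soundfile libraries are my own assumptions) that reports the peak of each capture in dBFS:

        import numpy as np
        import soundfile as sf   # third-party: pip install soundfile

        def peak_dbfs(path):
            data, _ = sf.read(path)            # samples as floats in [-1.0, 1.0]
            peak = np.max(np.abs(data))
            return 20 * np.log10(peak) if peak > 0 else float("-inf")

        # Hypothetical file names for the two 60-second captures.
        print("Windows peak: %6.2f dBFS" % peak_dbfs("jriver_windows_60s.wav"))
        print("Mac peak:     %6.2f dBFS" % peak_dbfs("jriver_mac_60s.wav"))

    If the two peaks differ (for example, by the 0.5 dB volume-control slip mentioned above), it is simpler to re-record than to try to correct the level afterwards.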


    The Editing Process

    I recorded the full length of Refugee on both Mac and Windows using Audacity. First I clicked Record in Audacity and then Play in JRiver. Once the song had played, I clicked Stop in Audacity and saved the project to disk. I copied the Mac Audacity project file (.aup) and data files onto my PC and opened them with the Windows version of Audacity. For waveform sample comparisons, I edited both the Mac and Windows recordings down to roughly 60 seconds and tried to ensure that I cut the start of each track at the same sample. The Windows version is on the left and the Mac version on the right; I have zoomed way in to see each individual sample and placed the selection tool at the same sample point in each track:




    Sixty seconds at 96 kHz is 5,760,000 discrete samples, which is a large enough sample size to compare waveforms. If there is an opportunity for human error, it is in editing the start of each recording so they line up at the individual sample level.

    Resizing the waveform display windows also redraws the data differently and makes it hard to edit precisely. It took me more than a few tries to get it right, and in the end I reverted to having the two editors open side by side, as above. Pick a reference point and count the samples to get them aligned; being off by even one sample will show up in the test.

    Now that I have lined the samples up, I can shift-select everything to the left of the cursor and, using the cut tool, remove those samples:




    Now I can enter 5,760,000 samples in the Selection Start field and shift-select to the end of the recording. Finish by clicking the cut (scissors) tool:



    Now I have exactly 5,760,000 discrete samples to export to disk:




    I followed the same process for the Mac version of the recording.
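
    For anyone who would rather script the alignment and trimming than count samples by eye, here is a rough sketch of one way to do it; this is my own assumption (Python with numpy, scipy, and soundfile, and hypothetical file names), not part of the procedure above. It cross-correlates the two captures to find their relative offset, shifts one of them, and trims both to exactly 5,760,000 samples:

        import numpy as np
        import soundfile as sf                  # pip install soundfile
        from scipy.signal import correlate      # pip install scipy

        win, rate = sf.read("refugee_windows_full.wav")   # hypothetical names,
        mac, _    = sf.read("refugee_mac_full.wav")       # assumes stereo captures

        # Cross-correlate the first ten seconds of the left channel
        # to find the relative offset between the two captures.
        a = win[: 10 * rate, 0]
        b = mac[: 10 * rate, 0]
        lag = int(np.argmax(correlate(a, b, mode="full"))) - (len(b) - 1)

        # A positive lag means the music starts `lag` samples later in the
        # Windows capture, so drop its leading samples; negative means the opposite.
        if lag > 0:
            win = win[lag:]
        elif lag < 0:
            mac = mac[-lag:]

        # Trim both to exactly 60 seconds: 5,760,000 samples at 96 kHz.
        n = 60 * rate
        sf.write("windows_60s.wav", win[:n], rate, subtype="PCM_24")
        sf.write("mac_60s.wav",     mac[:n], rate, subtype="PCM_24")

    The manual side-by-side editing described above accomplishes the same thing; a script simply removes the opportunity for the one-sample error noted earlier.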



    The Comparison Process – Audacity

    Now that the Windows and Mac recordings have each been digitally edited to exactly the same number of samples and, ideally, the same start and end points, I can use this simple procedure to compare the two recorded tracks:

    • Import copies of both files into the same Audacity project.
    • Highlight one of the tracks, and under the Effects menu, select Invert.
    • Now highlight both tracks and under the Tracks menu, select Mix and Render. What’s left will be any difference between the two sets of recorded tracks. Save to disk.
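
    This invert-and-mix step is simply a subtraction, so the procedure above can also be scripted. Here is a sketch (my own illustration, continuing with the hypothetical trimmed files from the earlier snippets), not a replacement for the Audacity steps:

        import numpy as np
        import soundfile as sf

        win, rate = sf.read("windows_60s.wav")
        mac, _    = sf.read("mac_60s.wav")
        assert win.shape == mac.shape, "tracks must be the same length"

        diff = win - mac                 # invert one track and mix = subtract
        peak = np.max(np.abs(diff))
        print("peak of difference: %s" %
              ("silence" if peak == 0 else "%.1f dBFS" % (20 * np.log10(peak))))

        # Save as 32-bit float so the residual is not re-quantized or dithered.
        sf.write("difference.wav", diff, rate, subtype="FLOAT")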


    Here are both tracks loaded into Audacity, the top one is the Windows recorded version and the bottom one is the Mac recorded version:




    Next is highlighting and inverting one of the tracks:




    Finally, choose Mix and Render from the Tracks menu:




    This is the difference:




    No difference. Ah, but you may notice something: while I inverted one track and highlighted both tracks, I did not Mix and Render; I went straight to Plot Spectrum.
    If I Mix and Render and then Plot Spectrum, I get:




    Note the microscopic signal at -144 dB around 48 kHz. I do have dither turned off, as per Audacity’s recommendation for export. However, reading over their lengthy description, there appears to be some opportunity for inaccuracy. Additionally, looking at Audacity’s bit-depth recommendations, I should have left the default recording quality at 32-bit float rather than 24-bit, as -144 dB is the theoretical signal-to-noise limit of a 24-bit digital media file. In the end, it is a moot point, as -144 dB is below our absolute threshold of hearing. What does this mean? The Audacity difference test indicates that any sound quality difference between JRiver Mac and JRiver Windows is inaudible. Even if the measured difference were considerably larger, say -120 dB, it might be barely audible with headphones on, the volume at maximum, and in a very quiet environment; it would be completely masked at regular program levels (e.g. -3 dBFS). If you want to test your own ability to hear masking, try Ethan Winer’s Artifact Audibility Comparisons.
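
    To put a number on that microscopic residual without relying on Audacity’s Plot Spectrum, one could run a quick FFT over the difference file. This is only a sketch (the file name carries over from the earlier snippets; the windowing and scaling are my own choices):

        import numpy as np
        import soundfile as sf

        diff, rate = sf.read("difference.wav")
        mono = diff[:, 0] if diff.ndim > 1 else diff

        w = np.hanning(len(mono))
        spectrum = np.abs(np.fft.rfft(mono * w)) / (w.sum() / 2)   # per-bin amplitude
        freqs = np.fft.rfftfreq(len(mono), d=1.0 / rate)
        db = 20 * np.log10(np.maximum(spectrum, 1e-12))            # floor at -240 dB

        k = int(np.argmax(db))
        print("largest residual component: %.1f dBFS at %.0f Hz" % (db[k], freqs[k]))

    Anything in the -140 dB range, like the 48 kHz component above, is far below audibility.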

    To verify the differencing process, here is a “control” run following the exact same procedure as above, but comparing a file to itself, in this case the Mac recording:



    No difference. Check.



    The Comparison Process – Audio DiffMaker

    Audio DiffMaker is purpose-built software specifically designed to automate what was done manually above, as can be seen in its workflow:




    Furthermore, the differencing algorithms for time alignment and amplitude matching are optimized for this type of testing. The Help file is an excellent resource, as is the AES paper on the subject of difference testing, along with the accompanying PowerPoint slides. I am not going to go into detail as the software is readily available (i.e. free). I have also used the software in a few of my blog posts on CA, which go into more detail about test setups, software usage, and tool issues to work around.

    The process is the same as Audacity’s, except all one needs to do is load the 60 second recorded tracks, click the extract button, and watch the software work for about 10 seconds:



    Rather than trying to explain what is meant by correlation depth, one can read up on it in the DiffMaker links provided earlier in the article. If I take the DiffMaker-generated difference file, open it in Audacity, and take a frequency analysis:




    It is identical to the Mix and Render Audacity screenshot, right down to the decimal place. Note that Audacity opens it as 32-bit float, yet it is a signed 24-bit PCM file; per Audacity’s dither article, a very small error could be introduced. I should also note that the first 180 milliseconds of the difference file have been edited out, as that is the time it takes DiffMaker’s algorithms to find the correlation depth, and they leave their processing artifacts in the file.
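
    DiffMaker’s correlation depth is its own metric, but a rough single-number stand-in is the “null depth”: how far the residual sits below the reference track. Here is a sketch of that idea (this is not DiffMaker’s algorithm, and it reuses the hypothetical files from the earlier snippets):

        import numpy as np
        import soundfile as sf

        ref, _  = sf.read("windows_60s.wav")    # reference track
        diff, _ = sf.read("difference.wav")     # residual after inverting and mixing

        def rms(x):
            return float(np.sqrt(np.mean(np.square(x))))

        residual = rms(diff)
        if residual == 0.0:
            print("null depth: perfect (no residual)")
        else:
            print("null depth: %.1f dB" % (20 * np.log10(rms(ref) / residual)))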

    As with the Audacity test, here is a control measurement comparing a file to itself to verify the process. This time I compared the Windows recording to itself in DiffMaker:




    As can be seen, it compares perfectly, to the maximum correlation depth of 300 dB that DiffMaker is capable of reporting. Opening the control difference file in Audacity:



    No difference. Check.


    Foobar ABX Tests

    I went into this with an expectation bias, knowing from the two previous tests that there are no audible differences. As much as I wanted to hear a difference, both the Mac and Windows versions sound identical to me. I ran several passes in the ABX Comparator, but there is no point in posting any results as I was guessing close to 100% of the time. However, here are the two 60-second recordings of JRiver Mac and JRiver Windows so anyone can compare the files both subjectively and objectively.

    JRiver Mac 60s 33MB

    JRiver Windows 60s 33MB
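
    For anyone running their own ABX passes on these files, the question “was I just guessing?” comes down to a binomial tail probability. A small sketch (the trial counts are just examples):

        from math import comb   # Python 3.8+

        def abx_p_value(correct, trials):
            """Chance of getting at least `correct` of `trials` right by guessing."""
            return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

        # Example: 12 correct out of 16 trials.
        print("p = %.3f" % abx_p_value(12, 16))   # about 0.038

    Results hovering around chance, as mine did, give p-values near 0.5 or higher, i.e. no evidence of an audible difference.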



    Bonus Comparison

    While I had everything set up with JRiver on the Mac, I thought I would try swapping only one thing in the audio signal chain and compare that to the recording I had already made on the Mac. So I swapped out the USB cable, changed nothing else, and made another recording:




    The one on the left is the 6 ft cable that came with the Hilo. The one on the right is a London Drugs special, a 5-meter shielded USB cable for $29.95. According to the Lynx Hilo manual, the longest USB cable that should be used is 15 feet; the one under test here is about 16 feet. Using the same Audacity procedure as before, but comparing the Mac recording with another Mac recording where the only difference is the USB cable:




    No difference. Again, this is by loading the two different recordings, inverting one of the tracks, selecting both sets of tracks, and plotting the frequency analysis.
    If I apply Mix and Render, I get exactly the same result as both the Audacity and DiffMaker versions, with the microscopic -144 dB signal at 48 kHz. And if I run the same test in DiffMaker, I get exactly the same result as the previous DiffMaker test between the Mac and Windows versions. As to why other folks may be hearing a difference between USB cables, one anecdote comes from Ken Pohlmann’s excellent book, "Principles of Digital Audio":




    Audiophiles have sometimes reported hearing differences between different kinds of digital cables. That could be attributed to a D/A converter with a design that is inadequate to recover a uniformly stable clock from the input bitstream. But, a well-designed D/A converter with a stable clock will be immune to variations in the upstream digital signal path, as long as data values themselves are not altered.

    As an aside, if I were to recommend one book on understanding all of the facets of digital audio today, it would be Ken’s book. The first edition appeared in 1985, and the book, now in its sixth edition, spans 28 years of industry knowledge. There is probably no one who knows more about digital audio than Ken Pohlmann, and that knowledge is captured in his book. Highly recommended if you wish to pursue a university-level understanding of how digital audio works.

    Here is the 60 second recording on the Mac using the long USB cable:

    JRiver Mac USB long cable 60s 33MB




    Conclusion

    Based on three different test methods, which I repeated more than a few times, the results indicate there is no measurable or audible sound quality difference between JRiver on the Mac and JRiver on Windows. One could argue that all I did was validate what is already known: that everything is operating to specification. In other words, bit-perfect:

    In audio this means that the digital output from the computer sound card is the same as the digital output from the stored audio file. Unaltered passthrough. The data stream (audio/video) will remain pure and untouched and be fed directly without altering it. Bit-perfect audio is often desired by audiophiles.

    As to the reasons why this is so, if interested, I recommend Ken’s Principles of Digital Audio. Check out the TOC. What I really like is that the sampling theorem is an appendix, and the rest of the 800 pages cover virtually every aspect of digital audio in every industry; digital audio is much more than a sampling theorem. Anyone can use the same (or similar) software tools and this process to validate that the results are repeatable, either with the files supplied or, if the DAC in use can play back and record digitally and independently, starting from scratch. However, I would recommend making the recordings at the maximum resolution (32-bit float in Audacity) to avoid the small math or accuracy discrepancies that may occur at 24-bit resolution.
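
    To see why the 32-bit float recommendation matters, here is one last sketch (my own illustration, not part of the test chain above) showing the error floor left by snapping a full-scale tone onto the 24-bit grid:

        import numpy as np

        rate = 96000
        t = np.arange(rate) / rate
        tone = np.sin(2 * np.pi * 997 * t)           # one second of a 0 dBFS, 997 Hz tone

        scale = 2**23 - 1                            # 24-bit signed full scale
        quantized = np.round(tone * scale) / scale   # snap to the 24-bit grid (no dither)
        error = tone - quantized

        rms_error = np.sqrt(np.mean(error**2))
        print("24-bit quantization error: %.1f dBFS RMS" % (20 * np.log10(rms_error)))
        # Typically around -149 dBFS; the familiar "144 dB" figure is 6.02 dB x 24 bits.
        # Working in 32-bit float keeps intermediate math errors far below either number.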

    In the meantime, enjoy the same JRiver Media Center sound quality whether on PC or Mac.










    About the author


    Mitch “Mitchco” Barnett
    I love music and audio. I grew up with music around me, as my Mom was a piano player (swing) and my Dad was an audiophile (jazz). At that time Heathkit was big, and my Dad and I built several of their audio kits. Electronics was my first career, my hobby was building speakers, amps, preamps, etc., and I still DIY today. I also mixed live sound for a variety of bands, which led to an opportunity to work full-time in a 24-track recording studio. Over 10 years, I recorded, mixed, and sometimes produced over 30 albums, 100 jingles, and several audio-for-video post productions in a number of recording studios in Western Canada. This was during a time when analog was going digital, and I worked in the first 48-track all-digital studio in Canada. Along the way, I partnered with some like-minded audiophile friends and opened an acoustic consulting and manufacturing company. I purchased a TEF acoustics analysis computer, which was a revolution in acoustic measurement as it was the first time sound could be measured in three dimensions. My interest in software development drove me back to university, and I have been designing and developing software ever since.













    Comments (98)
    The Computer Audiophile -
      Hi Gordon - Every version of this article was run by Matt at JRiver before publication to make sure the company had no issues with Mitch's methods and use of an Alpha version of JRMC.

      We have to use the testing methodology available to us, not the methodology we wish were invented. I completely get what you're saying, but this article simply provides a limited set of facts readers can use or set aside.

      The article is just a snippet of objective information not meant to be the final answer on anything.
    Miska -
      To put it simply, this test is a complicated way to prove that both players can do bit-perfect playback...

      That's not going to tell much about how it sounds or how the analog output is going to look like.

      Usually just measuring DAC analog output with spectrum analyzer while playing back dithered 24-bit silence is enough to show differences between two source computers. I'm usually using 500 kHz measurement bandwidth and a 1M point averaged FFT for that and then zoom & pan the output plot to inspect the noise floor.
    Jud -
      Quote Originally Posted by Miska View Post
      Usually just measuring DAC analog output with spectrum analyzer while playing back dithered 24-bit silence is enough to show differences between two source computers. I'm usually using 500 kHz measurement bandwidth and a 1M point averaged FFT for that and then zoom & pan the output plot to inspect the noise floor.
      Putting this together with Gordon's impression that the Mac version of JRiver sounds better to him than the Windows version on Boot Camp on the same computer - Would you happen to have a Mac to test with, Miska? I have no idea whether it would be at all responsible for any audible difference, but I wonder whether your measurement described above would show any difference between the Mac hardware working with OS X drivers, and working with Windows drivers.
    Paul.Raulerson -
      Hi Chris- the conclusion I read in the article was that JRMC sounds exactly the same under MacOS and under Windows. Implied was anyone hearing any difference was just imagining it, with proof of that being the test results and explained by the reference material quoted.

      Many folks disagree with that conclusion, and the implications implied, including me. But I am sure just as many agree with them. This is a very tricky subject, with a history going back to "all amps that measure the same sound the same" type thinking.

      In the meantime, enjoy the same JRiver Media Center sound quality whether on PC or Mac.
      Paul


      Quote Originally Posted by The Computer Audiophile View Post
      Hi Gordon - Every version of this article was run by Matt at JRiver before publication to make sure the company had no issues with Mitch's methods and use of an Alpha version of JRMC.

      We have to use the testing methodology available to us, not the methodology we wish were invented. I completely get what you're saying, but this article simply provides a limited set of facts readers can use or set aside.

      The article is just a snippet of objective information not meant to be the final answer on anything.
    Pale Rider -
      Outstanding work. Thanks for this detailed contribution.
    ted_b -
      Quote Originally Posted by Audio_ELF View Post
      But this thread wasn't about "how it sounds" it was about how they measure.


      Eloise
      The darn (polite enough?) title says "sound quality comparison"! Why are you making my comments out to be some sort of whacked left field perspective? I am not alone here. I am not questioning his efforts or techniques (in fact they are models for other articles), just asking the follow-up..how did they sound on the two platforms!!!

      I could take two Chevy Cleveland 500 engines, built in different shifts for different car mfgers, and put them on a test bench and measure that they have the same horsepower, torque, etc...but I'd think they would perform quite differently if dropped into say, a Corvette vs a Mack truck. That's all I'm asking.
    Audio_ELF -
      Quote Originally Posted by ted_b View Post
      The darn (polite enough?) title says "sound quality comparison"! Why are you making my comments out to be some sort of whacked left field perspective? I am not alone here. I am not questioning his efforts or techniques (in fact they are models for other articles), just asking the follow-up..how did they sound on the two platforms!!!

      I could take two Chevy Cleveland 500 engines, built in different shifts for different car mfgers, and put them on a test bench and measure that they have the same horsepower, torque, etc...but I'd think they would perform quite differently if dropped into say, a Corvette vs a Mack truck. That's all I'm asking.
      Oh well... you can think that if you like...

      To me the article was clearly an objective view (i.e. based on measurements only); yet it's okay for the subjectivists to come in and say that's rubbish. Yet if an objective view is posted on a subjective thread then the objective person is belittled and called disruptive (or similar). Perhaps to Mitchco (and I don't want to put words into his mouth) they do sound the same, as the measurements would indicate...

      I thought the rules were meant to work both ways but obviously not!

      As for the stupid automotive analogies - they wouldn't measure the same with different shifts...

      Eloise
    stevebythebay -
      I always enjoy this type of stuff. Engineers with all the latest test gear seem to think that just because you measure something you can either "prove" differences or not. Put a human being into the mix who sits and listens and ... bingo - where no difference is "measured" -- a difference is heard. And often double blind testing bears this out in a consistent manner. So, that also deflates the notion that if you cannot measure it, it simply doesn't exist. Science would die on the vine if that were true. Gotta' love it...the human ear/brain is capable of much more than merely trying to measure. Ah, and then there's "perfect sound forever!".
    ted_b -
      Quote Originally Posted by Audio_ELF View Post
      As for the stupid automotive analogies - they wouldn't measure the same with different shifts...

      Eloise
      ??Huh? A shift is an 8 hour work schedule. The folks on second shift better darn well build the same spec'd engine or they'll lose their jobs. My point was that two of the same engine (i.e JRIVER) can be built to the same spec for different platforms (OSX or Windows) but once dropped into that platform will perform differently, possibly. I'm allowed to ask...really! I haven't broken any rules nor have I called anyone's (especially Mitch's) ideas stupid or even wrong.

      I'm not sure how I've deserved these belittling comments from you; I always thought you were reasonable.
    mayhem13 -
      Something the subwoofer and speaker DIY and commercial developers do on other forums. They do GTGs or get togethers with different gear for listening evaluations and blind comparisons. Some are hosted by members in their homes, some are hosted by mfgrs or dealers while others are hosted by vendors. They're the highlight of the year for some folks, traveling sometimes hundreds of miles to share their creations. I think the CA fold would be a great place to do these by region, maybe hosted by local Brick and Mortar shops as a way to get living souls into their shops. A little wine or some local micro brews, great gear and a place for CA members to meet face to face as well as with dealers and maybe guest speakers/ designers local to the area.

      New York would be a great place to kick one of these off, so if there's interest, I'll start a separate thread in the General section for feelers.
    The Computer Audiophile -
      I've long wanted to have CA meet-ups much like the head-fi members have been doing for years.
    stevebythebay -
      Quote Originally Posted by The Computer Audiophile View Post
      I've long wanted to have CA meet-ups much like the head-fi members have been doing for years.
      To pull this off we'd really need the info on member's locations, to see if a quorum could be had. And then, where the real interest would be.

      Many high end dealers do sessions with customers. However, these are almost always synced with their retail brand engagement. It's a fine venue, but limited. And those few retailers who are selling into the computer source space rarely venture beyond DACs, or complete (e.g. Sonos) setups.

      It's a bit surprising, though the whole area of ripping to disk is still a bit of a legal issue for the industry, though it goes on nonetheless. We'd be having "clandestine" sessions, it would seem...
    Miska -
      Quote Originally Posted by ted_b View Post
      ??Huh? A shift is an 8 hour work schedule. The folks on second shift better darn well build the same spec'd engine or they'll lose their jobs.
      When you go to analog domain, like the engine is. There are manufacturing tolerances. And even the safely guarded kilogram models have been drifting over the years (some µg). So in the end it's all about measurement resolution.

      When we deal with symbolic presentation like mathematical formulas or replicating integer numerical values we can gain perfect representations. But when you end up with irrational numbers as result of calculations in real life, you have to decide to stop writing out the decimals at some point. How many decimals of pi are enough for you is your decision, but it won't be perfect.

      So checking if you can transfer data correctly in digital domain is of course fun and elementary exercise, but when you want to see if the analog conversion of digital samples is always the same, finding the difference is only about measurement resolution. At least the thermal noise won't have full correlation on two subsequent runs. (two independent sets of thermal noise shouldn't have normalized correlation factor of 1)
    Audio_ELF -
      Sorry, I misunderstood "shift"; I assumed (wrongly) you were saying something about gearboxes (I believe Americans refer to a manual gearbox as a stick shift?). Please ignore my comments about the engine, though car analogies usually seem very limited to me and virtually pointless.

      I'm sorry you feel I was belittling you; I do agree there is a big "ahh yes but" in Mitch's testing; but at the end of the day it appears like you (and several others) were getting away with "yes but how does it sound" type comments when "ahh but how does it measure" comments are ridiculed and put down on threads that start with purely subjective observations.

      Eloise
    Paul.Raulerson -
      I thought Mitch was quite clear that he could not hear any differences when he listened in the article? That's why this is totally un-arguable, even though one may disagree. Mitch both measured, to the best of his not insignificant ability, and then listened.

      That he heard no difference at all is startling to me, but not arguable. He heard what he heard.

      I can see where it is a bit on the edge, but I don't see anyone sneering at him, calling him names, or casting aspersions on his character, which happens more often than not in the reverse scenario. Mostly I see people complimenting him on doing a good job.

      I can see the point you are making though. Slippery slope stuff.


      -Paul


      Quote Originally Posted by Audio_ELF View Post
      Sorry I misunderstood "shift" I assumed (wrongly) you were saying something about gearboxes (I believe American's refer to a manual gearbox as a stick shift?). Please ignore my comments about the engine though car analogies usually seam very limited to me and virtually pointless.

      I'm sorry you feel I was belittling you; I do agree there is a big "ahh yes but" in Mitch's testing; but at the end of the day it appears like you (and several others) were getting away with "yes but how does it sound" type comments when "ahh but how does it measure" comments are ridiculed and put down on threads that start with purely subjective observations.

      Eloise
    ted_b -
      Paul, he only listened (subjectively) to both recordings on one platform, through foobar ABX (which only runs on Windows). If I'm wrong here, then delete all my Mitch questions and Audio_elf arguments. Sorry.
    Paul.Raulerson -
      Had to go back and re-read the article, and yes, you are correct. So we can say that the recorded output from the Mac and the PC, when played back on the PC, sounded the same.

      Not the most reasonable test for saying that the two systems, JRMC on Windows and JRMC on Mac sound the same. Definitely some compelling evidence that they put out the same digital data when recorded though.

      -Paul


      Quote Originally Posted by ted_b View Post
      Paul, he only listened (subjectively) to both recordings on one platform, through foobar ABX (which only runs on Windows). If I'm wrong here, then delete all my Mitch questions and Audio_elf arguments. Sorry.
    mitchco -
      Thanks for everyone's comments. Btw, did folks download and listen to the files on whatever system and hear any differences? I see no comments about that.

      Just to clear up a couple of points. I did indeed listen, casually over headphones, to JRMC on the Mac and then on Windows. Heard no differences. However, it was not a properly setup ABX test as I need some hardware switching to make that work proper. So I did not include in the results.

      Setting up proper, controlled listening tests is difficult; the need to match levels to 0.1 dB is one of many considerations. Another point is the interesting research by James (JJ) Johnston indicating that our short-term hearing memory is accurate for about 1/4 of a second, hence the requirement to be able to switch reasonably rapidly between sources.

      Finally, if the bits are identical, as the results show, and that anyone can replicate (anyone up for the challenge?), at the DAC whether it came from JRMC on the Mac or Windows, then how can there be an audible difference? It can't be from the music players...

      I am looking for a listening/measuring test, which is repeatable and can be replicated by anyone, like I did with the tests above, which demonstrates an audible difference that anyone can hear. Let me know what you think that test might look like...
    acousticsguru -
      Quote Originally Posted by Paul.Raulerson View Post
      I thought Mitch was quite clear that he could not hear any differences when he listened in the article? That's why this is totally un-arguable, even though one may disagree. Mitch both measured to the best of not insignificant ability, and then listened.

      That he heard no difference at all is startling to me, but not arguable. He heard what he heard.

      I can see where it is a bit on the edge, but I don't see anyone sneering at him, calling him names, or casting aspersions on his character, which happens more often than not in the reverse scenario. Mostly I see people complimenting him on doing a good job.

      I can see the point you are making though. Slippery slope stuff.


      -Paul
      Curiously enough, I've an engineer friend who can't seem to hear any difference between any two things until one produces measurements that prove there is a difference - and all of a sudden, he hears the exact same difference everyone else has been pointing out all along. I know this sounds like a story I just made up, but it's not. It's kind of hard to take an otherwise intelligent person seriously when they behave like that. But then, I know a bunch of wine lovers who are well-nigh unable to answer a seemingly simple question such as whether they like the wine that's being poured before its identity and Parker rating have been revealed. Same thing, as in either case seemingly insufficient information yields to seemingly superior information. The emphasis, needless to emphasize, being on "seemingly" either way.

      Greetings from Switzerland, David.
    Paul.Raulerson -
      I'm sorry, but I just cannot see the same value you do with listening to identical files on the same hardware and OS.