
    JRiver Mac vs JRiver Windows Sound Quality Comparison

    I have been listening to JRiver Media Center on Windows for almost two years and have been a happy customer. JRiver on Windows has been extensively reviewed by Chris.

    Now that an early release of JRiver is available on the Mac, I thought I would take the opportunity to compare the sound quality between the two JRiver music players.

    Similar to how I compared JRiver to JPlay, I am using the following test methods and tools to compare sound quality:

     

    • Using Audacity (or any digital audio editing software) to digitally record the output from JRiver on both Mac and Windows. Then by editing and lining up the track samples, inverting one of the tracks, and mixing them together, we will see what audio signal is left over (i.e. the difference file) and whether it is subjectively audible.
    • Using Audio DiffMaker, purpose-built software for audio differencing tests, to analyze the two recordings; it also produces a difference file that can be listened to and subjectively evaluated.
    • Using Foobar’s ABX Comparator to listen to each recorded track and determine which one sounds different or subjectively better.
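The invert-and-mix null test in the first bullet can be sketched numerically. This is a minimal illustration with synthetic, hypothetical tracks (not the article's recordings), assuming samples normalized to ±1.0 full scale:

```python
import numpy as np

def null_test_peak_db(a: np.ndarray, b: np.ndarray) -> float:
    """Invert one track and mix it with the other (i.e. subtract),
    then report the peak of the residual in dBFS.
    Perfect cancellation returns -inf."""
    residual = a - b  # inverting b and summing is the same as subtracting
    peak = float(np.max(np.abs(residual)))
    return float("-inf") if peak == 0.0 else 20 * np.log10(peak)

# Two hypothetical captures of the same 440 Hz tone at 96 kHz
t = np.arange(96000) / 96000.0
capture_windows = np.sin(2 * np.pi * 440 * t)
capture_mac = capture_windows.copy()     # bit-identical capture

print(null_test_peak_db(capture_windows, capture_mac))  # -inf (perfect null)

# One sample off by a single 24-bit step shows up at about -144 dBFS
capture_mac[0] += 2 ** -24
print(round(null_test_peak_db(capture_windows, capture_mac), 1))  # -144.5
```

Subtracting one track from the other is mathematically identical to inverting it and mixing; a residual peaking near -144.5 dBFS corresponds to an error of one 24-bit quantization step.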

    The Audacity recordings of the JRiver music players on both Mac and Windows are included in this article so people can download them, listen subjectively, and inspect objectively. Given that the test software is freeware, I designed the article to follow a step-by-step process, so that, if inclined, one can repeat the test procedures and see if the results are repeatable.

     


     

    opening.png

     


     

     

     

    Test Configuration and Recording Process

     

    The Windows computer is an Intel 3.30 GHz i5-2500 quad core with 8 GB of RAM, running the 64-bit version of Windows 7. The MacBook Pro is an Intel 2.26 GHz Core 2 Duo with 8 GB of RAM, running OS X version 10.8.2. On Windows I am using the ASIO build of Audacity, and on the Mac, version 2.0.3, to record the audio bitstream from JRiver. For a DAC, I am using a Lynx Hilo, which by one objective measure rates as one of the most transparent A/D, D/A converters on the market today. The Hilo has the capability to patch (sometimes called digital loopback or routing) any input to any output. As confirmed on the Lynx support forum, the audio bitstream goes from JRiver, through the ASIO driver, through the USB cable, into the Hilo, and is then clocked back out of the Hilo, through the USB cable, through the ASIO driver, and into Audacity. I am routing the output of JRiver to input USB Play 1&2 on the Hilo and patching it to output USB Record 1&2, which is the input to Audacity.

     

    Here is how it looks configured on the Hilo’s touch screen:

     

    image2.png

     

     

     

     

    With the Hilo I can simultaneously play audio from one software application (e.g. JRiver) and record the same audio in another application (e.g. Audacity). On Windows, it looks like this:

     

    image3.png

     

     

    On the Mac, it looks like this:

     

    image4.png

     

     

    I am using Tom Petty’s song Refugee, which I downloaded directly from Tom Petty’s site and which was recorded at 24/96. The producer/engineer provided a note of provenance (PDF) to go with the download, so I feel reasonably comfortable that this is as close to the master as one can get: “We made the FLAC files from high-resolution uncompressed 24-bit 96K master stereo files. When we compared those files to the FLAC’s, the waveforms tested out to be virtually identical.”

     

    In Audacity, the only change I made was to set the project sample rate to 96 kHz and bit-depth to 24 under the Edit menu->Preferences->Quality. Dither will be discussed later.

     

    Note that the “bit-perfect” light is on in both versions of JRiver while I was recording, indicating that the output of the player is streaming bit-perfect audio at 24/96 to the DAC. There is nothing else in the signal path, all DSP functions were turned off, and with ASIO, any intermediate audio layers in Windows are bypassed. All levels were set at 0 dBFS, and I used the stock USB cable that came with the Hilo.

     

    Here is what the Windows recording looks like in Audacity:

     

    image6.jpg

     

    Here is what the Mac recording looks like:

     

    image7.png

     

     

    I used Audacity’s Amplify effect to validate that both recordings were captured at the same level. Note that I did not apply the amplification; this is for viewing only. On my first Windows recording, I accidentally moved the JRiver internal volume control down -0.5 dB, so the levels did not match. I did not find that out until the end and had to re-record the Windows version. With everything set at 0 dBFS in JRiver, Audacity, and the Hilo, on both PC and Mac, the recorded levels should be exact, as depicted above. Use the Amplify window to validate that the recorded levels are the same before moving on to the next step.
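The check that the Amplify dialog performs can be approximated in a few lines. This is a hypothetical sketch of the idea, not Audacity's actual code, again assuming samples normalized to ±1.0 full scale:

```python
import numpy as np

def peak_dbfs(x: np.ndarray) -> float:
    """Peak sample level in dBFS. Audacity's Amplify dialog proposes
    a gain of -peak_dbfs(x) dB to bring the track up to 0 dBFS."""
    peak = float(np.max(np.abs(x)))
    return float("-inf") if peak == 0.0 else 20 * np.log10(peak)

# Two hypothetical captures: both should peak at the same level
track_a = np.array([0.5, -0.25, 0.1])
track_b = np.array([0.5, -0.5, 0.25])

print(round(peak_dbfs(track_a), 2))  # -6.02
print(round(peak_dbfs(track_b), 2))  # -6.02  -> levels match, safe to difference
```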

     

     

    The Editing Process

     

    I recorded the full length of Refugee on both Mac and Windows using Audacity. First I clicked Record in Audacity and then Play in JRiver. Once the song had played, I clicked Stop in Audacity and saved the project to disk. I copied the Mac Audacity project file (.aup) and data files onto my PC and opened them with the Windows version of Audacity. For waveform sample comparisons, I edited both the Mac and Windows recordings to roughly 60 seconds and tried to ensure that I edited the start of each track at the same sample. Windows version on the left, Mac version on the right; I have zoomed way in to see each individual sample, placing the selection tool at the same sample point in each track:

     

    image8.jpg

     

     

    5,760,000 discrete samples is a good enough sample size to compare waveforms. If there is an opportunity for human error, it is in editing the start of each recording so they line up at the individual sample level.

     

    Resizing the waveform display windows also redraws the data differently and makes it hard to edit properly. It took me more than a few tries to get it right, and in the end I reverted to having the two editors open side by side, as above. Pick a reference point and count the samples to get them aligned. Being even one sample off will show up in the test.
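Counting samples by eye is error-prone. As a cross-check, cross-correlation can estimate the offset between two captures automatically; here is a sketch with hypothetical data (a short burst of noise standing in for program material):

```python
import numpy as np

def alignment_offset(ref: np.ndarray, rec: np.ndarray) -> int:
    """Return how many samples `rec` lags `ref`, found as the lag
    that maximizes the cross-correlation of the two captures."""
    corr = np.correlate(rec, ref, mode="full")
    return int(np.argmax(corr)) - (len(ref) - 1)

rng = np.random.default_rng(0)
program = rng.standard_normal(4800)            # 50 ms of audio at 96 kHz
late = np.concatenate([np.zeros(7), program])  # capture started 7 samples late

print(alignment_offset(program, late))  # 7
```

A nonzero result tells you exactly how many samples to trim from the start of one recording before differencing.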

     

    Now that I have lined the samples up, I can shift select everything to the left of the cursor, and using the cut tool, remove the samples:

     

    image9.jpg

     

     

    Now I can enter 5,760,000 samples in the Selection Start field and shift-select to the end of the recording. Finish by clicking on the cut (scissors) tool:

     

    image10.jpg

     

    Now I have exactly 5,760,000 discrete samples to export to disk:

     

    image11.jpg

     

     

    I followed the same process for the Mac version of the recording.

     

     

     

    The Comparison Process – Audacity

     

    Now that the Windows and Mac recordings have been digitally edited to the exact same number of samples, and hopefully the same start and end points, I can use this simple procedure to compare the two recorded tracks:

     

    • Import copies of both files into the same Audacity project.
    • Highlight one of the tracks, and under the Effects menu, select Invert.
    • Now highlight both tracks and under the Tracks menu, select Mix and Render. What’s left will be any difference between the two sets of recorded tracks. Save to disk.

     

    Here are both tracks loaded into Audacity, the top one is the Windows recorded version and the bottom one is the Mac recorded version:

     

    image12.jpg

     

     

    Next is highlighting and inverting one of the tracks:

     

    image13.jpg

     

     

    Finally, choose Mix and Render from the Tracks menu:

     

    image14.jpg

     

     

    This is the difference:

     

    image15.png

     

     

    No difference. Ah, but you may notice something: while I inverted one track and highlighted both tracks, I did not Mix and Render; I went straight to Plot Spectrum.

    If I mix and render, then plot spectrum, I get:

     

    image16.png

     

     

    Note the microscopic signal at -144 dB at 48 kHz. I do have dither turned off, as per Audacity’s recommendation for export. However, reading over their lengthy description, there appears to be room for inaccuracies. Additionally, looking at Audacity’s bit-depth recommendations, I should have left the default recording quality at 32-bit float rather than 24-bit, as -144 dB is the theoretical signal-to-noise limit for a 24-bit digital media file. In the end, it is a moot point, as -144 dB is below our absolute threshold of hearing. What does this mean? The Audacity difference test indicates that any sound quality difference between JRiver Mac and JRiver Windows is inaudible. Even if the measured difference were considerably greater, say -120 dB, it might be barely audible with headphones on, the volume at maximum, and in a very quiet environment. However, it would be completely masked at regular program levels (e.g. -3 dBFS). If one wants to test one’s ability to hear masking, try Ethan Winer’s Artifact Audibility Comparisons.
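The -144 dB figure follows directly from the arithmetic of quantization, roughly 6.02 dB of dynamic range per bit:

```python
import math

def quantization_floor_db(bits: int) -> float:
    """Approximate noise floor of N-bit PCM relative to full scale:
    20*log10(2^-N), i.e. about 6.02 dB per bit (the familiar
    +1.76 dB sine-wave correction term is omitted here)."""
    return 20 * math.log10(2.0 ** -bits)

print(round(quantization_floor_db(16), 1))  # -96.3  (CD audio)
print(round(quantization_floor_db(24), 1))  # -144.5 (24-bit, the floor measured here)
```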

     

    To verify the differencing process, here is a “control” sample of following the exact same procedure as above, but comparing the same file, in this case, the Mac file to itself:

     

    image17.png

     

    No difference. Check.

     

     

     

    The Comparison Process – Audio DiffMaker

     

    Audio DiffMaker is purpose-built software specifically designed to automate what was done manually above, as can be seen in its workflow:

     

    image18.gif

     

     

    Furthermore, the differencing algorithms for time alignment and amplitude matching are optimized for this type of testing. The Help file is an excellent resource, as is the AES paper on difference testing, along with its PowerPoint slides. I am not going to go into detail, as the software is readily available (i.e. free). I have also used the software in a few of my blog posts on CA, which go into more detail about test setups, software usage, and tool issues to work around.

     

    The process is the same as Audacity’s, except all one needs to do is load the 60 second recorded tracks, click the extract button, and watch the software work for about 10 seconds:

     

    image19.png

     

    Rather than trying to explain what is meant by correlation depth, one can read up on it in the DiffMaker links provided earlier in the article. Here is what happens if I take the DiffMaker-generated difference file, open it in Audacity, and run a frequency analysis:

     

    image20.png

     

     

    It is identical to the Mix and Render version in the Audacity screenshot, right down to the decimal place. Note that Audacity opens it as 32-bit float, yet it is a signed 24-bit PCM file. From Audacity’s dither article, there could be a very small error introduced. I should note that the first 180 milliseconds of the difference file have been edited out, as that is the time it takes DiffMaker’s algorithms to find the correlation depth, and it leaves its processing in the file.

     

    As before with the Audacity test, here is the control measurement of using the same file to compare to verify the process. This time I compared the Windows recorded version to itself in DiffMaker:

     

    image21.jpg

     

     

    As can be seen, it compares perfectly, at the maximum correlation depth of 300 dB that DiffMaker is capable of reporting. Opening the control difference file in Audacity:

     

    image22.png

     

    No difference. Check.
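DiffMaker's correlation depth is defined precisely in its Help file; as a rough, hypothetical stand-in (an assumption, not DiffMaker's actual algorithm), the idea can be expressed as the RMS ratio of program material to residual, in dB:

```python
import numpy as np

def residual_depth_db(reference: np.ndarray, difference: np.ndarray) -> float:
    """A rough analogue of DiffMaker's correlation depth (assumption:
    not its exact algorithm): the RMS of the program material over the
    RMS of the difference file, expressed in dB."""
    rms = lambda x: float(np.sqrt(np.mean(np.square(x))))
    r = rms(difference)
    return float("inf") if r == 0.0 else 20 * np.log10(rms(reference) / r)

t = np.arange(96000) / 96000.0
program = np.sin(2 * np.pi * 440 * t)

print(residual_depth_db(program, np.zeros_like(program)))    # inf: perfect null
print(round(residual_depth_db(program, program * 1e-6), 1))  # 120.0
```

The larger the number, the further the residual sits below the music; an identical pair of files gives an unbounded (infinite) depth, which DiffMaker caps at its 300 dB maximum.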

     

     

    Foobar ABX Tests

     

    I went into this with an expectation bias knowing there are no audible differences as verified by the two previous tests. As much as I wanted to hear a difference, both the Mac and Windows versions sound identical to me. I ran several passes in the ABX Comparator, but there is no point in posting any results as I was guessing close to 100% of the time. However, here are the two 60 second recordings of JRiver Mac and JRiver Windows so anyone can compare the files both subjectively and objectively.

     

    JRiver Mac 60s 33MB

     

    JRiver Windows 60s 33MB

     

     

     

    Bonus Comparison

     

    While I had everything set up with JRiver on the Mac, I thought I would try swapping just one thing in the audio signal chain and compare that to the recording I had already made on the Mac. So I swapped out the USB cable, changed nothing else, and made another recording:

     

    image23.png

     

     

    The one on the left is the 6 ft cable that came with the Hilo. The one on the right is a London Drugs special: a 5 meter shielded USB cable for $29.95. According to the Lynx Hilo manual, the longest USB cable that should be used is 15 feet; the one under test here is over 16 feet. Using the same Audacity procedure as before, I compared the earlier Mac recording with another Mac recording whose only difference is the USB cable:

     

    image24.png

     

     

    No difference. Again, this is by loading the two different recordings, inverting one of the tracks, selecting both sets of tracks, and plotting the frequency analysis.

    If I apply the Mix and Render, I get exactly the same result as both the Audacity and DiffMaker versions, with the microscopic -144 dB signal at 48 kHz. And if I run the same test in DiffMaker, I get exactly the same result as the previous DiffMaker test between the Mac and Windows versions. As to why other folks possibly hear a difference between USB cables, one clue comes from Ken Pohlmann’s excellent book, “Principles of Digital Audio”:

     

    image25.jpg

     

     

    Audiophiles have sometimes reported hearing differences between different kinds of digital cables. That could be attributed to a D/A converter with a design that is inadequate to recover a uniformly stable clock from the input bitstream. But, a well-designed D/A converter with a stable clock will be immune to variations in the upstream digital signal path, as long as data values themselves are not altered.

     

    As an aside, if I were to recommend one book on understanding all of the facets of digital audio today, it would be Ken’s. The first edition appeared in 1985, and now in its sixth edition, it spans 28 years of industry knowledge. There is probably no one who knows more about digital audio than Ken Pohlmann, and that knowledge is captured in his book. Highly recommended if you wish to pursue a university-level understanding of how digital audio works.

     

    Here is the 60 second recording on the Mac using the long USB cable:

     

    JRiver Mac USB long cable 60s 33MB

     

     

     

     

    Conclusion

     

    Based on three different test methods, each repeated more than a few times, the results indicate there is no measurable or audible sound quality difference between JRiver on the Mac and JRiver on Windows. One could argue that all I did was validate what is already known: that everything is operating to specification. In other words, bit-perfect:

     

    In audio this means that the digital output from the computer sound card is the same as the digital output from the stored audio file. Unaltered passthrough. The data stream (audio/video) will remain pure and untouched and be fed directly without altering it. Bit-perfect audio is often desired by audiophiles.

     

    As to the reasons why, if interested, I recommend Ken’s Principles of Digital Audio book. Check out the TOC. What I really like is that the sampling theorem is an appendix, while the rest of the 800 pages cover literally every aspect of digital audio in every industry. Digital audio is much more than a sampling theorem. If interested, anyone can use the same (or similar) software tools and this process to validate that the results are repeatable. One could use the files supplied, or, if the DAC used supports the capability to play back and record digitally and independently, one could start from scratch and validate that the results are repeatable. However, I would recommend making the recordings at the maximum resolution (32-bit float in Audacity) to avoid any math or accuracy discrepancies that may occur at 24-bit resolution.

     

    In the meantime, enjoy the same JRiver Media Center sound quality whether on PC or Mac.

     

     


     

     

     

     

     

     

     

    About the author

     

     

    Mitch “Mitchco” Barnett

    I love music and audio. I grew up with music around me, as my Mom was a piano player (swing) and my Dad was an audiophile (jazz). At that time Heathkit was big, and my Dad and I built several of their audio kits. Electronics was my first career, and my hobby was building speakers, amps, preamps, etc., and I still DIY today. I also mixed live sound for a variety of bands, which led to an opportunity to work full-time in a 24-track recording studio. Over 10 years, I recorded, mixed, and sometimes produced over 30 albums, 100 jingles, and several audio-for-video post productions in a number of recording studios in Western Canada. This was during a time when analog was going digital, and I worked in the first 48-track all-digital studio in Canada. Along the way, I partnered with some like-minded audiophile friends and opened an acoustic consulting and manufacturing company. I purchased a TEF acoustics analysis computer, which was a revolution in acoustic measurement as it was the first time sound could be measured in three dimensions. My interest in software development drove me back to university, and I have been designing and developing software ever since.

     

     

     

     

     

     

     

     

     

     





    User Feedback

    Recommended Comments



    I've long wanted to have CA meet-ups much like the head-fi members have been doing for years.

     

    To pull this off we'd really need the info on member's locations, to see if a quorum could be had. And then, where the real interest would be.

     

    Many high end dealers do sessions with customers. However, these are almost always synced with their retail brand engagement. It's a fine venue, but limited. And those few retailers who are selling into the computer source space rarely venture beyond DACs, or complete (e.g. Sonos) setups.

     

    It's a bit surprising, though the whole area of ripping to disk is still a bit of a legal issue for the industry, though it goes on nonetheless. We'd be having "clandestine" sessions, it would seem...


    Huh? A shift is an 8-hour work schedule. The folks on second shift better darn well build the same spec'd engine or they'll lose their jobs.

     

    When you go to the analog domain, as the engine is, there are manufacturing tolerances. And even the safely guarded kilogram models have been drifting over the years (by some µg). So in the end it's all about measurement resolution.

     

    When we deal with symbolic presentation like mathematical formulas or replicating integer numerical values we can gain perfect representations. But when you end up with irrational numbers as result of calculations in real life, you have to decide to stop writing out the decimals at some point. How many decimals of pi are enough for you is your decision, but it won't be perfect.

     

    So checking if you can transfer data correctly in digital domain is of course fun and elementary exercise, but when you want to see if the analog conversion of digital samples is always the same, finding the difference is only about measurement resolution. At least the thermal noise won't have full correlation on two subsequent runs. (two independent sets of thermal noise shouldn't have normalized correlation factor of 1)


    Sorry, I misunderstood "shift"; I assumed (wrongly) you were saying something about gearboxes (I believe Americans refer to a manual gearbox as a stick shift?). Please ignore my comments about the engine, though car analogies usually seem very limited to me and virtually pointless.

     

    I'm sorry you feel I was belittling you; I do agree there is a big "ahh yes but" in Mitch's testing; but at the end of the day it appears like you (and several others) were getting away with "yes but how does it sound" type comments when "ahh but how does it measure" comments are ridiculed and put down on threads that start with purely subjective observations.

     

    Eloise


    I thought Mitch was quite clear that he could not hear any differences when he listened in the article? That's why this is totally un-arguable, even though one may disagree. Mitch both measured, to the best of his not insignificant ability, and then listened.

     

    That he heard no difference at all is startling to me, but not arguable. He heard what he heard.

     

    I can see where it is a bit on the edge, but I don't see anyone sneering at him, calling him names, or casting aspersions on his character, which happens more often than not in the reverse scenario. Mostly I see people complimenting him on doing a good job.

     

    I can see the point you are making though. Slippery slope stuff.

     

     

    -Paul

     

     



    Paul, he only listened (subjectively) to both recordings on one platform, through foobar ABX (which only runs on Windows). If I'm wrong here, then delete all my Mitch questions and Audio_elf arguments. Sorry.


    Had to go back and re-read the article, and yes, you are correct. So we can say that the recorded output from the Mac and the PC, when played back on the PC, sounded the same.

     

    Not the most reasonable test for saying that the two systems, JRMC on Windows and JRMC on Mac sound the same. Definitely some compelling evidence that they put out the same digital data when recorded though. ;)

     

    -Paul

     

     



    Thanks for everyone's comments. Btw, did folks download and listen to the files on whatever system and hear any differences? I see no comments about that.

     

    Just to clear up a couple of points. I did indeed listen, casually over headphones, to JRMC on the Mac and then on Windows. Heard no differences. However, it was not a properly set up ABX test, as I need some hardware switching to make that work properly, so I did not include it in the results.

     

    Setting up proper, controlled listening tests is difficult; levels needing to be matched to within 0.1 dB is one of many considerations. Another point is the interesting research by James "JJ" Johnston indicating that our short-term hearing memory is accurate for about 1/4 of a second. Hence the requirement to be able to switch reasonably rapidly between sources.

     

    Finally, if the bits arriving at the DAC are identical whether they came from JRMC on the Mac or on Windows, as the results show and as anyone can replicate (anyone up for the challenge?), then how can there be an audible difference? It can't be from the music players...

     

    I am looking for a listening/measuring test, which is repeatable and can be replicated by anyone, like I did with the tests above, which demonstrates an audible difference that anyone can hear. Let me know what you think that test might look like...



     

    Curiously enough, I've an engineer friend who can't seem to hear any difference between any two things until one produces measurements that prove there is a difference - and all of a sudden, he hears the exact same difference everyone else has been pointing out all along. I know this sounds like a story I just made up, but it's not. It's kind of hard to take an otherwise intelligent person seriously when they behave like that. But then, I know a bunch of wine lovers who are well-nigh unable to answer a seemingly simple question such as whether they like the wine that's being poured before its identity and Parker rating have been revealed. Same thing, as in either case seemingly insufficient information yields to seemingly superior information. The emphasis, needless to emphasize, being on "seemingly" either way.

     

    Greetings from Switzerland, David.


    I'm sorry, but I just cannot see the same value you do with listening to identical files on the same hardware and OS. :)



     

    And it's even worse with musicians who seemingly listen to music in ways non-musicians don't. Sound is sometimes valued secondarily to many other attributes of expression. And I suppose it's also quite different based on the instrument being used, whether bowed string or percussive piano, for example. And better still, if a person is intimately familiar with the recording, or better yet, was part of the engineering process, the closer they can get to a valid judgement.

     

    Regarding your wine metaphor: the taste/smell of wine is heavily influenced not only by expectations but by what we're consuming in the process. In my case, and likely as not many others, we're at a wine tasting in the absence of a great meal in a great setting with close friends. All of these will impact on our perception of a wine that, in absence of these elements, would be pedestrian, at best.

     

    At least for me, a great music reproduction system should fool me into believing I'm in the room with real musicians, in a real venue. Even passing by such a room should make me think there are actual musicians and singers in there. The more I'm fooled the better.



     

    In that case I highly recommend trying something I used to show people when I still built loudspeakers: turn off or dim all electric light and light a candle - you'll swear the bits are not the same!

     

    Greetings from Switzerland, David.



     

    Before thinking about tests, I did want to briefly discuss as a preliminary matter your cite to James Johnston above regarding short term hearing memory. If short term hearing memory is indeed limited to about 1/4 second, then it seems to me one can't be talking about a requirement to switch "reasonably rapidly," but rather a two-fold requirement: (1) To be able to switch nearly instantaneously; and (2) to be able to hold in our memories, with absolute reliability (an absolute reliability that doesn't exist in normal human memory), the only accurate impressions of sonic differences available to us, those from the first .25 second after switching. Frankly, I don't see how that is possible, and thus what good ABX testing would do in terms of collecting scientifically reliable data.

     

    On the other hand, if asked to distinguish in a blind test between a table radio and the New York Philharmonic within an hour or even a week of having last heard them, I don't think anyone here would have any doubt. In fact I'm not sure memory would be involved - don't you think you could tell which was which the moment you first heard one or the other? So perhaps what is involved here has less to do with short term hearing memory, which if James Johnston is correct will be unreliable in any implementation of ABX testing that does not instantaneously take the data of the sense impression (something like audio ABX testing in an fMRI scanner), and more to do with how large the difference is.

     

    Having cheerfully cast doubt on the idea of any comparative listening test (including ABX testing) being scientifically reliable, and being also quite skeptical of the notion that we yet know how to measure everything that affects our perception of realism in sound (see for example, http://arxiv.org/pdf/1301.0513.pdf, on what we can hear versus what measurements based on Fourier transforms can show), let me go on to suggest some tests and measurements. :) But why am I doing this if it's not going to give scientifically reliable results? Because I think we ought to be willing to be humble enough to let go of the idea that we are "doing science" here. I think we are following hunches that interest us and having fun along the way. Following our natural curiosity and listening to good music - what could be better, right? Methodological discipline and skepticism are powerful tools that we ought to use to full advantage, but let's not take ourselves too seriously and forget the enjoyment that is the basis of the hobby we share.

     

    In suggesting tests I'm going to have to go with what I'm familiar with, and I'm not familiar at all with JRiver. Thus it will be understandable if what I suggest isn't something you care to do. If you want to go ahead, I'd love to read about what you found, positive or negative.

     

    My suggested test involves XXHighEnd (XXHE), which is Windows software. (The general user consensus at this point is that it sounds best on Windows 7 rather than 8.) You can perform the test on the trial version, though the licensed version opens up capabilities that I subjectively think might allow you to hear (and measure?) differences to a greater degree. XXHE involves some system-dependent setup; if you'd like to go ahead with this, feel free to PM me for setup suggestions.

     

    XXHE is a memory player. One of its settings is "SFS" (Split File Size), showing, in megabytes, the size of the "chunks" of the music file that will be pre-loaded into RAM. The test is this: Have a friend or family member (or someone who's both ;-) switch between low and high SFS settings, say 2 and 430. (Sorry, this can't be done on the fly - obvious when you think about it - so you'll have to stop XXHE and hit Play again to switch.) This won't alter the bits. The only thing that will change will be how much of the music file is pre-loaded into memory at a time.
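As an aside on the "this won't alter the bits" point: here is a minimal Python sketch (a simplified model of chunked pre-loading, not XXHE's actual code) showing that the chunk size used to read a file has no effect on the bytes delivered downstream:

```python
def read_chunked(path, chunk_bytes):
    """Read a file in fixed-size chunks, as a memory player with a
    configurable 'split file size' might, and return the full stream."""
    out = bytearray()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_bytes)
            if not chunk:
                break
            out += chunk
    return bytes(out)

# Regardless of chunk size (the 2 MB vs. 430 MB SFS settings above),
# the delivered bytes are identical ("track.flac" is a hypothetical file):
# assert read_chunked("track.flac", 2 * 2**20) == read_chunked("track.flac", 430 * 2**20)
```

So any audible effect of the SFS setting, if real, would have to come from something other than the data itself (timing, system load, and so on).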

     

    See if you can hear any difference. If you can, think about what tests might be good to try to measure the types of differences you are hearing.

     

It seems to me that if something as seemingly trivial as how much of a file is pre-loaded into memory at a time can have an audible (and measurable? My guess is that PeterSt, the developer, has measurement techniques for this, though how much of his intellectual property he'd be willing to divulge is another matter) effect, then it can be conceded that a major change like an entirely different OS could affect the sound of a software player as well.


    In that case I highly recommend trying something I used to show people when I still built loudspeakers: turn off or dim all electric light and light a candle - you'll swear the bits are not the same!

     

    Greetings from Switzerland, David.

    Ah yes, when bits become waves. Though I'll be in the dark, I'll feel it course through my bones...


I thought Mitch was quite clear in the article that he could not hear any differences when he listened? That's why this is totally un-arguable, even though one may disagree. Mitch measured to the best of his not insignificant ability, and then listened.

     

    That he heard no difference at all is startling to me, but not arguable. He heard what he heard.

     

     

     

    -Paul

     

That's how I read it....


    Thanks for everyone's comments. Btw, did folks download and listen to the files on whatever system and hear any differences? I see no comments about that.

     

Just to clear up a couple of points. I did indeed listen, casually over headphones, to JRMC on the Mac and then on Windows. I heard no differences. However, it was not a properly set up ABX test, as I would need some hardware switching to make that work properly, so I did not include it in the results.

     

Setting up proper, controlled listening tests is difficult; matching levels to within 0.1 dB is just one of many considerations. Another point is the interesting research by James "JJ" Johnston indicating that our short-term hearing memory is accurate for only about a quarter of a second. Hence the requirement to be able to switch reasonably rapidly between sources.
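As a rough illustration of the level-matching step, here is a minimal Python sketch (filenames would be your own captures; this assumes 16-bit PCM WAV) that computes the RMS level of a recording in dBFS, so two captures can be checked against the 0.1 dB criterion:

```python
import math
import wave

def rms_dbfs(path):
    """RMS level of a 16-bit PCM WAV (mono or stereo), in dBFS."""
    with wave.open(path, "rb") as w:
        assert w.getsampwidth() == 2, "expects 16-bit PCM"
        raw = w.readframes(w.getnframes())
    # Decode interleaved little-endian signed 16-bit samples.
    samples = [int.from_bytes(raw[i:i + 2], "little", signed=True)
               for i in range(0, len(raw), 2)]
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms / 32768)

# Hypothetical capture filenames -- substitute your own recordings:
# diff = abs(rms_dbfs("jrmc_mac.wav") - rms_dbfs("jrmc_win.wav"))
# print("matched" if diff <= 0.1 else f"level mismatch: {diff:.2f} dB")
```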

     

Finally, if the bits arriving at the DAC are identical, as the results show, and as anyone can replicate (anyone up for the challenge?), whether they came from JRMC on the Mac or on Windows, then how can there be an audible difference? It can't be from the music players...
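For anyone taking up the challenge, the null test can be sketched in a few lines of Python (filenames are hypothetical, and the two captures are assumed already sample-aligned and 16-bit PCM): subtract one capture from the other sample by sample; a peak residual of zero means the bits are identical.

```python
import wave

def null_test(path_a, path_b):
    """Subtract capture B from capture A sample-by-sample.
    Returns the peak absolute residual: 0 means bit-identical audio."""
    with wave.open(path_a, "rb") as a, wave.open(path_b, "rb") as b:
        assert a.getsampwidth() == 2 and b.getsampwidth() == 2
        ra = a.readframes(a.getnframes())
        rb = b.readframes(b.getnframes())
    n = min(len(ra), len(rb)) // 2 * 2  # compare the overlapping samples
    peak = 0
    for i in range(0, n, 2):
        sa = int.from_bytes(ra[i:i + 2], "little", signed=True)
        sb = int.from_bytes(rb[i:i + 2], "little", signed=True)
        peak = max(peak, abs(sa - sb))
    return peak

# Hypothetical filenames for the two aligned captures:
# print(null_test("jrmc_mac.wav", "jrmc_win.wav"))  # 0 => identical bits
```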

     

I am looking for a listening/measuring test, repeatable and replicable by anyone, like the tests above, that demonstrates an audible difference anyone can hear. Let me know what you think that test might look like...

     

Mitch, can you respond to my simple request to do the same test with other software on the same platform? For example, iTunes vs. JRMC.

     

    Do let me know if this isn't feasible for some reason..


    Here’s a very simple test. Take a single track, play it through JRIVER and then again on the same hardware through JRIVER (via JPLAY). Both send the same bit-perfect digital stream to the DAC but using different methods. They sound different. Quite different.

     

Then do a blind listening test and ask the listeners to identify which path the track was played through, path A (standalone JRIVER) or path B (JRIVER+JPLAY). The listeners will be able to tell with some certainty which playback path was used each time. Equally, you could do the same test with JRIVER vs. Foobar. Make it easy for the listeners and choose the track carefully, as some recordings will sound more markedly different than others. It also depends on the rest of the audio chain being set up correctly, especially ensuring that the speakers are properly mounted, otherwise the listeners stand less chance of being able to hear differences.

     

I don’t think sound differences in the playback chain are subjective. You either hear a difference or you don’t. Whether you like the way a particular system sounds is subjective. The test above could be repeated using the same player on different hardware, Mac vs. PC for instance.


    I guess my point is that you need to use human ears to judge differences in sound quality. I can't imagine looking at an audio waveform in Audacity on a computer screen can possibly show up subtle differences in sound quality. Neither can looking at 1s and 0s, as there are many other factors that influence the final analogue signal, such as jitter in the digital domain and RFI/EMI noise etc.


    ... the only accurate impressions of sonic differences available to us, those from the first .25 second after switching. Frankly, I don't see how that is possible, and thus what good ABX testing would do in terms of collecting scientifically reliable data.

     

Very good point, Jud. The James "JJ" Johnston link provided by mitchco doesn't link to the source of the .25-second figure, but assuming that he is quoted correctly (which I don't doubt), this basically invalidates ABX testing as it is usually performed. Test signals would have to consist of only .25-second snippets of music. Maybe it would work, but I doubt it.


Very good point, Jud. The James "JJ" Johnston link provided by mitchco doesn't link to the source of the .25-second figure, but assuming that he is quoted correctly (which I don't doubt), this basically invalidates ABX testing as it is usually performed. Test signals would have to consist of only .25-second snippets of music. Maybe it would work, but I doubt it.

     

In fact, 0.125-second snippets ...

     

    Moreover, I should say that I don't doubt the .25 s claim if the test signals in question are very similar. As you also point out, Jud, when test signals are sufficiently different, our auditory memory may last weeks, months, and years.

     

Another interesting case where the .25 s claim obviously isn't valid is voice recognition. Even on the crappiest table radio you can always recognize, say, Paul McCartney's voice. But I should hurry to say that I don't know if the .25 s claim was meant to pertain to any of these situations--the JJ Johnston link in mitchco's post didn't link directly to the source (and I don't trust Ethan Winer as a source).


    I guess my point is that you need to use human ears to judge differences in sound quality. I can't imagine looking at an audio waveform in Audacity on a computer screen can possibly show up subtle differences in sound quality. Neither can looking at 1s and 0s, as there are many other factors that influence the final analogue signal, such as jitter in the digital domain and RFI/EMI noise etc.

     

Agreed.....as there are so many functions in a complex waveform, our brains couldn't possibly interpret their electrical waveforms in a comprehensive way, much less recreate them through auditory memory for comparison.

     

That being said, properly conducted AB testing with sufficient subject numbers can determine with certainty whether an audible difference exists. But when faced with these results, more often than not the results are discredited by the audiophile community....some of whom actually conduct these tests. So really it boils down to faith, and a dedication to what the audiophile community stands for......powerful stuff, and often beyond reason.
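To put a number on "sufficient subject numbers," here is a small Python sketch of the standard binomial significance calculation for an ABX run (chance is 1/2 per trial); the trial counts below are illustrative, not from any specific study:

```python
from math import comb

def abx_p_value(correct, trials):
    """One-sided p-value: the probability of scoring >= `correct`
    out of `trials` ABX trials by pure guessing (chance = 1/2)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Example: a listener scores 12 correct out of 16 trials.
p = abx_p_value(12, 16)
# p ~= 0.038 -- below the conventional 0.05 threshold, so an audible
# difference is a reasonable conclusion at that confidence level.
```

Note how quickly the evidence strengthens with more trials: 12/16 is marginal, while the same 75% hit rate over 48 trials is far beyond chance.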

     

One could also consider the power of passion....for what one holds dear. We as humans make horrible mistakes with unimaginable frequency where matters of the heart are concerned.....and will continue to do so, as we are still....um....human. But it's nice to have some science around to advance the cause and keep us grounded. Modern-day heretics the objectivists may be, but without passion and art, life is lifeless.

     

    Enjoy the music...whether it costs $100 or $100 cubed.


Agreed.....as there are so many functions in a complex waveform, our brains couldn't possibly interpret their electrical waveforms in a comprehensive way, much less recreate them through auditory memory for comparison.

     

     

    Bulls**t! I don't need a panel of people or an ABX switch to tell me that the differences I hear are real. My system is resolving enough and my experience in the pursuit of musical reality is more than plenty to be able to make decisions about "better 1?", "better 2?" It can be s/w players, upsampling filter settings, a resistor type, a capacitor type, a particular mastering of a recording, operating system versions, or whether the draperies should be open or closed a few more inches.

     

    All of these constant long threads about if people can hear this or that change have become so tiring. Frankly I think those who doubt the ability of the human ear either: a) don't have a system--or room acoustic--evolved enough to resolve what are admittedly sometimes subtle changes; b) don't have enough of a connection to the feeling that well performed/recorded music can evoke; or c) are not listening at all and are just arguing based on what they think/wish should be perfect theory.

     

How do I reach the above conclusions? Easy. I just listen. With my ears, my heart, and my experience. It is not that hard--at least for me, as I have been doing this my whole life. I am not an audio engineer (though I am close with many), I am not a musician (though I have lived with and been lifelong friends with several); I am just deeply connected to a wide range of music and to the pursuit of a truly accurate and moving recreation of it in my home.

     

    I hope others here will ignore the rhetoric and listen for themselves. I have listened to most all of the software players and there is, to my ears and my mind, no denying that there are real and significant differences between them. Someday, someone will find a way to measure those differences, but now and always, I will use the only instrument that counts, my ears.

     

    Peace,

    ALEX


That being said, properly conducted AB testing with sufficient subject numbers can determine with certainty whether an audible difference exists.

     

    We actually agree, you know. Just not that the tests you refer to are (only) about testing sound differences. They are just as much testing the extent to which the test persons were able to listen to each option--A or B--without having their judgment influenced by previous trials.


    Someday, someone will find a way to measure those differences ...

     

Yes, I think you're right. In fact, for the scientifically curious, I think there's some gold to be dug here. A lot of new scientific findings begin with a body of anecdotal evidence large enough to warrant further inquiry, and frankly, I don't think you could trick so many audiophiles into spending such serious money on, say, power cables if it were just a matter of placebo. I'm pretty sure most of us, especially those of us trained in science, would rather spend it on the electronics, and have only reluctantly acknowledged that we are not just fooling ourselves.

     

    Peace


    Bulls**t! I don't need a panel of people or an ABX switch to tell me that the differences I hear are real. My system is resolving enough and my experience in the pursuit of musical reality is more than plenty to be able to make decisions about "better 1?", "better 2?" It can be s/w players, upsampling filter settings, a resistor type, a capacitor type, a particular mastering of a recording, operating system versions, or whether the draperies should be open or closed a few more inches.

     

    All of these constant long threads about if people can hear this or that change have become so tiring. Frankly I think those who doubt the ability of the human ear either: a) don't have a system--or room acoustic--evolved enough to resolve what are admittedly sometimes subtle changes; b) don't have enough of a connection to the feeling that well performed/recorded music can evoke; or c) are not listening at all and are just arguing based on what they think/wish should be perfect theory.

     

How do I reach the above conclusions? Easy. I just listen. With my ears, my heart, and my experience. It is not that hard--at least for me, as I have been doing this my whole life. I am not an audio engineer (though I am close with many), I am not a musician (though I have lived with and been lifelong friends with several); I am just deeply connected to a wide range of music and to the pursuit of a truly accurate and moving recreation of it in my home.

     

    I hope others here will ignore the rhetoric and listen for themselves. I have listened to most all of the software players and there is, to my ears and my mind, no denying that there are real and significant differences between them. Someday, someone will find a way to measure those differences, but now and always, I will use the only instrument that counts, my ears.

     

    Peace,

    ALEX

     

    Yep, though I'm a bit more diplomatic about this issue. It's always good to try and use instrumentation, outside of our ear/brain combo, to help guide us. That's particularly true when tweaking our systems or trying to help us decide where to alter components in the chain. What's particularly challenging for any audiophile is that each of us is different (age, musical background, etc.), and we live in different homes with all the room issues, and have varying power companies to deal with. Some electronics are more immune to power related problems. And some amp/speaker combinations are less compatible with each other than we'd hope. Some cable/interconnect designs seem to make huge improvements in some systems, but not others.

     

Add to all of this, and more, the fact that few, if any, companies who produce audiophile-class components make a full range of products that we can use end-to-end. The last time I had that was in a college dorm room with a Henry Kloss (KLH) stereo system. It was a great price performer and had been designed to deliver great sound, though at a low output level. 'Course that was so long ago, when my ears were more capable than today, though my appreciation of music was just getting going, and my bank account wouldn't allow for what I can afford today. I still listen to a wide variety of music, old and new. And I've gotten better at very quickly telling great playback from mediocre. It often comes down to who has set up the system. Though you can't make "a silk purse out of a sow's ear," if you know what you're doing you can get the very most out of a collection of components through proper room placement and making adjustments to room reflections and other problems.





    This is now closed for further comments



