    JRiver Mac vs JRiver Windows Sound Quality Comparison

I have been listening to JRiver Media Center on Windows for almost two years and have been a happy customer. JRiver on Windows has been extensively reviewed by Chris.

    Now that an early release of JRiver is available on the Mac, I thought I would take the opportunity to compare the sound quality between the two JRiver music players.

    Similar to how I compared JRiver to JPlay, I am using the following test methods and tools to compare SQ:

     

    • Using Audacity (or any digital audio editing software) to digitally record the output from JRiver on both Mac and Windows. Then by editing and lining up the track samples, inverting one of the tracks, and mixing them together, we will see what audio signal is left over (i.e. the difference file) and whether it is subjectively audible.
    • Using Audio DiffMaker, that is purpose built software for audio differencing tests, to analyze the two recordings, which also produces a difference file that can be listened to and subjectively evaluated.
    • Using Foobar’s ABX Comparator to listen to each recorded track and determine which one sounds different or subjectively better.

    The Audacity recordings of the JRiver music players on both Mac and Windows are included in this article so people can download them, listen subjectively, and inspect objectively. Given that the test software is freeware, I designed the article to follow a step-by-step process, so if inclined, one can repeat the test procedures and see if the results are repeatable.

     


     

    opening.png

     


     

     

     

    Test Configuration and Recording Process

     

    The Windows computer is an Intel 3.30 GHz i5-2500 quad core with 8 GB of RAM running the Windows 7 64-bit operating system. The MacBook Pro is an Intel 2.26 GHz Core 2 Duo with 8 GB of RAM running OS X version 10.8.2. On Windows, I am using the ASIO-enabled build of Audacity; on the Mac, Audacity version 2.0.3. Both record the audio bitstream from JRiver. For a DAC, I am using a Lynx Hilo, which by one objective measure rates as one of the most transparent A/D and D/A converters on the market today. The Hilo has the capability to patch (sometimes called digital loopback or routing) any input to any output. As confirmed on the Lynx support forum, the audio bitstream goes from JRiver, through the ASIO driver, through the USB cable, into the Hilo, and is then clocked back out of the Hilo, through the USB cable, through the ASIO driver, and into Audacity. I am routing the output of JRiver to input USB Play 1&2 on the Hilo and patching it to output USB Record 1&2, which is the input to Audacity.

     

    Here is how this looks configured on the Hilo’s touch screen:

     

    image2.png

     

     

     

     

    With the Hilo I can simultaneously play audio from one software application (e.g. JRiver) and record the same audio in another application (e.g. Audacity). On Windows, it looks like this:

     

    image3.png

     

     

    On the Mac, it looks like this:

     

    image4.png

     

     

    I am using Tom Petty’s song Refugee, which I downloaded directly from Tom Petty’s site and which is recorded at 24/96. The producer/engineer provided a note of provenance (PDF) to go with the download, so I feel reasonably comfortable that this is as close to the master as one can get: “We made the FLAC files from high-resolution uncompressed 24-bit 96K master stereo files. When we compared those files to the FLAC’s, the waveforms tested out to be virtually identical.”

     

    In Audacity, the only change I made was to set the project sample rate to 96 kHz and the bit depth to 24 under Edit > Preferences > Quality. Dither will be discussed later.

     

    Note that the “bit-perfect” light is on in both versions of JRiver while I was recording, indicating that the player is streaming bit-perfect audio at 24/96 to the DAC. There is nothing else in the signal path, all DSP functions were turned off, and with ASIO, any intermediate audio layers in Windows are bypassed. All levels were set at 0 dBFS and I used the stock USB cable that came with the Hilo.

     

    Here is what the Windows recording looks like in Audacity:

     

    image6.jpg

     

    Here is what the Mac recording looks like:

     

    image7.png

     

     

    I used Audacity’s Amplify effect to validate that both recordings were captured at the same level. Note I did not apply the amplification; this is for viewing only. On my first Windows recording, I accidentally moved the JRiver internal volume control down -0.5 dB, so the levels did not match. I did not find that out until the end and had to rerecord the Windows version. With everything set at 0 dBFS in JRiver, Audacity, and the Hilo, on both PC and Mac, the recorded levels should be exact, as depicted above. Use the Amplify window to validate that the recorded levels are the same before moving on to the next step.

     

     

    The Editing Process

     

    I recorded the full length of Refugee on both Mac and Windows using Audacity. First I clicked Record in Audacity and then Play in JRiver. Once the song had played, I clicked Stop in Audacity and saved the project to disk. I copied the Mac Audacity project file (.aup) and data files onto my PC and opened them with the Windows version of Audacity. For waveform sample comparisons, I edited both the Mac and Windows recordings down to roughly 60 seconds and tried to ensure that I cut the start of each track at the same sample. Windows version on the left, Mac version on the right; I have zoomed way in to see each individual sample, placing the selection tool at the same sample point in each track:

     

    image8.jpg

     

     

    5,760,000 discrete samples (60 seconds at 96 kHz) is a good enough sample size to compare waveforms. If there is an opportunity for human error, it is in editing the start of each recording so the two line up at the individual sample level.
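The sample count follows directly from the sample rate; a quick arithmetic check (plain illustration, not tied to any particular tool):

```python
# 60 seconds of audio captured at a 96 kHz sample rate:
sample_rate = 96_000   # samples per second
seconds = 60
print(sample_rate * seconds)  # 5760000 discrete samples per channel
```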

     

    Resizing the waveform display windows also redraws the data differently and makes it hard to edit properly. It took me more than a few tries to get it right, and in the end I reverted to having the two editors open side by side, as above. Pick a reference point and count the samples to get them aligned; being off by even one sample will show up in the test.

     

    Now that I have lined the samples up, I can shift select everything to the left of the cursor, and using the cut tool, remove the samples:

     

    image9.jpg

     

     

    Now I can enter 5,760,000 samples in the Selection Start field and shift-select to the end of the recording. Finish by clicking the cut (scissors) tool:

     

    image10.jpg

     

    Now I have exactly 5,760,000 discrete samples to export to disk:

     

    image11.jpg

     

     

    I followed the same process for the Mac version of the recording.

     

     

     

    The Comparison Process – Audacity

     

    Now that the Windows and Mac recorded samples have each been digitally edited to the exact same number of samples and, hopefully, the same start and end points, I can use this simple procedure to compare the two recorded tracks:

     

    • Import copies of both files into the same Audacity project.
    • Highlight one of the tracks, and under the Effects menu, select Invert.
    • Now highlight both tracks and under the Tracks menu, select Mix and Render. What’s left will be any difference between the two sets of recorded tracks. Save to disk.
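The invert-and-mix steps above amount to a numeric null test; here is a minimal sketch with synthetic sample arrays (not the article's recordings), assuming the two tracks are already sample-aligned:

```python
# Null (difference) test: invert one track, then mix (sum) the two.
# The "recordings" here are tiny synthetic stand-ins for illustration.

def null_test(track_a, track_b):
    """Return the sample-by-sample residual: track A plus inverted track B."""
    assert len(track_a) == len(track_b), "tracks must be the same length"
    return [a - b for a, b in zip(track_a, track_b)]

a = [0.25, -0.5, 0.125, 0.0]
diff = null_test(a, list(a))
print(max(abs(s) for s in diff))  # 0.0: identical tracks null completely

shifted = [0.0] + a[:-1]          # the same track, one sample late
residual = null_test(a, shifted)
print(max(abs(s) for s in residual) > 0)  # True: misalignment leaves residue
```

The second case is why the editing step matters: a one-sample offset between otherwise identical recordings produces a non-zero difference file.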

     

    Here are both tracks loaded into Audacity, the top one is the Windows recorded version and the bottom one is the Mac recorded version:

     

    image12.jpg

     

     

    Next is highlighting and inverting one of the tracks:

     

    image13.jpg

     

     

    Finally, choose Mix and Render from the Tracks menu:

     

    image14.jpg

     

     

    This is the difference:

     

    image15.png

     

     

    No difference. Ah, but you may notice something. While I inverted one track and highlighted both tracks, I did not Mix and Render; I went straight to Plot Spectrum.

    If I mix and render, then plot spectrum, I get:

     

    image16.png

     

     

    Note the microscopic signal at -144 dB around 48 kHz. I do have dither turned off, as per Audacity’s recommendation for export. However, reading over their lengthy description, there appears to be some opportunity for inaccuracy. Additionally, looking at Audacity’s bit-depth recommendations, I should have left the default recording quality at 32-bit float rather than 24-bit, as -144 dB is the theoretical signal-to-noise limit of a 24-bit digital media file. In the end it is a moot point, as -144 dB is below our absolute threshold of hearing. What does this mean? The Audacity difference test indicates that any sound quality difference between JRiver Mac and JRiver Windows is inaudible. Even if the measured difference were considerably larger, say -120 dB, it might be barely audible with headphones on, the volume at maximum, and in a very quiet environment; it would be completely masked at regular program levels (e.g. -3 dBFS). If one wants to test one's own ability to hear masking, try Ethan Winer’s Artifact Audibility Comparisons.
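The -144 dB figure matches the theoretical dynamic range of 24-bit PCM; a quick back-of-envelope check using the standard 20·log10(2^n) formula (roughly 6.02 dB per bit):

```python
import math

# Theoretical dynamic range of n-bit quantization: 20 * log10(2^n)
bits = 24
dynamic_range_db = 20 * math.log10(2 ** bits)
print(round(dynamic_range_db, 1))  # 144.5 dB, i.e. the -144 dB noise floor seen above
```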

     

    To verify the differencing process, here is a “control” sample following the exact same procedure as above, but comparing a file to itself, in this case the Mac file:

     

    image17.png

     

    No difference. Check.

     

     

     

    The Comparison Process – Audio DiffMaker

     

    Audio DiffMaker is purpose-built software specifically designed to automate what was done manually above, as can be seen in its workflow:

     

    image18.gif

     

     

    Furthermore, its differencing algorithms for time alignment and amplitude matching are optimized for this type of testing. The Help file is an excellent resource, as is the AES paper on difference testing, along with its PowerPoint slides. I am not going to go into detail, as the software is readily available (i.e. free). I have also used the software in a few of my blog posts on CA, which go into more detail about test setups, software usage, and tool issues to work around.

     

    The process is the same as Audacity’s, except all one needs to do is load the two 60-second recorded tracks, click the Extract button, and watch the software work for about 10 seconds:

     

    image19.png

     

    Rather than trying to explain what is meant by correlation depth, one can read up on it in the DiffMaker links provided earlier in the article. Here is what I get if I take the DiffMaker-generated difference file, open it in Audacity, and run a frequency analysis:

     

    image20.png

     

     

    It is identical to the Mix and Render version of the Audacity screenshot, right down to the decimal place. Note that Audacity opens it as 32-bit float, yet it is a signed 24-bit PCM file. From Audacity’s dither article, there could be a very small error introduced. I should note that the first 180 milliseconds of the difference file have been edited out, as that is the time DiffMaker’s algorithms take to find the correlation depth, and they leave their processing in the file.
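As a side note, correlation depth can be thought of as the ratio, in dB, between the original signal and the residual left after subtraction. A simplified sketch of the idea (DiffMaker's actual algorithm also performs the time alignment and gain matching, which this omits):

```python
import math

def correlation_depth_db(reference, residual):
    """Rough depth estimate: RMS of the reference over RMS of the residual, in dB."""
    def rms(xs):
        return math.sqrt(sum(x * x for x in xs) / len(xs))
    r = rms(residual)
    if r == 0:
        return float("inf")  # a perfect null; DiffMaker caps its readout at 300 dB
    return 20 * math.log10(rms(reference) / r)

ref = [0.5, -0.5, 0.25, -0.25]
residual = [x / 1000 for x in ref]  # a residual 1000x smaller than the reference
print(round(correlation_depth_db(ref, residual)))  # 60 (dB)
```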

     

    As before with the Audacity test, here is the control measurement, using the same file twice to verify the process. This time I compared the Windows recording to itself in DiffMaker:

     

    image21.jpg

     

     

    As can be seen, it compares perfectly, at the maximum correlation depth of 300 dB that DiffMaker is capable of. Opening the control difference file in Audacity:

     

    image22.png

     

    No difference. Check.

     

     

    Foobar ABX Tests

     

    I went into this with an expectation bias, knowing there are no audible differences, as verified by the two previous tests. As much as I wanted to hear a difference, the Mac and Windows versions sound identical to me. I ran several passes in the ABX Comparator, but there is no point in posting any results, as I was guessing close to 100% of the time. However, here are the two 60-second recordings of JRiver Mac and JRiver Windows so anyone can compare the files both subjectively and objectively.

     

    JRiver Mac 60s 33MB

     

    JRiver Windows 60s 33MB

     

     

     

    Bonus Comparison

     

    While I had everything set up on JRiver on the Mac, I thought I would try swapping only one thing in the audio signal chain and compare that to the recording I have already made on the Mac. So I swapped out the USB cable, changed nothing else, and made another recording:

     

    image23.png

     

     

    The one on the left is the 6 ft cable that came with the Hilo. The one on the right is a London Drugs special, a 5-meter shielded USB cable for $29.95. According to the Lynx Hilo manual, the longest USB cable that should be used is 15 feet; the one under test here is about 16 feet. Using the same Audacity procedure as before, but comparing the Mac recording to another Mac recording with the only difference being the USB cable:

     

    image24.png

     

     

    No difference. Again, this is from loading the two different recordings, inverting one of the tracks, selecting both sets of tracks, and plotting the frequency analysis.

    If I apply Mix and Render, I get exactly the same result as both the Audacity and DiffMaker versions, with the microscopic -144 dB signal at 48 kHz. And if I run the same test in DiffMaker, I get exactly the same result as the previous DiffMaker test between the Mac and Windows versions. As to why other folks possibly hear a difference using different USB cables? One explanation comes from Ken Pohlmann’s excellent book, “Principles of Digital Audio”:

     

    image25.jpg

     

     

    Audiophiles have sometimes reported hearing differences between different kinds of digital cables. That could be attributed to a D/A converter with a design that is inadequate to recover a uniformly stable clock from the input bitstream. But, a well-designed D/A converter with a stable clock will be immune to variations in the upstream digital signal path, as long as data values themselves are not altered.

     

    As an aside, if I were to recommend one book on understanding all of the facets of digital audio today, it would be Ken’s. The first edition appeared in 1985, and the book, now in its sixth edition, spans 28 years of industry knowledge. There is probably no one who knows more about digital audio than Ken Pohlmann, and that knowledge is captured in his book. Highly recommended if you wish to pursue a university-level understanding of how digital audio works.

     

    Here is the 60-second recording on the Mac using the long USB cable:

     

    JRiver Mac USB long cable 60s 33MB

     

     

     

     

    Conclusion

     

    Based on three different test methods, which I repeated more than a few times, the results indicate there is no measurable or audible sound quality difference between JRiver on the Mac and JRiver on Windows. One could argue all I did was validate what is already known: that everything is operating to specification. In other words, bit-perfect:

     

    In audio this means that the digital output from the computer sound card is the same as the digital output from the stored audio file. Unaltered passthrough. The data stream (audio/video) will remain pure and untouched and be fed directly without altering it. Bit-perfect audio is often desired by audiophiles.
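One direct way to check "unaltered passthrough" is to compare the captured bytes against the source bytes; here is a minimal sketch using hashes, with stand-in byte strings rather than real PCM files:

```python
import hashlib

def pcm_fingerprint(data: bytes) -> str:
    """Hash raw PCM bytes; bit-identical streams yield identical digests."""
    return hashlib.sha256(data).hexdigest()

# Stand-in byte streams; in practice these would be the source file and
# the exported capture, compared with the same sample format and length.
source = bytes(range(256)) * 4
capture = bytes(source)  # a bit-perfect copy of the source

print(pcm_fingerprint(source) == pcm_fingerprint(capture))  # True
```

This only proves the data values match; as the quote above notes, bit-perfect delivery says nothing about clock recovery in the DAC itself.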

     

    As to the reasons why, if interested, I recommend Ken’s Principles of Digital Audio. Check out the TOC. What I really like is that the sampling theorem is an appendix, and the rest of the 800 pages cover literally every aspect of digital audio in every industry. Digital audio is much more than a sampling theorem. Anyone can use the same (or similar) software tools and this process to validate that the results are repeatable. One could use the files supplied, or, if the DAC used supports playing back and recording independently, one could start from scratch and validate that the results are repeatable. However, I would recommend making the recordings at the maximum resolution (32-bit float in Audacity) to avoid any math or accuracy discrepancies that may occur at 24-bit resolution.

     

    In the meantime, enjoy the same JRiver Media Center sound quality whether on PC or Mac.

     

     


     

     

     

     

     

     

     

    About the author

     

     

    Mitch “Mitchco” Barnett

    I love music and audio. I grew up with music around me, as my Mom was a piano player (swing) and my Dad was an audiophile (jazz). At that time Heathkit was big, and my Dad and I built several of their audio kits. Electronics was my first career, and my hobby was building speakers, amps, preamps, etc., and I still DIY today. I also mixed live sound for a variety of bands, which led to an opportunity to work full-time in a 24-track recording studio. Over 10 years, I recorded, mixed, and sometimes produced over 30 albums, 100 jingles, and several audio-for-video post productions in a number of recording studios in Western Canada. This was during a time when analog was going digital, and I worked in the first 48-track all-digital studio in Canada. Along the way, I partnered with some like-minded audiophile friends and opened an acoustic consulting and manufacturing company. I purchased a TEF acoustic analysis computer, which was a revolution in acoustic measurement, as it was the first time sound could be measured in three dimensions. My interest in software development drove me back to university, and I have been designing and developing software ever since.

     

     

     

     

     

     

     

     

     

     


     

     





    User Feedback

    Recommended Comments



    I agree with you Steve

    Who is Steve? Well, I assume Mercman, but can I make a plea that, unless people sign their name on their threads, you (meaning the wider audience, not specifically aimed at Paul) use their sign-on name, not the real name that you may know but others don't... Much easier to follow that way.

     

    Thanks

    Eloise


    No, you are right. That was impolite of me. Sorry Merc.

    -Paul

     

     



    I once invented a type of filter that, although a combination and/or derivation of existing filter types, didn't necessarily seem to make much sense except it sounded better to everyone who had a listen. This was while working on a loudspeaker project on and off for seven years. Sometime later I figured out how to reliably measure what I'd perfected in extensive listening tests, so that the next time round, I was able to finish a similar, if slightly smaller project in a mere three weeks. I won't say this taught me to trust my ears, as I'd always done that. Nor will I say it taught me to distrust measurements, as mine tend to be accurate and the results for the most part informative. Except there's no way of telling (let alone any basis of discarding) the relevance of what we cannot or will not (yet) measure, be that for lack of equipment or experience. In short, what articles on digital audio are making me wonder about is how long from now we're going to look back smiling at our own ignorance. The way things have been evolving, hopefully sooner rather than later.

     

    Greetings from Switzerland, David.


    There's one issue with this test.

     

    You need to play back the test signal through one DAC to the analog domain, and then record it using a completely independent ADC. That way it would reveal possible differences from all kinds of USB interference, whether affecting the DAC's clock or its analog stages.

     

    You should never use an A/D/A combo device to both play back and record if you want to check the playback quality, since it would rule out most jitter effects because the DA and AD converters are running off the same clock.

     

    I've been doing some measurements of different DACs when connected to different kinds of computers and there is varying amounts of difference depending on DAC and computer combination.

     

    And it is not actually at all certain that asynchronous USB would give lower jitter or less interference than good S/PDIF...


    Hi mitch. Appreciate all your work here. I'm not a JRiver user, and probably won't become one in the foreseeable future, but these tests are still interesting. A couple of general thoughts:

     

    - Agreed that signal/noise at -120 dB, or even louder, will be masked. But it's not the noise I'm trying to hear. My question would be, turning things around: to what degree will noise in an extremely low range, -115 to -120 dB, tend to mask very low-level details of the music? Keith Johnson has said/written that noise in this range will detract from musical realism. (Yeah, I understand we're talking loudness of the approximate volume of gnats blinking. But Keith's a smart guy.)

     

    - To what degree does the inherent noise of the test equipment mask differences, particularly differences at very low levels? (This is related somewhat to what Miska is talking about.) Putting it another way, if we can run these tests with everyday computer equipment, why are companies spending tens/hundreds of thousands on finely calibrated lab and test rigs?

     

    - Ken Pohlmann's knowledge is encyclopedic on digital audio. It doesn't necessarily prevent him from having a particular point of view, however. He was often cited back in the early CD player days by folks who said CD players (or any two digital systems) could not sound different from each other. (I don't have the early versions of his book to know whether such citations were necessarily completely accurate, but I can say that I don't recall reading citations to it in support of the proposition that two digital audio systems could differ in sound.)


    Audio playback on a "system" is a chain from bits to ear/brain. Keeping all other things equal, the only thing that really makes sense for a user is to vary one and only one element at a time. The vast differences between using even the most current Apple hardware with OS X versus Windows 8 and the variety of hardware it can and does support can get in the way of any analysis. Each layer of hardware, firmware, drivers, let alone which player is used comes into play. Moving from a generations old Mac Mini with Mountain Lion and Amarra to a new CAPS 3 Lagoon running JRMC 18 was a real eye opener. There is no way to do a true apples-to-apples comparison of JRMC on a Mac OS X versus Windows environment. Even running Windows on a Mac will be different as the ultimate control of Windows on a Mac is dependent on how Boot Camp or any layer affects the outcome. I don't know much about how virtual software, like Parallels or VMware interact with MacOS, but I suspect that's fraught with even more questions.

     

    My own feeling is that if the objective is to garner the "best" sound from a source, then start with the simplest and cleanest line from musical bits to your ears. For me, that's now the CAPS 3 design. I'm not subject to a single vendor's hardware solution on the hardware front. And with this platform and Windows 8, and maybe a Linux/Unix solution down the road, I can hone the pipeline. The KISS principle seems best. Unless, and until, someone actually designs a better mousetrap, this is the horse I'm going to ride. Oh, and JRMC, from an end user perspective, along with JRemote, leaves Amarra in the dust. Maybe Sonic Studios will finally get off the iTunes bandwagon and do their own player. That would certainly help.


    Even running Windows on a Mac will be different as the ultimate control of Windows on a Mac is dependent on how Boot Camp or any layer affects the outcome. I don't know much about how virtual software, like Parallels or VMware interact with MacOS, but I suspect that's fraught with even more questions.

    Just to clarify, Boot Camp isn't really a "layer"; it's just a utility to enable the Apple hardware to boot an OS that isn't Mac OS X. Once booted, there are no more "layers" than running Windows on any other hardware.

     

    Parallels / VMWare / etc on the other hand do put another layer between the OS and the hardware...

     

    Eloise


    Well that's interesting. I was under the distinct impression that Apple provides drivers for installation of Windows versions on its platform. Though it may be quite possible, for instance, to install Windows 8 without recourse to Apple's own drivers, it's not a wise, or possibly even supported, environment.

     

    Not an official source but: No support? No problem! Installing Windows 8 on a Mac with Boot Camp | Ars Technica

     

    And then there's Apple support itself: Boot Camp: Frequently asked questions about installing Windows 8

     

    So, although I don't disagree that Boot Camp is a switch, it seems to rely on having a uniquely capable Windows OS for a full working environment -- and one that Apple supports. This is as I'd expect.


    No, I think you are mixing up the installation and the running. Apple computers use EFI boot firmware, which is not well supported by Windows. There is an EFI boot loader that starts the Windows boot process, but that's it.

     

    Most of the "drivers" Apple provides are simply to enable all the hardware in each computer. Ethernet drivers, video drivers, sound drivers, etc. Nothing at all different from what any other PC provider, IBM, Dell, HP. Samsung, etc. provide.

     

    The Apple "specific" drivers are a control panel that allows you to select the boot drive, and a driver that allows you to read HFS partitions. Nothing more. And Windows runs fine without them installed.

     

    -Paul

     

     



    Paul has it right. Apple is just providing the drivers for their hardware. And it works very well.


    Chris, Mitch,

     

    First, I think this is totally premature, since the Mac version is in beta.

     

    Second, you guys cannot declare things to be the same this way!! Believe me, I have already been bitten in the A*** for saying that on a number of occasions.

     

    ~~~~~~~~

     

    OK, so 2 years ago I used the following test to work out some things about my view on USB cables and such. My lab is just freaken full to the hilt with expensive testing equipment.

     

    MacBook Pro 15, 8G/256G SSD, Boot Camp Win7/OS X 10.6.8; we will call this "computer."

     

    Computer<==USB Protocol Analyzer===Cable tested===>WaveLink HS USB to SPDIF converter-----SPDIF----->Prism dScope III or AP

     

    A Tektronix scope looking at the eye pattern of the USB at the WaveLink HS. Also, I have an I2S module in the scope which can decode the digital portion, which was also streamed out to test points.

     

    So I could look at the data file (PCM flat, AIFF or WAV, using 0xED, a great little app for OS X), I could look at the data on the USB protocol analyzer, and I could also look at the time stamps of each packet sent to the WaveLink HS. I could look at the I2S data and the SPDIF data. I could look at USB errors, but there usually weren't any, except on some cables which will remain nameless. I could look to see if the SPDIF jitter changed. I could do all this in Windows and OS X, and I spent weeks looking at this and nothing... nothing at all between iTunes, Pure Music, Audirvana, Decibel, Fidelia, J River and Foobar. Some apps not mentioned at the time were not bit true, and of course setting up iTunes to be bit true is a pain, but really not that hard.

     

    So, with that said, I placed a Cosecant and a Crimson into the same picture, but this time also listened. Each application sounded different; each had its own particular sound. As a long-time drummer and guitarist, using tracks I know or was involved in recording is the best way for me to judge. Really good vibe tracks are a dead giveaway, as is thick open picking on a well-miked acoustic guitar piece.

     

    I told all this to John Atkinson and Charlie Hansen of Ayre, and they both agreed with me, but said that maybe my time was better served making and designing stuff, and that maybe we are not looking at all the variables right now.

     

    So when I read this review, it makes me think of how basic the testing used here was to reach a conclusion which is surely wrong.

     

    I think the Mac version sounds better than the PC version when I use Boot Camp on the same machine.

     

    Thanks,

    Gordon


    I applaud the obvious effort and work expended to produce the comparisons.

     

    However, when at the end of the article a similar comparison pronounced no delta between the USB cables tested (i.e., both are bit perfect), I question the efficacy of the software comparison -- i.e., its use to me; simply because in over 40 years, I have never purchased an audio component based on its measurements. While in the early years I did pay attention to test results, presently I haven't a clue how the components in my rig measure.

     

    I tend to listen to music, not measurements, and have learned that equipment that is pleasing to my ears and sensibilities is also pleasing in my audio room.

     

    As flawed as it is, I use forums such as this to obtain others' subjective opinions. I've found that when enough samples are obtained (i.e., posts are read), a picture of a component begins to appear, or a conclusion about an individual's taste begins to form. Using these findings, I have sought out and purchased the equipment discussed, with superb results.

     

    While I appreciate the measurement-based comparison performed, and certainly don't contest the results, I'm at a loss as to how to personally use it for software or cable purchasing decisions.



    Chris, Mitch,

     

    First, I think this is totally premature since the Mac version is in beta.

     

    Second, you guys cannot declare things to be the same this way! Believe me, I have already been bitten in the A*** for saying that

     

    So when I read this review, it makes me think of how basic the testing used here was to reach a conclusion which is surely wrong.

     

    I think the Mac version sounds better than the PC version when I use Boot Camp on the same machine.

     

    Thanks,

    Gordon

     

    Why is the conclusion wrong? If you think one sounds better, that's OK... but that's a subjective opinion, whereas this is based on measurements. Since you have the gear on hand and the lab to perform the required testing, maybe you could impart some enlightenment on what to actually measure to show these readily audible differences in the digital realm?


    Hi Gordon - Every version of this article was run by Matt at JRiver before publication to make sure the company had no issues with Mitch's methods and use of an Alpha version of JRMC.

     

    We have to use the testing methodology available to us, not the methodology we wish were invented. I completely get what you're saying, but this article simply provides a limited set of facts readers can use or set aside.

     

    The article is just a snippet of objective information not meant to be the final answer on anything.


    To put it simply, this test is a complicated way to prove that both players can do bit-perfect playback...

     

    That's not going to tell you much about how it sounds or what the analog output is going to look like.

     

    Usually, just measuring the DAC analog output with a spectrum analyzer while playing back dithered 24-bit silence is enough to show differences between two source computers. I usually use a 500 kHz measurement bandwidth and a 1M-point averaged FFT, and then zoom and pan the output plot to inspect the noise floor.
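    The averaged-FFT technique Miska describes can be sketched with NumPy. This is a minimal Welch-style power-spectrum average under illustrative assumptions (non-overlapping blocks, Hann window, arbitrary dB reference), not his exact dScope/AP analyzer settings:

    ```python
    import numpy as np

    def averaged_noise_floor(x, fs, nfft=2**20):
        """Average the power spectra of consecutive non-overlapping blocks.

        Averaging many FFTs lowers the variance of the noise-floor
        estimate, making small differences between captures visible.
        Returns (frequencies in Hz, power in dB re an arbitrary reference).
        """
        win = np.hanning(nfft)
        nblocks = max(len(x) // nfft, 1)
        acc = np.zeros(nfft // 2 + 1)
        for i in range(nblocks):
            seg = x[i * nfft:(i + 1) * nfft]
            if len(seg) < nfft:                      # pad a final short block
                seg = np.pad(seg, (0, nfft - len(seg)))
            acc += np.abs(np.fft.rfft(seg * win)) ** 2
        freqs = np.fft.rfftfreq(nfft, 1.0 / fs)      # 0 .. fs/2
        return freqs, 10.0 * np.log10(acc / nblocks + 1e-30)
    ```

    Capturing dithered 24-bit silence from each source computer and plotting the two returned curves on the same axes is what lets you compare their noise floors; a 2^20-point FFT at a high sample rate gives very fine frequency resolution.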


    Usually, just measuring the DAC analog output with a spectrum analyzer while playing back dithered 24-bit silence is enough to show differences between two source computers. I usually use a 500 kHz measurement bandwidth and a 1M-point averaged FFT, and then zoom and pan the output plot to inspect the noise floor.

     

    Putting this together with Gordon's impression that the Mac version of JRiver sounds better to him than the Windows version under Boot Camp on the same computer - would you happen to have a Mac to test with, Miska? I have no idea whether it would be at all responsible for any audible difference, but I wonder whether the measurement you describe above would show any difference between the Mac hardware working with OS X drivers and working with Windows drivers.


    Hi Chris - the conclusion I read in the article was that JRMC sounds exactly the same under MacOS and under Windows. The implication was that anyone hearing a difference was just imagining it, with the test results as proof and the quoted reference material as explanation.

     

    Many folks disagree with that conclusion and its implications, including me. But I am sure just as many agree. This is a very tricky subject, with a history going back to "all amps that measure the same sound the same" thinking.

     

    In the meantime, enjoy the same JRiver Media Center sound quality whether on PC or Mac.

     

    Paul

     

     


    Outstanding work. Thanks for this detailed contribution.


    But this thread wasn't about "how it sounds"; it was about how they measure.

     

     

    Eloise

     

    The darn (polite enough?) title says "sound quality comparison"! Why are you making my comments out to be some whacked, left-field perspective? I am not alone here. I am not questioning his efforts or techniques (in fact, they are models for other articles), just asking the follow-up: how did they sound on the two platforms?!

     

    I could take two Chevy Cleveland 500 engines, built on different shifts for different car manufacturers, put them on a test bench, and measure that they have the same horsepower, torque, etc. But I'd think they would perform quite differently if dropped into, say, a Corvette vs. a Mack truck. That's all I'm asking.


    The darn (polite enough?) title says "sound quality comparison"! Why are you making my comments out to be some whacked, left-field perspective? I am not alone here. I am not questioning his efforts or techniques (in fact, they are models for other articles), just asking the follow-up: how did they sound on the two platforms?!

     

    I could take two Chevy Cleveland 500 engines, built on different shifts for different car manufacturers, put them on a test bench, and measure that they have the same horsepower, torque, etc. But I'd think they would perform quite differently if dropped into, say, a Corvette vs. a Mack truck. That's all I'm asking.

    Oh well... you can think that if you like...

     

    To me the article was clearly an objective view (i.e., based on measurements only); yet it's okay for the subjectivists to come in and say that's rubbish. Yet if an objective view is posted in a subjective thread, the objective person is belittled and called disruptive (or similar). Perhaps to Mitchco (and I don't want to put words in his mouth) they do sound the same, as the measurements would indicate...

     

    I thought the rules were meant to work both ways but obviously not!

     

    As for the stupid automotive analogies - they wouldn't measure the same with different shifts...

     

    Eloise


    I always enjoy this type of stuff. Engineers with all the latest test gear seem to think that just because you measure something, you can "prove" differences or their absence. Put a human being into the mix who sits and listens and... bingo - where no difference is "measured," a difference is heard. And often double-blind testing bears this out in a consistent manner. So that also deflates the notion that if you cannot measure it, it simply doesn't exist. Science would die on the vine if that were true. Gotta love it... the human ear/brain is capable of much more than we can merely measure. Ah, and then there's "perfect sound forever!"


    As for the stupid automotive analogies - they wouldn't measure the same with different shifts...

     

    Eloise

     

    Huh? A shift is an 8-hour work schedule. The folks on second shift had better build the same spec'd engine or they'll lose their jobs. My point was that two of the same engine (i.e., JRiver) can be built to the same spec for different platforms (OSX or Windows), but once dropped into a platform, they may perform differently. I'm allowed to ask... really! I haven't broken any rules, nor have I called anyone's (especially Mitch's) ideas stupid or even wrong.

     

    I'm not sure how I've deserved these belittling comments from you; I always thought you were reasonable.


    Something the subwoofer and speaker DIY and commercial developers do on other forums: they hold GTGs, or get-togethers, with different gear for listening evaluations and blind comparisons. Some are hosted by members in their homes, some by manufacturers or dealers, and others by vendors. They're the highlight of the year for some folks, who sometimes travel hundreds of miles to share their creations. I think the CA fold would be a great place to do these by region, maybe hosted by local brick-and-mortar shops as a way to get living souls into their stores. A little wine or some local micro brews, great gear, and a place for CA members to meet face to face, as well as with dealers and maybe guest speakers/designers local to the area.

     

    New York would be a great place to kick one of these off, so if there's interest, I'll start a separate thread in the General section for feelers.




