JRiver vs JPLAY Test Results

Recommended reading first: my FLAC vs WAV test post. The reason is that I am not going to reiterate the baseline components and measurements of my test gear already covered in that post.

Here is a high level block diagram of my test setup:

On the left side is my HTPC with both JRiver MC 17 and JPLAY mini installed. The test FLAC file is the same Tom Petty and The Heartbreakers, Refugee at 24/96 that I have been using for my FLAC vs WAV tests.

JRiver is set up for bit perfect playback with no DSP, resampling, or anything else in the chain, as per my previous tests:

JPLAY mini is set up in Hibernate mode with the following parameters:

On the right-hand side of the diagram, I am using Audio DiffMaker to record the analog waveforms off the Lynx L22 analog outputs of my playback HTPC. All sample rates for the tests are 24/96.

Here is the differencing process used by Audio DiffMaker:

Audio DiffMaker comes with an excellent help file that is worth taking the time to read in order to get repeatable results. One tip is to ensure both recordings are within a second of each other.
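To give a feel for what the differencing process is doing, here is a simplified sketch of the null-test idea in code. This is my own illustration, not DiffMaker's actual algorithm (which also handles sub-sample alignment and drift); the function name and 440 Hz test tone are made up for the example:

```python
import numpy as np

def null_test(reference, compared):
    """Return the difference track and its level (dB) vs. the reference."""
    # Coarse time alignment: find the whole-sample lag with peak correlation
    corr = np.correlate(compared, reference, mode="full")
    lag = int(np.argmax(corr)) - (len(reference) - 1)
    compared = np.roll(compared, -lag)

    # Level matching: a least-squares gain, so a pure volume offset nulls out
    gain = np.dot(reference, compared) / np.dot(compared, compared)
    diff = reference - gain * compared

    rms = lambda s: np.sqrt(np.mean(s ** 2))
    return diff, 20 * np.log10(rms(diff) / rms(reference))

# Same tone, delayed 7 samples, attenuated 6 dB, plus a whisper of noise:
# the shared content nulls out and only the tiny noise remains.
fs = 48000
t = np.arange(4800) / fs                      # 0.1 s at 48 kHz
ref = np.sin(2 * np.pi * 440 * t)             # 44 full cycles (periodic)
rng = np.random.default_rng(0)
cmp_ = 0.5 * np.roll(ref, 7) + 1e-6 * rng.standard_normal(len(ref))
_, level_db = null_test(ref, cmp_)
print(level_db < -100)                        # deep null despite delay and gain
```

DiffMaker's real corrections are more sophisticated, but the subtract-and-measure principle is the same: whatever the two recordings share disappears, and whatever differs remains in the Difference track.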

As an aside, this software can be used to objectively evaluate anything you have changed in your audio playback chain, whether that be an SSD, a power supply, a DAC, interconnects, or, of course, a music player.

My assertion is that if you are audibly hearing a difference when you change something in your audio system (ABX testing), the audio waveform must have changed, and if it has changed, it can be objectively measured. I find there is a direct correlation between what I hear and what I measure and vice versa. I want a balanced view between subjective and objective test results.

First, I used JRiver as the reference and I recorded about 40 seconds of TP’s Refugee onto my laptop using DiffMaker. Then I used JPLAY mini, in hibernate mode, and recorded 40 seconds again onto the laptop. I did this without touching anything on either the playback machine or the recording laptop aside from launching each music player separately.

Just to be clear about what is going on: the music players are loading the FLAC file from my hard drive, performing the Digital to Analog conversion, and then passing the signal through the analog line output stage. I am going from the balanced outs of the Lynx L22 to the unbalanced ins on my Dell, through its ADC, and being recorded by Audio DiffMaker.

Clicking on Extract in Audio DiffMaker to get the Difference produces this result:

As you can see, it is similar to my FLAC vs WAV comparison. The result says that the Difference signal between the two music players is at -90 dB. I repeated this process several times and obtained the same results.

You can listen to the Difference file yourself as it is attached to this post. PLEASE BE CAREFUL as you will need to turn up the volume (likely to max) to hear anything. I suggest first playing at a low level to ensure there are no loud artifacts while playing back and then increasing the volume.

As you can hear for yourself, there is a faint trace of the music that nulls out completely halfway through the track and slowly drifts back to being barely audible at the end.

According to the DiffMaker documentation, this is called sample rate drift and there is a checkbox in the settings to compensate for this drift.

“Any test in which the signal rate (such as clock speed for a digital source, or tape speed or turntable speed for an analog source) is not constant can result in a large and audible residual level in the Difference track. This is usually heard as a weak version of the Reference track that is present over only a portion of the Difference track, normally dropping into silence midway through the track, then becoming perceptible again toward the end. When severe, it can sound like a "flanging" effect in the high frequencies over the length of the track. For this reason, it is best to allow DiffMaker to compensate for sample rate drift. The default setting is to allow this compensation, with an accuracy level of "4".”

Of course this makes sense as I used a different computer to record on versus the playback computer and I did not have the two sample rate clocks locked together. The DiffMaker software recommends this approach, but I have no way of synching the sample rate clock on the Dell with my Lynx card.
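A toy simulation shows both the drift effect and why compensation works. This is my own sketch with an assumed 50 ppm clock error and a deliberately crude linear-interpolation resampler; DiffMaker's internal method is more refined:

```python
import numpy as np

def resample_linear(x, ratio):
    """Resample x by `ratio` using linear interpolation (toy resampler)."""
    n_out = int(len(x) / ratio)
    positions = np.arange(n_out) * ratio      # fractional source indices
    return np.interp(positions, np.arange(len(x)), x)

def level_db(residual, reference):
    rms = lambda s: np.sqrt(np.mean(s ** 2))
    return 20 * np.log10(rms(residual) / rms(reference))

fs = 48000
ref = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)       # 1 s reference tone

# Simulate a recorder whose clock runs 50 ppm fast: it sampled the same
# analog signal, just at a slightly different rate.
drift = 1 + 50e-6
n_rec = int(fs * drift)
rec = np.sin(2 * np.pi * 440 * np.arange(n_rec) / (fs * drift))

naive = level_db(ref - rec[:fs], ref)                    # no compensation
fixed_sig = resample_linear(rec, drift)
n = len(fixed_sig)
comp = level_db(ref[:n] - fixed_sig, ref[:n])            # drift-compensated
print(comp < naive)   # compensating the drift buys a much deeper null
```

Even a 50 ppm mismatch, far too small to hear as a pitch change, leaves a large residual if uncorrected, which is exactly the "drops into silence midway, returns at the end" behavior the documentation describes.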

Given that the Difference signal is -90 dB from the reference and that the noise level of my Dell sound card is -86 dB, we are at the limits of my test gear. A -90 dB signal is inaudible compared to the reference signal level.
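For readers who want a feel for these numbers, converting a dB level to a linear amplitude ratio is one line of arithmetic (the -90 dB and -86 dB figures come from the measurements above; the function name is mine):

```python
def db_to_amplitude_ratio(db):
    """Convert a level in dB to a linear amplitude ratio."""
    return 10 ** (db / 20)

diff_ratio = db_to_amplitude_ratio(-90)    # Difference track vs. reference
noise_ratio = db_to_amplitude_ratio(-86)   # recorder's own noise floor

print(round(1 / diff_ratio))     # 31623: the residual is ~1/31623 of the reference
print(diff_ratio < noise_ratio)  # True: the difference sits below the noise floor
```

In other words, the measured difference is roughly 4 dB below what the recording chain itself contributes, so the test gear, not the music players, sets the floor here.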

I am not going to reiterate my subjective listening test approach as I covered it off in my FLAC vs WAV post.

In conclusion, using my ears and measurement software, on my system, I cannot hear or (significantly) measure any difference between JRiver and JPLAY mini (in hibernate mode).

April 2, 2013 Updated testing of JRiver vs JPLAY, including JPLAY ASIO drivers for JRiver and Foobar plus comparing Beach and River JPLAY engines. Results = bit-perfect.

June 13, 2013 Archimago's Musings: MEASUREMENTS: Part II: Bit-Perfect Audiophile Music Players - JPLAY (Windows). "Bottom line: With a reasonably standard set-up as described, using a current-generation (2013) asynchronous USB DAC, there appears to be no benefit with the use of JPLAY over any of the standard bit-perfect Windows players tested previously in terms of measured sonic output. Nor could I say that subjectively I heard a difference through the headphones." Good job Archimago!

Interested in what is audible relative to bit-perfect? Try Fun With Digital Audio - Bit Perfect Audibility Testing. For jitter, try Cranesong's jitter test.

Happy listening!

Updated 06-15-2013 at 07:09 PM by mitchco



  1. joelha's Avatar
    Absolutely tremendous report, Mitchco.

    I for one had never been able to hear a difference between WAV and FLAC and between JRMC and JPlay.

    Yet, I had read so many posts stating that others could detect noticeable differences.

    Thanks for bringing some badly needed objectivity into a hobby where emotions seem, at least some of the time, to so heavily influence listening results.

  2. Julf's Avatar
    Have to agree - great work! Why speculate, when actual facts are reasonably easy to find out? OK, the truth is maybe less entertaining than the wild claims of the voodoo fringe of esoteric audiophilia, but sometimes we just have to deal with plain, boring reality.
  3. jriver's Avatar
    Great detective work, mitcho. You deserve the Smoking Gun Award. Well done!
  4. blaine78's Avatar
    great effort.

    JPLAY, to me, does sound marginally more focused with a bit more presence and tighter.

    WAV vs FLAC, WAV always wins out for me being more focused, tighter and alive.
  5. crisnee's Avatar
    An interesting test might be to test Jplay vs. Jplay. Jplay in its most convenient mode and Jplay in hibernate mode. If there's no difference there, that might explain why there's no difference between it and JRiver, or it might mean that hibernate does nothing for SQ, or it might mean that your pc isn't "dirty enough" to need "cleaning" by Jplay's hibernate mode.

    Similarly, it might be worthwhile to run your test on several rather different pcs. The reason would be to determine whether you just happened to run an ideal pc for JRiver, or if a pc's electrical noise factor and wireless keyboard/mouse/network and whatever else is said to cause SQ problems, can be overcome by Jplay's hibernate mode, but not by JRiver.

  6. Mark Powell's Avatar
    People have to pretend they hear a difference. They may even believe it. After all, they paid twice as much for JPlay as they did for JRiver.
  7. blaine78's Avatar
    LOL. no pretending. some just do hear it. trust your ears.

    PS. I didn't pay for JPLAY I paid for JRiver because of the functionality.

    So in reality, i should be favouring JRiver because i need to justify paying for it?
  8. crisnee's Avatar
    Mark, I don't think it's that simple.

    I happen to be somewhat aware of the beginnings of Jplay and a bit of its development. Initially it was just a very simple free player (if you could call it a player): a basic command line argument or two that played a file, or a few. The person who developed it, as far as I could tell, had no thought of making it into a saleable thing. He was just after getting the best sound possible from a computer. He and others thought they had something that sounded significantly better than standard programs like JRiver, Foobar etc.

    So, my point. The developer and the people who tested and demoed his little player gadget just thought it sounded particularly good, no money involved. How it became a commercial endeavor, that's another story. But a number of people loved it before they were driven to love it (voodoo set in) because they paid a lot for it, or had anything including pride invested in it.

    I'm not making an argument for its SQ; I did try it at the time, but wasn't in a position to make a definite judgement. I'm just saying your comment is a bit too easy.

  9. Mark Powell's Avatar
    Proves there is no difference. It really does. I told him that his original Flac/WAV results would not convince the believers in voodoo. This won't either.

    I do believe in some things. Different analog interface cables do change the sound. So do speaker cables. Whether changing a cable makes the sound 'better' is a separate issue.

    People are of course free to believe whatever they wish to believe.
  10. PeterSt's Avatar
    Proves there is no difference. It really does.

    Because ?
  11. Mark Powell's Avatar
    But maybe not this time. I have two measuring instruments. They are called 'engineer's rules'. They measure the length of an object. This is an engineer's rule too. I believe all three of them :)
  12. PeterSt's Avatar
    Too bad that nobody comes up with the idea that the test is flawed. I really am floored by the responses I read, starting with the WAV vs FLAC thing.

    Praise here praise there (which Mitch really deserves btw).

    I can't imagine that you guys don't see it.

    But ok. Who am I.
  13. Mark Powell's Avatar
    Where is it flawed?
  14. PeterSt's Avatar
    I just can't imagine that I (or we) should even be starting to talk about this. Btw, I would rather leave it like this. It may make you all feel more comfortable. But then of course I too don't sell my software anymore. Haha.

    FYI: I developed analysis software for this myself, so let's say I know what I'm talking about. If you cough, I see the difference. Ok ?

    What happens here is maybe best explained by an example :

    We are going to test two CD transports. DAC behind it is the same, amps behind them is the same - all is. There's a preamp connected in between the DACs (of same brand) and the amps and the inputs can be switched on the fly.

    100 people listen to it - blind or not.

    The transports both have good reputations, and all a transport does is read the data from the CD and pass it on to the DAC over SPDIF. It is believed that they CAN NOT sound different, because all they do is read the bits from the CD. There's error checking in both transports, and neither shows errors. Let's say that even AccurateRip is inside them.

    The test leader is controlling the switch, and it is clearly visible when he switches between the two transports. It is no secret that he does and when he does it.

    The audience is to tell whether they perceive a difference between the two transports, which clearly can not be. Bits is bits you know ...

    Without exception, all 100 attendees do hear a clear difference. It is not about whether one sounds better than the other, just that there might be a difference. The suggestion from the uninformed is that bits do matter, even though they are the same.

    After the event the test leader is puzzled. Anyway, he makes his report and he also unveils a big secret : one thing was different ...

    He wasn't able to find two pairs of interlinks from the DACs to the pre of the same length. One pair was 2ft, the other pair 5ft. Still his conclusion is:

    It is clear that the difference people perceived was because of this different length of interlinks. Bits are bits.

    That's where I am floored.

    As the only one. Happens more often ...
  15. Audio_ELF's Avatar
    To some people the fact that you measured no differences just shows you measured things wrong.

  16. Mark Powell's Avatar
    Though valid, shows a difference. His didn't
  17. PeterSt's Avatar
    His didn't

    Huh ??

    Then I must be blind.
  18. Mark Powell's Avatar
    They say he, she, or it is all powerful. Next breath they say he, she, or it didn't know.
  19. screenmusic's Avatar
    This is not a matter of believing, it's a matter of hearing; not even understanding, but feeling.

    But on the other hand, here, in this fine-tune obsession that we all share, comes individual perception... and everything depends on it. And, as you can imagine, perception varies from person to person as much as the lines of our faces. If you need to prove that one player is different from another by technical means, then you cannot trust your perception and you are missing the point.
  20. abok's Avatar
    Hats off to a great amount of work. My problem with what is easily declared as "truth" is that my personal perception of what is happening in my listening room is very much different. I like JRiver for the looks, the ease of use etc. So I planned to buy, but I stumbled upon Stealth Audio Player, which does not cost a penny, just (another) act of love. Given the sonic result, the guy pretty much knows what he is doing. For me, improved clarity and even more control when the music gets more complex are a sign of clearly doing something right, without having to discuss personal taste.

    On the other hand, JRiver was some sonic improvement over my foobar setup.

    This leaves me with foobar definitely sounding worst, though it is for a lot of reasons, the player I would like to be on par with the best.

    It's not a long shot that we wouldn't see any differences between foobar, JRiver and Stealth in their bit-perfect modes, I guess. That would mean they all sound exactly (EXACTLY) the same.

    Now, IF this is the/your case, I'm totally out of the game. When I use all of the three programs for what they are meant for, my reality IS different from yours, without having to spend a dime.

    Sidenote: IMHO, Peter's XXHighEnd and Josef's JPLAY don't sound identical with foobar or JRiver or stealth, too.

    Funny thing: Because you are (yet?) not able to tell the difference (in numbers) between the work of Matt or Josef (or maybe Peter?), you claim that there definitely is no audible one. That is what *I* call strong belief. And as long as anyone / the legions that are jumping by ("Oh my God! The truth! It is revealed!") is trying to install these findings as valid truth for really everybody, I wish you had not done all that work...


  21. Mark Powell's Avatar
    Unlike the 'sound' of cables, this is actually measurable.
  22. Audio_ELF's Avatar
    Actually surely if this testing is valid for software, you could use the same recording technique with cables.

  23. Mark Powell's Avatar
    Unfortunately your comments are about JRiver, Stealth, and Foobar. Maybe there is a difference between those. But that's just a red herring. The test was not between those.
  24. Julf's Avatar
    "[i]But in the otherhand, here, in this fine tune obsession that we all are, comes in individual perception...[/i]"

    Absolutely. But what is your goal - achieving a setup that sounds pleasing to your ears, or achieving the most accurate reproduction of the original recording? They are two very different goals.

    Nothing wrong with the former, unless you then generalize the results to others, stating "equipment A has much better sound quality than equipment B" instead of saying "to my ears (and brain), A sounds more pleasing than B".
  25. Julf's Avatar
    "[i]To some people the fact that you measured no differences just shows you measured things wrong.[/i]"

    Indeed. Mitchco's "research" is not research. This is research:
  26. abok's Avatar
    Herring point taken. I didn't make myself clear enough either. With JPLAY mini [perceived difference to JRiver bigger] and a measuring setup which says it is all just the same, I just assumed that you/we might not detect any differences between Stealth and JRiver either [perceived difference is smaller]. So at least this is an assumption.

    In *my* real life I just can't set them up in a way that the music sounds the same. That's why.

    Anyway, Stealth player would be a good point to start if anyone were in for more on this case, because it doesn't belong to anyone's camp and is free.


  27. Mark Powell's Avatar
    That many people don't like reality. It's the same here. Eloise has pointed it out also. His tests on both subjects are impeccable.

    Sadly, pointless, though interesting. We already knew, at least on Flac/WAV, the 'believers' will not change their views.

    Doesn't matter. I smiled at a Jehovah's Witness the other day. Doesn't mean that I agree with him.
  28. dallasjustice's Avatar
    Thanks for doing this. I wonder whether it would be worthwhile to do the same test comparing Linux against Windows on a dual boot computer?
  29. losingmyreligion's Avatar
    "And as long as anyone / the legions that are jumping by ("Oh my God! The truth! It is revealed!") is trying to install these findings as valid truth for really everybody..."

    I believe this test performed a valid function for the forum, in that it serves as a marker for those people who would take a single data point and as you say "install these findings as valid truth for really everybody".

    Seen in that light, it's grand entertainment. :)
  30. 4est's Avatar
    But is it a comedy, a tragedy or both?
  31. Mark Powell's Avatar
    A truth is a truth. The number of people doesn't matter. Facts existed even before people. And of course a truth is 'valid' by definition. Same as 'very unique' and 'past history', both of which make people laugh.

    That said, it *seems* perfectly valid to me and I can't find any holes in it. That of course does not mean that it actually *is* true.
  32. mitchco's Avatar
    The reason I wrote the FLAC vs WAV and this post was to show that my computer audio playback system is working correctly.

    FLAC and WAV are lossless audio file formats; the audio data they carry is bit-for-bit identical.

    Bit-perfect playback: "in audio this means that the digital output from the computer sound card is the same as the digital output from the stored audio file." and "Poor device drivers often alter the data, resulting in it not being bit-perfect. This is especially true for device drivers used in consumer-grade sound cards."

    If you are hearing a difference between any lossless audio file formats and/or bit-perfect music players, then there is something not working correctly with your computer audio playback system (i.e. it is not bit-perfect playback).
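One way to check the "bit-for-bit identical" claim yourself is to decode both formats to PCM and hash the sample payloads, ignoring header metadata. This is a sketch of my own using only the Python standard library; the file names are hypothetical, and the FLAC side is assumed to have been decoded to WAV first (e.g. with the reference `flac -d` tool):

```python
import hashlib
import wave

def pcm_digest(path):
    """SHA-256 of the raw PCM frames in a WAV file (headers excluded)."""
    with wave.open(path, "rb") as w:
        return hashlib.sha256(w.readframes(w.getnframes())).hexdigest()

# Hypothetical files: the original WAV and the FLAC decoded back to WAV.
# If the formats really are lossless, the digests must match exactly.
# print(pcm_digest("refugee.wav") == pcm_digest("refugee_decoded.wav"))
```

Hashing only the frames (not the whole file) matters because the container headers legitimately differ even when the audio payload is identical.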

    The "free" measurement tools I presented can assist in troubleshooting what might be the issue(s).

    On Windows, you can use:

    DPC Latency Checker: DPC Latency Checker is a Windows tool that analyses the capabilities of a computer system to handle real-time data streams properly. It may help to find the cause for interruptions in real-time audio and video streams, also known as drop-outs.

    To me, DPC Latency Checker is a critical tool because, in my experience, a high-latency computer is the number one reason things go wrong. If you look at the latency on my computer, it is 10X below the accepted threshold. I designed my computer this way to ensure I never have any latency issues.

    RightMark Audio Analyzer: an excellent tool to measure the electrical noise present in your computer audio system. You can also check frequency response, distortion, etc., but it is the noise measurement that we are mostly interested in.

    Pro tip: have a look at the size of the power supply I use in my computer. Again, in my experience, more headroom means less load, which means lower electrical noise. In addition, the Lynx L22 sound card has good noise rejection and a very low noise floor (-107 dB measured on my rig with DAC + ADC in external loopback mode).

    Audio DiffMaker: a freeware tool set intended to help determine the absolute difference between two audio recordings, while neglecting differences due to level, time synchronization, or simple linear frequency responses.

    I purposely included the DAC and analog line output amplifier in my tests to show that a) the Digital to Analog conversion and the analog line output amplifier are not altering the bit-perfect waveform in any way, and b) the electrical noise of my playback computer is so low that I am into the noise floor of the measurement computer.

    This means that my computer audio system is operating as it should. Therefore, I should not hear any difference between any lossless audio file formats or bit-perfect music players.

    The tools are free and the tests are simple. I encourage folks to try these tools out to ensure they are getting the best performance out of their computer audio playback system.
    Updated 05-09-2012 at 11:31 AM by mitchco
  33. 4est's Avatar
    Interesting how qualified individuals have been working on this for some time, and suddenly it is "solved", er, discredited.
  34. Mark Powell's Avatar
    "If you are hearing a difference between any lossless audio file formats and/or bit perfect music players, then there is something not working correctly with your computer audio playback system (i.e. it is not bit-prefect playback"

    So all lossless formats and all bit perfect players should sound the same if *your* computer is working properly.

    So hopefully no more of this gibberish about 'truth for everybody' and a simple explanation "your computer is broken" if they hear differences.

    Good Good Good.

    Unfortunately it will not change anyone's opinion, but at least we know that such opinions can be safely ignored.

    BTW, the current JRiver has a 'bit perfect' light, so you know that both it, and your computer, are working correctly.

    So those who care about sound quality can concentrate on improving (latency, etc) their computers.
  35. Mark Powell's Avatar
    What's a 'qualified' individual? And who says he isn't?

    We all knew what would happen, no doubt including the OP. And it is a whole lot more constructive than those, and there are some (not you), who *never* voice an opinion but prefer to sit on the sidelines and criticise those who do, such criticism often based on their undefined 'long experience'.
  36. abok's Avatar
    Brings me right back to what screenmusic said earlier.

    My point of view is about perception. Which is subjective and can be coloured or wrong. But as long as you really need software helpers that give you numbers or curves to judge whether this player is the player you'll learn to love John Coltrane with, you miss my point.

    And as long as I really need to listen to music for hours to evaluate, I might be missing yours.

    To each his own.


  37. Mark Powell's Avatar
    Press the download button and jump back!

    My run-of-the-mill laptop is fine. With JRiver, a flight simulator (not the Microsoft one), and a video in Internet Explorer all running, it's fine. Task Manager shows a max of 80% CPU with all that going, only 1 or 2% with just JRiver, and the latency checker says OK all the time.

    Had a bit of a problem flying the airplane and watching the other dials!
  38. bstcyr's Avatar
    If there is a disagreement between measurement and perception, there are several possibilities. The perception of some individuals is different than most of us and includes something that can't be measured; some folks insist on seeing ghosts, talking to dead people etc. (one of them used to be a Prime Minister of Canada, Mackenzie King). That doesn't make their perception wrong, just not relevant for most of us.

    Another possibility is that the measurement is wrong or incomplete. Back in the 1950s when transistors first came out, engineers claimed transistor amps to be nearly perfect. Many listeners disagreed, and it was discovered that since the engineers were testing the new transistor amps the same way they tested tube amps, they were missing some distortions. Once they learned how to test and measure for these distortions, the race was on to improve transistor circuits to get rid of them. I'm not saying there is anything missed in these measurements, only that it is possible.

    We know that the human perception system is an amazing thing. Most people can drink wine and say I like it or I don't like it. Some people can drink wine and identify the types of grapes used and sometimes even the year. I have taken part in, and run, blind listening tests (I know some people disagree with blind listening as a valid test) with all kinds of equipment (CD vs records, cables, speakers, amplifiers, ...). I'm most likely to be cynical and agree with the measurements if the blind listening tests verify the measurements. When they don't, then scientifically it means we need to try to find the reasons why.

    In my own tests I would tend to agree that I don't hear these differences. My ears are pretty good; I play and build guitars, have for 40 years, and have been all over the world studying guitar building. I have, I believe, like a wine taster, practiced listening for small subtleties in the sound of instruments. My sound system consists of about $20k of well regarded 2-channel CD playing equipment (although I am experimenting with computer based). I would be happy to have anyone who thinks they perceive a difference verify that. Every time I thought I could hear a difference that was not measurable and I set up a careful ABX test, I found that there was a reason, or that I couldn't reliably hear the difference after all.

    The real challenge here, I think, for those who feel that this test does not explain what they are hearing, is to find a way to verify that. Mitchco has done an excellent job of describing and supporting his position. Those of you who disagree are free to do so of course, but please take the stance that you have a different opinion, which is at this point only opinion.
  39. screenmusic's Avatar
    If you are a wine taster then you should know that perception, although different in each person, varies greatly from subject to subject; some will perceive more, others will perceive less. And as a luthier you should know that every detail counts. If you build a solid Adirondack spruce top acoustic guitar, you know that the sound is different from a solid Sitka spruce top, or the warmer tone of Indian rosewood against the brighter mahogany sides. If you use a cow bone on the bridge it will sound edgier and a little more sterile than fossilized walrus bone, with its richer overtones and sweeter treble.

    Then everything resides in perception. I love sound; as a musician and producer I am thinking about sound all the time, always looking for alternatives to get a more realistic and natural sound in my recordings. I did many blind tests before; if someone turns the direction of an interconnect cable, I will notice it. If you work on your perception, it is incredible how far it can reach. It is not an illusion, it is real fact. Can you measure beauty?
  40. crisnee's Avatar
    Let me try this again, but in a different manner.

    Mind experiment: Consider that computers, the grid, and the relevant audio components of a given system encompass almost infinite variables re SQ. Now consider JRiver and JPlay. Put JRiver in your system, call it system J. Start over, and put JPlay in your system, call it JP.

    Compare system J to JP in mind (not listening). You've got two very different systems. Now unless everyone has the perfect computer for audio (whatever that is) it's likely that J and JP will sound different.

    JP and other niche "highend" players attempt to mitigate the "flaws" in computers that contribute to less than "perfect" SQ. (That's how I see it anyway, I could be wrong as I know nothing real of any of these players). So in "perfect for audio" computers perhaps JRiver (which I don't think believes in mitigating computer audio flaws, to a significant degree anyway) and JPlay would sound the same.

    However, if JPlay successfully mitigates flaws in "imperfect" computers, it's likely to sound better or different from JRiver because, I think most of us agree, computers can be and often are flawed in many ways when it comes to audio.

    This would certainly explain why people hear differences when using different players.

    Notice, by the way, that bit perfect is irrelevant here and that Mitchco's tests prove nothing for any system but the one he tested on and those that are exactly the same, possibly down to power source and time of day. This may sound like nitpicking, but a lot of the SQ differences seem to be mighty small nits too.

  41. Mark Powell's Avatar
    Of course not, it is subjective, I might like a different guitar from you.

    But the whole point of this test is that his results *are* measurable. They are *not* subjective, and on his particular computer they are identical, so no one will hear any difference. One might even say that the more perceptive person should be more sure of the 'no difference' than the less perceptive person.
  42. mitchco's Avatar

    "Bit-perfect audio/video does not perform any digital signal processing (DSP) such as channel matrixing, filters and equalizing and does not do any resampling or sample rate conversion (such as upsampling or downsampling). In audio this means that the digital output from the computer sound card is the same as the digital output from the stored audio file. Unaltered passthrough.

    The data stream (audio/video) will remain pure and untouched and be fed directly without altering it.

    Bit-perfect audio is often desired by audiophiles."

    "Lossless compression formats enable the original uncompressed data to be recreated exactly"

    Digital audio has a series of technical specifications, coded file formats, application programming interfaces, etc., that all manufacturers of digital audio players, workstations, etc., must adhere to. This is what makes bit-perfect audio possible.

    I totally agree, different computers have varying degrees of latency, electrical noise, performance issues, you name it. The list of free tools provided is a way to troubleshoot and verify computer audio performance.
    Updated 05-09-2012 at 11:32 AM by mitchco
  43. Audio_ELF's Avatar
    One thing Mitch may want to test is running the same comparison using two deliberately non-bit-perfect outputs, to see if the comparison shows something measurable. Perhaps using the software's EQ in a noticeable but subtle way...

  44. johniboy's Avatar
    Very nice experiment. There is just one caveat I see: the Audio DiffMaker software MAY in fact dilute some of the differences between tracks which are only present in the time domain.

    Here is their paper:

    This sentence made me nervous:

    "In the time domain, a digitally recorded track can be easily shifted only by whole samples. But if it is transformed into the frequency domain, delays can be easily varied by any amount by adjusting the phase of each component an increment proportional to its frequency. For this reason, trial and error iteration to optimize delay compensation to fine values is done in the frequency domain."

    If I understand this correctly, the software iterates through different parameters to adjust and compensate for shifts in the time domain. This could end up discarding differences that are really present, merely by overfitting. I may be wrong, but a more conservative alignment approach might lead to results that better match reality. Probably not an easy task.
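    The frequency-domain trick the quoted passage describes is easy to demonstrate: delaying a signal by d samples corresponds to multiplying each FFT bin at normalized frequency f by exp(-j2πfd), and d need not be an integer. A minimal sketch of my own (not DiffMaker's code, and using a circular delay for simplicity):

    ```python
    import numpy as np

    def fractional_delay(x, delay_samples):
        """Delay x by a possibly fractional sample count via an FFT phase ramp."""
        n = len(x)
        spectrum = np.fft.rfft(x)
        freqs = np.fft.rfftfreq(n)                 # cycles per sample
        spectrum *= np.exp(-2j * np.pi * freqs * delay_samples)
        return np.fft.irfft(spectrum, n)           # circular (periodic) delay

    fs = 48000
    x = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)   # periodic over buffer

    y = fractional_delay(x, 0.25)        # a quarter-sample delay
    x_back = fractional_delay(y, -0.25)  # and back again
    print(np.max(np.abs(x - x_back)) < 1e-9)   # round trip is essentially exact
    ```

    Whether iterating over such continuously adjustable delays to maximize the null can "over-fit" and absorb genuine time-domain differences is exactly the question raised above.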

    Just some thoughts.
  45. 1audio's Avatar
    I believe the results of the tests are completely valid within the limits of the test. There are a number of external aspects that limit the validity of the tests. Its point of measurement is the line level signal at the output of the DAC/preamp. Ideally you would want the acoustic signal, but that is very limiting. The next best is the speaker terminals.

    You also need to be sure that adding the extra computer isn't somehow affecting the measurement (the observer effect) in a way that limits what it can show. This and a lot of other things are why really good research is hard work.

    First, the results are only valid for the conditions of the test. The tests need to be repeated on a number of different systems to get any idea whether the results are consistent or repeatable on other systems (I suspect they mostly will be, but keep in mind that the Lynx card is very good and may do a better job of suppressing differences in the source data timing). Then we really need to test for sensitivity. This is difficult for subjective stuff. There is some existing literature that can point to some things to test for.

    I think this is important and I'll try to contribute when I get a chance, but I would not say these results are conclusive, just very indicative.
  46. Windows X's Avatar
    It all comes down to this: "Hearing is believing". I bet he'll get similar results if he compares foobar or even WMP to J River, differences between DirectSound/ASIO/WASAPI/KS, etc.

    It's just hobby. Why so serious?
  47. Mark Powell's Avatar
    On ripped CDs I cannot tell the slightest difference between a maximally tweaked JRiver 17 on maximally tweaked Windows, and WMP left 'as is' with Windows also left 'as is'.

    Obviously my several tens of thousand dollar 'system' is total crap. My ears are fine as they are large and stick out.

    It's a hobby. And the music entertains us too :)
  48. crisnee's Avatar
    Ok, let me try this from yet another angle.

    If so-called audiophile software players deliver a particular unaltered bit perfect file to the dac and they are playing on "perfect for audio" computers, said players should all sound the same. I'm assuming that if they don't, they have a problem and they're not ready for prime time. If we can't agree on that, well then we're talking beyond current measurement capabilities and we need not discuss it further. If we can agree on that, we've agreed on something of very little value, little has been revealed. And that's all that mitchco's test(s) reveal.

    I think that players differ in how they deal with real world "imperfect computers" and that's why they may sound different. They need to be tested with that in mind; Then the tests will have value, and tell us something.

    Mitchco acknowledges as much by having us test our machines. But we can't test for everything, and even if we could, then what? Much simpler if a software player takes care of that for us.

    Players also have added features (various forms of DSP, upsampling) which set them apart from each other and which may make them sound better, but those features should all be disabled for testing purposes.

  49. Mark Powell's Avatar
    High fidelity is 'accurate reproduction', presumably of the source you have available.

    So ALL hi-fi systems should sound alike. They don't, of course. But the more you pay, the more they should sound the same. If not, there is no point in paying more than a few hundred dollars for the lot. High fidelity does not mean 'I like the sound my expensive equipment makes'.
  50. crisnee's Avatar
    It's hi-fi, not perfect-fi. If it were perfect-fi, you'd have a case... but what's it got to do with anything in this thread?
