Ron Jones

  1. "How do you explain the fact that the latest iTunes S/W download when ripping ALAC sounds better on playback than previous versions of iTunes, which also had bit-perfect ALAC data?" Do you have data/evidence to suggest that this is indeed a fact rather than just a false perception on your account? It isn't beneficial to the users here to spread potentially misleading information, especially when users are attempting to make a product buying decision.
  2. "If a signal is bit perfect, and large numbers of qualified observers hear differences in the output, surely there is another significant variable." The most likely variable is that all listeners have the same bias influencing their perception. If you were to remove the possibility of bias by making the listeners blind to it, only then are you able to gain meaningful data. Otherwise, the observations can be appropriately discounted. We shouldn't even ever really discuss the idea of doing any kind of A/B testing and discussing the results if we aren't blind: it's completely pointless. "Perhaps what has been called “jitter” is a much more pervasive phenomena, having to do with the movement of acoustical data, than has been assumed." Jitter is fairly well understood. It can be measured, there is a well-defined threshold at which it begins to become an audible issue to the typical listener, and it can be (and is typically) suppressed by various techniques inside the audio devices we use on a daily basis. It, however, has nothing to do with which application is outputting the signal. Jitter comes into play with the rest of the reproduction chain. "I also want to say also that there is an anti-placebo effect." The so-called "nocebo" effect isn't a factor in A/B testing. If I'm ABXing an MP3 against its lossless counterpart, for example, I know I'm being subjected to a control: the lossless file. I also know I'm being subjected to the test file: the MP3. I'm merely allowing my ears to attempt to discern potential differences, and my brain doesn't know whether the MP3 is A or if it's B. In an ABX test, I'm attempting to match the hidden MP3 to another hidden MP3. I could just as easily do the opposite: attempt to discern which is the MP3 and instead match the two lossless files, if I can. A placebo or "nocebo" can only enter if I don't know if both A and B are actually identical. The nocebo effect comes into play when I'm aware of the possibility that I'm only being subjected to a control (the sugar pill) and not receiving the test (the actual medicine). My brain has the potential to discount the idea that I'm receiving a type of medicine that will alleviate my symptoms, and in so doing, may null or attempt to null the perceived effects of the placebo.
  3. "I would love to believe applications are all the same as long as the output is bit perfect, but my experience last weekend was so dramatic I cannot do it." Well, how do we define bit-perfect? If we assume that bit-perfect means: A) All individual samples are identical to the original source. B) All individual samples are replayed at the correct rate. Then we can assume that in a bit-perfect playback chain, timing differences are not relevant. If we assume that bit-perfect only indicates the samples are identical to that of the original source and played back at (theoretically) any speed, then time domain-related variances may possibly contribute to a different sonic signature -- depending upon numerous factors -- but still allow the output chain to be defined as being bit-perfect. If we look back at Alex's most recent tests involving outputting both FLAC and WAV over S/PDIF and recording the result, his result was that bit comparation was successful: the files were transmitted identically and received identically without any elaborate external clocking mechanisms. The input was simply clocked at input sample rate, which is typical of what will happen with any typical S/PDIF input, and re-quantized identically. We can therefore assume that any clock jitter present was not a barrier to proper signal reconstruction. In the real world, when we have two devices attempting to work at the exact same speed, there will be variances: this is inevitable. With digital audio, buffers help to eliminate the possibility of time-related errors, and signals are clocked with external, independent logic (often times with an oscillating crystal). As Axon's explained, if the data stream leaving one application is bit-perfect -- if it can be properly reconstructed at some other input -- the rest of the chain, which is independent of the application, takes over. We're always going to be talking about two, totally independent clock sources unless both the output and input devices are locked to the same clock. As such, timing differences would reveal themselves as bit errors at the end of the chain. So, the only clear way to test for potential output differences between two applications is to test them to make certain they output bit-perfect. A reasonable test would be to do as Alex did and simply out-and-in over S/PDIF and compare the bits. If the bits are identical (if they pass a bit comparator) you're done testing. You don't have to null the results against anything else but the original source file. Clock jitter cannot be a variable if the clocking source (which isn't in the application itself) does not change. Since there are no other variables, no further testing is required: you've verified accuracy down to the sample level. If the samples are there, and they all match up, the perception of an audible difference could then be appropriately attributed to the placebo effect. "In your scientific opinion is it possible for two CD Transports to sound different when the output is bit perfect?" As I define bit-perfect, no. "I think timing is critical and a major reason professional studios and audiophiles use external clocks." There are certainly valid reasons for using accurate master clocks in the studio, where multiple machines must sync with each other correctly and, secondarily, so cumulative timing errors are minimized. 
As for audiophiles, I don't really understand the reasoning given that digital audio is typically only passed over a single digital link once before being output (and because good DACs are typically quite jitter-resistant anyway). Jitter is most certainly a non-factor in most playback systems.
  4. @Chris: "I really wish I had an answer for this one but I'm not that smart! I think it is closely related to Applications having a different sound even though they have bit perfect output."

     I think Axon's explained how this isn't realistic. If you reference Alex's testing at HA, you'll be able to duplicate his positive results regardless of the application playing back the files...if the application can spit out a bit-perfect stream. The bottom line: if the signal leaves the sound card's output bit-perfect, a software application cannot further interfere with the sonic signature. That's akin to saying that an amplifier can modify the sonic signature of the signal being output by one's speakers (that the audio waves emanating from the speakers are somehow influencing the way the amp behaves).

     "After a couple hours of listening we played a song through MediaMonkey and a couple of us turned to each other immediately and complained about the sound quality."

     The explanation for this is perhaps remarkably simple. If you weren't blind to which application was being used for output, your preferences may have impacted your judgment. It's similar to listening to an MP3, then listening to a FLAC, and attempting to note subtle differences between the two. Such conclusions are non-authoritative (even personally) unless you are blind to potentially influential preferences. If you have an established psychological bias against MP3, the potential that this bias will influence your perception of its playback is quite significant. That potential is totally unquantifiable, but it's a variable in a system of evaluation that needn't, and by all accounts shouldn't, exist. Even armed with the knowledge that both applications are outputting bit-perfect, a potential for bias is still present if you believe an additional layer of application-specific degradation somehow exists after that.

     @Tog: "Why does this matter? Apple Lossless and flac are great ways to keep reasonable fidelity when storage was scarce or expensive. Now that you can get terrabytes of space quite cheaply surely vanilla PCM or aiff is the way to go anyway."

     For most? Tag support. While WAV and AIFF can be tagged, there is little to no consistency between applications. On Windows, I'm unaware of any application (besides iTunes) which can read or write ID3 tags embedded in an AIFF chunk, and I don't know whether iTunes for Windows can even do that much. To use FLAC as an example: FLAC-supporting applications, on the other hand, are all able to read and/or write Vorbis comments, the tagging scheme FLAC primarily uses (see the tagging sketch after this list). There are other usability benefits as well, not to mention space savings. While you can store practically any type of data in a WAV (RIFF) or AIFF chunk, it's only usable if applications are designed to read, interpret and display that data as something you'll find meaningful. And, to correct your comment about FLAC and ALAC only retaining "reasonable fidelity": they actually retain original source fidelity. There is no compromise.
  5. @AV-OCD: "Apparently his sound card adds dithering to the recorded file, so what he was left with was only the dithering noise added by the sound card, essentially confirming (I believe) that the two musical signals canceled each other out."

     Effectively. The noise was, for all intents and purposes, identical to the noise I observed when I phase-inverted two recordings of an identical WAV file. Based on the level of the noise, and the lack of any perceptible difference between the residual noise obtained from the FLAC recording and from the WAV recording, one can surmise, though without absolute certainty, that the FLAC and the WAV aren't audibly different. I had hoped to demonstrate that the resultant recordings would be bit-perfect, but failed due to software inadequacies which Alex intelligently worked around. I consider my testing only partially conclusive that the differences that may (or may not) exist between playing back a FLAC and playing back a WAV are inaudible. For the purposes of this discussion, that is, in my mind, a sufficient result. If I had managed to achieve dither-less recordings, I could have delivered God himself (but Alex was nice enough to have done that for me). (A rough sketch of this kind of null test appears as the last example after this list.)

     @soooowhat: "how the heck can something like the dithered noise be completely dismissed as possibly representing (or disguising) the difference signal that would occur if there were a difference between the phase inverted and non-inverted signals?"

     It can't. One can't assume the output of both the FLAC and the WAV was bit-perfect. As I said in my posts, the results of my test were only sufficient for me to confirm something not necessarily relating to "bit-perfectness". That was my pursuit, and I failed at achieving it, but my testing wasn't necessarily useless despite that. As for the resultant noise, the idea that it was "disguising" potential audible differences between playing back the FLAC and playing back the WAV isn't realistic, based on its RMS level and the lack of any significant peaks. The fact that the noise level was constant regardless of every other factor (even during periods in which my audio player wasn't playing anything, but I was recording anyway) is, I believe, quite telling. Even if we assume, in a hypothetical worst-case scenario, that errors exist in the noise floor directly beneath the resultant inverse dither noise, at what point do they become potentially audible? When the playback system is driven beyond the threshold of pain?

     "My point being, given that there clearly was a difference signal - when the test hypothesized that a lack of difference signal would (i.e. was needed to) prove that the files were identical - this test was absolutely NOT valid for proving that there was no difference between the two files."

     This is 100% correct. My testing wasn't able to demonstrate that there weren't bit differences because of the dither which had crept into the "test chain".
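
A few rough Python sketches follow, since several of the points above lend themselves to small worked examples. First, scoring a blind ABX session (see post 2): the only question that matters is whether the listener did better than coin-flip guessing. This is a minimal sketch; the trial count and number of correct identifications are purely hypothetical.

    # Minimal sketch of scoring a blind ABX session. The session numbers below
    # are hypothetical; the point is the binomial check against chance.
    from math import comb

    def abx_p_value(correct, trials):
        """Probability of getting at least `correct` matches out of `trials` by guessing."""
        return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

    trials, correct = 16, 12                      # hypothetical session result
    p = abx_p_value(correct, trials)
    print(f"{correct}/{trials} correct, p = {p:.4f}")
    print("difference plausibly heard" if p < 0.05 else "consistent with guessing")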
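Second, the "out-and-in over S/PDIF and compare the bits" step from post 3. This is only a sketch: the file names are hypothetical, and a real loopback capture normally needs its leading silence trimmed and its start aligned before a byte-for-byte comparison means anything.

    # Sketch of a bit comparator for a source file versus a loopback capture.
    # File names are hypothetical; alignment/trimming of the capture is omitted.
    import wave

    def pcm_payload(path):
        with wave.open(path, "rb") as w:
            return w.readframes(w.getnframes()), w.getparams()

    src, src_params = pcm_payload("source.wav")      # original file
    rec, rec_params = pcm_payload("loopback.wav")    # S/PDIF out-and-in capture

    if src_params[:3] != rec_params[:3]:
        print("channels / sample width / rate differ: not directly comparable")
    elif src == rec:
        print("bit-perfect: every byte of PCM data is identical")
    else:
        diff = next((i for i, (x, y) in enumerate(zip(src, rec)) if x != y), None)
        if diff is None:
            print(f"payload lengths differ: {len(src)} vs {len(rec)} bytes")
        else:
            print(f"not bit-perfect: first differing byte at offset {diff}")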
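Third, the tag-support point from post 4. Assuming the third-party mutagen library and a hypothetical file path, writing Vorbis comments to a FLAC file is one line per field, and any FLAC-aware player reads them back the same way; there is no comparably consistent scheme for WAV or AIFF.

    # Tagging sketch. Assumes the third-party `mutagen` library and a
    # hypothetical file path; Vorbis comments are plain key/value text fields.
    from mutagen.flac import FLAC

    audio = FLAC("album/track01.flac")
    audio["artist"] = "Example Artist"
    audio["album"] = "Example Album"
    audio["tracknumber"] = "1"
    audio.save()
    print(audio.pprint())          # any FLAC-aware tagger will show the same fields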
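Finally, the null test from post 5. Inverting one capture and summing it with the other is the same as subtracting them; what remains is the difference signal, and its RMS level tells you how far down any residue sits. A minimal sketch, assuming two 16-bit PCM captures in the same format (file names hypothetical):

    # Null-test sketch: subtract one capture from the other and report the
    # residual level in dBFS. Assumes 16-bit PCM WAVs read on a little-endian
    # machine; file names are hypothetical.
    import array
    import math
    import wave

    def samples(path):
        with wave.open(path, "rb") as w:
            assert w.getsampwidth() == 2, "sketch assumes 16-bit PCM"
            return array.array("h", w.readframes(w.getnframes()))

    a = samples("flac_playback_capture.wav")
    b = samples("wav_playback_capture.wav")
    n = min(len(a), len(b))

    residual = [a[i] - b[i] for i in range(n)]        # the "difference signal"
    rms = math.sqrt(sum(x * x for x in residual) / n)
    level = 20 * math.log10(rms / 32768) if rms else float("-inf")
    print(f"residual RMS level: {level:.1f} dBFS")    # dither-only residue sits far below the music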