
Why I never got that perfect null in difference testing part one


esldude

I set out to see what the differences between expensive analog interconnects and cheap ones were. My method was simple in essence: send a signal to a DAC, pass the analog output over an interconnect, and record it digitally with an ADC. Then repeat with a different cable, take the two digital files, and subtract one from the other in software. There was a long thread in the general forum about my attempts. I learned some things along the way and eventually could get consistent results, but not perfection.

 

An absolutely perfect result would indicate no difference whatsoever: an infinitely deep null with nothing left in the difference file. While that is possible in the digital realm, it isn't possible in the analog world. At a minimum you have thermal noise, so you are already one step from total perfection in that sense. However, if you difference two files and nothing is left above the thermal noise floor, that is practical perfection. The thermal noise limit is defined mainly by the impedance of your electronics and the bandwidth. It is simple to calculate, for example with this online calculator:

 

http://www.sengpielaudio.com/calculator-noise.htm

 

Notice that bandwidth and impedance are the main factors, with temperature having only a minor effect.
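The formula behind that calculator is the Johnson-Nyquist equation. Here is a minimal Python sketch of the same calculation; the 100 ohm impedance, 20 kHz bandwidth, and 2 V full-scale figures are illustrative assumptions, not measurements from my setup:

```python
import math

# Johnson-Nyquist thermal noise: Vrms = sqrt(4 * k * T * R * bandwidth)
k = 1.380649e-23      # Boltzmann constant, J/K
T = 300.0             # temperature in kelvin (about 27 C)
R = 100.0             # source impedance in ohms (illustrative value)
bw = 20000.0          # audio bandwidth in Hz

v_rms = math.sqrt(4 * k * T * R * bw)
print(f"Thermal noise: {v_rms * 1e6:.3f} uV rms")

# Express it relative to an assumed 2 V rms full-scale output
full_scale = 2.0
print(f"Relative to {full_scale} V rms: {20 * math.log10(v_rms / full_scale):.1f} dB")
```

With those assumed numbers the noise works out to roughly 0.18 microvolts, around 140 dB below a 2 V signal, which is why impedance and bandwidth dominate the result.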

 

Along with thermal noise, most of the time you also run into flicker noise. Thermal noise is white in character, while flicker noise is pink. Real-world noise floors are typically a combination of the two. In my case, recording silence with a digital stream running, I got the following with my best equipment. You may notice a couple of idle tones sticking up a bit: one around 14.5 kHz and a related one at 7250 Hz. These are common in DACs for one reason or another. One FFT plot uses a linear frequency scale and the other uses a log frequency scale:

 

[recsilence1.png: FFT of recorded silence, linear frequency scale]

 

[recsilence2.png: FFT of recorded silence, log frequency scale]
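For anyone wanting to reproduce plots like these outside of Audacity, here is a rough sketch of one way to do it in Python; the file name, FFT length, and normalization are my own illustrative choices:

```python
import numpy as np
import matplotlib.pyplot as plt
import soundfile as sf  # assumes the silence capture was exported as a WAV file

# "recorded_silence.wav" is a placeholder name for the exported capture
data, rate = sf.read("recorded_silence.wav")
if data.ndim > 1:
    data = data[:, 0]          # analyze one channel

# Averaged FFT magnitude, roughly referenced to a full-scale sine
n = 65536
window = np.hanning(n)
segments = [data[i:i + n] * window for i in range(0, len(data) - n, n)]
spectra = [np.abs(np.fft.rfft(seg)) for seg in segments]
mag = np.mean(spectra, axis=0)
mag_db = 20 * np.log10(mag / (n / 4) + 1e-20)   # rough full-scale normalization
freqs = np.fft.rfftfreq(n, 1 / rate)

fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(8, 6))
ax1.plot(freqs, mag_db)                 # linear frequency axis
ax1.set_xlabel("Frequency (Hz)"); ax1.set_ylabel("dBFS")
ax2.semilogx(freqs[1:], mag_db[1:])     # log frequency axis
ax2.set_xlabel("Frequency (Hz)"); ax2.set_ylabel("dBFS")
plt.tight_layout()
plt.show()
```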

 

So with noise in the picture, the best obtainable null is one where the difference between two files leaves only this noise floor. Even amplified, such a difference file would produce nothing but the whoosh of noise. Without amplification, and lots of it, you will of course hear nothing at all over your speakers or headphones.
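The subtraction step itself is simple once the captures line up; a minimal sketch of it, assuming both recordings were exported as WAV files (the file names are placeholders):

```python
import numpy as np
import soundfile as sf

# Placeholder file names for two captures of the same signal
a, rate = sf.read("capture_cable_a.wav")
b, _ = sf.read("capture_cable_b.wav")

n = min(len(a), len(b))
diff = a[:n] - b[:n]            # the "null" residual

# How deep is the null? Report residual RMS relative to the first capture.
def rms(x):
    return np.sqrt(np.mean(np.square(x)))

print(f"Residual: {20 * np.log10(rms(diff) / rms(a[:n])):.1f} dB relative to capture A")

sf.write("difference.wav", diff, rate)
```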

 

But I never quite got that either. Even when running a piece of music or test tones through the same analog cable twice in a row, changing nothing, I never got two signals to null out to only this low-level noise signature. Instead I got something like this:

[Musicalimperfectnull.png: the imperfect null result]

 

This particular one is some music recorded first over an Audioquest Diamond X3 and then over the cheapest, junkiest interconnect I have, with the two files then subtracted from each other. The cable doesn't matter; the results looked the same even when recorded over the AQ twice in a row. The highest remaining signals, at -117 dB, sit right around -100 dB relative to the original signal level. Above 2 kHz nothing poked out of the noise floor, but that wasn't true from 50 Hz to 2 kHz. A practically perfect result above 2 kHz only.

 

Now play this signal over your speakers and even at maximum volume you hear nothing at all, which makes me think even these imperfect results are close enough that the cable is not significant. Still, amplify it by 60 dB or more and you hear the noisy hiss and, at times, fleeting bits of the original music. I am attaching it as a zipped MP3 if you wish to hear it, labeled "musical imperfect null". It has been amplified by 60 dB, without which you would hear only silence. That is about the gain of a phono stage plus the gain of a power amp combined.
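For reference, a 60 dB boost is just a gain factor of 10^(60/20) = 1000. A short sketch of that step, again with placeholder file names; the clip is there because a 1000x gain will overload anything that wasn't already far below full scale:

```python
import numpy as np
import soundfile as sf

diff, rate = sf.read("difference.wav")        # residual from the subtraction above
gain = 10 ** (60 / 20)                        # 60 dB = a factor of 1000
boosted = np.clip(diff * gain, -1.0, 1.0)     # guard against clipping the output file
sf.write("difference_plus_60dB.wav", boosted, rate)
```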

 

This kept bugging me, however. What was keeping me from getting just a noise floor? Even if the cables differed, using the same cable twice should have left nothing more than residual noise. Yet the music left traces in the result at some very low level. I think I have figured out why, and will go further into these issues in part two.

 

 

Find part two here:

http://www.computeraudiophile.com/blogs/esldude/why-i-never-got-perfect-null-difference-testing-part-two-257/

 

Just for reference, for those wondering about the equipment used: playback was from a Win7 netbook via the Audiophilleo 2 USB/SPDIF converter. I also sometimes used a laptop and a desktop with no difference in the results. Playback software was Foobar 2000 in WASAPI mode, which is bit perfect; bit perfection was also confirmed by comparing an SPDIF-recorded file to the original. The Audiophilleo 2 fed the input of a TACT RCS 2.0 room correction preamp set to bypass mode and unity gain. Everything was done at 24 bit and usually 44.1 kHz. The analog output of the TACT fed a TC Electronics Impact Twin; this FireWire device was clock locked to the TACT. The digital output after AD conversion in the Impact Twin was then fed into the digital input of an M-Audio 24/96 sound card. Audacity was the recording and analysis software.

[Attachment: Musicalimperfectnull 60dB amp.mp3.zip]
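For what it's worth, the bit-perfect check mentioned above amounts to lining up the SPDIF capture with the original file and confirming every sample matches. A rough sketch, assuming both are available as WAV files at the same bit depth and the capture merely has some leading digital silence (file names are placeholders):

```python
import numpy as np
import soundfile as sf

orig, _ = sf.read("original.wav", dtype="int32")      # placeholder names
cap, _ = sf.read("spdif_capture.wav", dtype="int32")

# Trim the capture's leading digital silence, then compare sample for sample
nonsilent = np.any(cap != 0, axis=1) if cap.ndim > 1 else (cap != 0)
start = np.argmax(nonsilent)
cap = cap[start:start + len(orig)]

if len(cap) == len(orig) and np.array_equal(cap, orig):
    print("Bit perfect: every sample matches")
else:
    print("Not bit perfect")
```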

4 Comments


Recommended Comments

esldude,

 

If I understand correctly, you began with a digital file, converted it to analog, used an analog interconnect to send the resulting analog signal to an ADC, then converted that signal back to digital. You then inverted the resulting file and added it to the original.

 

There is a big problem with your methodology from a mathematical standpoint: unless you have some way of ensuring you conduct your conversions in phase, there is no way the two files will null (short of the trivial case where everything is zero, or any constant-value case). If you begin with even a simple pure sine tone, sampling an analog signal of it can result in an effectively infinite number of different digital files (not strictly true, since we are working with finite word lengths and there are only so many possible combinations of words, but that is not important for this discussion). This is because changing the phase of a pure sine tone by even the smallest amount results in an entirely different digital file: relative to a given sampling point, the amplitude of the wave has changed because the wave shifted.

 

Inverting a copy of a sine wave that was sampled slightly out of phase from the original and then adding it back to the original results in a new sine wave having the same frequency but shifted in phase (halfway between the two, which is easy to verify mentally) and of always lower amplitude. Using a music file, this should manifest itself as a copy of the music (more or less, since we don't know how accurate the conversion devices are) but at very low amplitude, no matter what, especially if we are only a few samples off in timing. The phase shift (with the music file) has no impact on listening, by the way, and is imperceptible since everything shifted together.


Correction: the above should read "...a new sine wave having the same frequency but shifted in phase (halfway between the two plus 90 deg...)," which is a cosine wave shifted in phase halfway between the two.

 

Adding "phase" shifts the wave to the left or forward in time.
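For anyone who prefers to check this numerically rather than mentally, here is a tiny NumPy sketch of the identity being described; the test frequency and phase offset are arbitrary illustration values:

```python
import numpy as np

f = 1000.0                      # arbitrary test frequency, Hz
phi = 0.01                      # small phase offset in radians
t = np.linspace(0, 0.01, 48000, endpoint=False)

a = np.sin(2 * np.pi * f * t)            # original tone
b = np.sin(2 * np.pi * f * t + phi)      # same tone, slightly shifted
residual = a - b                         # invert-and-add

# Identity: sin(x) - sin(x + phi) = -2*sin(phi/2)*cos(x + phi/2),
# i.e. a cosine-like wave sitting halfway between the two in phase.
predicted = -2 * np.sin(phi / 2) * np.cos(2 * np.pi * f * t + phi / 2)
print("Max residual amplitude:", np.abs(residual).max())   # about 2*sin(phi/2)
print("Matches identity:", np.allclose(residual, predicted))
```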


Yes, you are correct, RobbieC.

 

I would point out one thing, which is that the samples are aligned to the individual sample; they aren't off by even one sample period. And yes, this minor shifting in time between two files doesn't mean either of the files will sound different because of it. They wouldn't. But I get that you were talking about larger shifts just to illustrate your point.

 

The point is that when comparing such files, this phase shifting dirties up the depth of the null while not representing any actual difference in the waveforms of the signal itself.

 

As I said elsewhere, I feel a bit dim-witted for not having seen what was going on, just as you have explained it. But I value people pointing such things out. So thanks.


A worthy experiment, and I would be fascinated by the results between different cables. If I had to bet, it would be that you see no difference between cables except for, depending on the output impedance of your source, extremely minor phase and amplitude effects at high frequencies, strictly related to cable capacitance and not directly related to price. I suppose it is also conceivable that some cables might be more susceptible to hum from nearby mains-powered equipment. What you are really looking for, I suppose, is distortion in the form of spurious harmonics related to what? Micro-diode effects? Impurities in the conductor? Etc. I look forward to seeing the results.

 

Here's my view on how to resolve the timing issue:

As has been pointed out above, you are effectively timing your recording randomly to within, perhaps, one sample period. What is stored in the file, however, has not been passed through a reconstruction filter such as a DAC would provide on playback. Such a filter turns the discrete samples into a continuous analogue waveform, and it 'resonates' at the sample rate such that the waveform can actually overshoot and exceed the amplitude of the stored samples, producing smooth peaks between discrete samples. What you need to do, I think, is to software-upsample your recorded waveforms to some arbitrarily high sample rate (using a software sinc filter), then line up the two waveforms (and optionally downsample again) and subtract them.
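In code, that approach might look roughly like the following sketch; the 16x resample ratio, file names, and one-second search window are placeholder choices, and scipy's polyphase resampler stands in for the "software sinc filter":

```python
import numpy as np
import soundfile as sf
from scipy.signal import resample_poly, correlate

a, rate = sf.read("capture_1.wav")        # placeholder file names
b, _ = sf.read("capture_2.wav")
if a.ndim > 1:
    a = a[:, 0]                           # use one channel for simplicity
if b.ndim > 1:
    b = b[:, 0]

up = 16                                   # upsample 16x with a sinc-based polyphase filter
a_hi = resample_poly(a, up, 1)
b_hi = resample_poly(b, up, 1)

# Estimate the (sub-sample) offset from a cross-correlation of the first second
chunk = slice(0, rate * up)
lag = np.argmax(correlate(a_hi[chunk], b_hi[chunk], mode="full")) - (rate * up - 1)

# Shift one capture to line up with the other, then subtract
if lag >= 0:
    n = min(len(a_hi) - lag, len(b_hi))
    diff_hi = a_hi[lag:lag + n] - b_hi[:n]
else:
    n = min(len(a_hi), len(b_hi) + lag)
    diff_hi = a_hi[:n] - b_hi[-lag:-lag + n]

diff = resample_poly(diff_hi, 1, up)      # optionally back down to the original rate
sf.write("aligned_difference.wav", diff, rate)
```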



