
Amir misses the point again: Looks for the music in the noise.


Recommended Posts

1 minute ago, mmerrill99 said:

I don't see you testing anything - just denying every point I make without any supporting logic

FFTs, for example - I made some points about what we typically see as shortcomings in FFT testing. You reply that it's not an FFT issue.

It's fine being a devil's advocate, but when all you bring to the debate is denial of everything that is said, I'm not really bothered

 

I'll ask you again & let's see if you can get onto a more positive side of this - do you think Amir's FFT shows that the ISO Regen does nothing?

 

If not then how would you go about testing it?

 

What about testing the conjecture of current draw causing noise - how would you test this?

 

Your argument that the distribution of bits in a signal is random is ill-informed & hand-waving, as far as I'm concerned

 

Ok, let's focus on testing. Amir's test showed no noise related to the USB signal in his FFT analysis. You made the point that the FFT may be faulty. You said the noise is dynamically changing and therefore cannot be captured by an FFT. I disagree. All signal content within the capture interval is represented in the FFT, including transient spikes, as long as it is below half the sampling frequency (the Nyquist limit).
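A minimal sketch of that point (numpy assumed; nothing to do with Amir's actual setup): a single-sample spike inside the capture window still appears in the FFT, spread across all bins rather than vanishing.

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 1000 * t)   # steady 1 kHz test tone
x[24000] += 0.01                         # one-sample transient mid-capture

db = 20 * np.log10(np.abs(np.fft.rfft(x)) / len(x) + 1e-300)

# The impulse spreads roughly equal energy into every bin: the floor sits
# around -134 dBFS instead of at numerical precision, so the FFT did
# capture the transient.
print("floor near 10 kHz: %.1f dBFS" % db[10000])
```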

 

How Amir's FFT is configured, whether it's set to do averaging, etc., I don't really know. What I do know is that it's not hard to run an FFT analysis of a WAV file captured from the output of a DAC. With a hi-res ADC and proper settings, it should be easy enough to detect noise spikes in the output of such an FFT, even if they change in frequency. If you insist that the input signal should be complex, then we can feed 100 or more sine waves at different frequencies into the DAC as the test signal.
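A sketch of that test (numpy/scipy assumed; 'dac_output.wav' is a hypothetical capture from a hi-res ADC, not an existing file): generate a 100-sine stimulus, then FFT the captured output and look for energy at non-stimulus frequencies.

```python
import numpy as np
from scipy.io import wavfile

fs = 96000
t = np.arange(10 * fs) / fs                       # 10 s -> 0.1 Hz FFT bins
freqs = np.round(np.linspace(200, 20000, 100))    # 100 distinct sine waves
stimulus = sum(np.sin(2 * np.pi * f * t) for f in freqs) / 100
wavfile.write('stimulus.wav', fs, (stimulus * 32767).astype(np.int16))

# After playing stimulus.wav through the DAC and recording the output:
rate, captured = wavfile.read('dac_output.wav')   # hypothetical capture
spectrum = np.abs(np.fft.rfft(captured / 32768.0)) / len(captured)
db = 20 * np.log10(spectrum + 1e-12)
# Spikes at frequencies other than `freqs` are noise/distortion products,
# even ones that change frequency from capture to capture.
```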

 

Regardless of how noise is generated, it must affect the analog output to be relevant, and that part should be measurable.
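One hedged way to act on that is a null test: capture the analog output twice, with and without the device in the chain, subtract the aligned captures, and measure the residual. A rough skeleton, assuming numpy/scipy and hypothetical capture files (a real version also needs clock-drift correction between captures, which this sketch omits):

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate

_, a = wavfile.read('capture_without_regen.wav')  # hypothetical files
_, b = wavfile.read('capture_with_regen.wav')
a, b = a / 32768.0, b / 32768.0

# Crude sample alignment via cross-correlation.
lag = np.argmax(correlate(a, b, mode='full')) - (len(b) - 1)
b = np.roll(b, lag)
n = min(len(a), len(b))
residual = a[:n] - b[:n]

# If the device changes the analog output, the residual rises above the
# measurement floor.
print("residual: %.1f dBFS RMS" % (20 * np.log10(np.std(residual) + 1e-12)))
```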

 

Link to comment
4 minutes ago, mmerrill99 said:

OK, I'll ask you again - do you think Amir's FFT shows that the ISO Regen does nothing?

 

Answering this question will basically cut through all the crap, so please provide this answer

 

Amir's FFT shows no effect of the ISO Regen on noise levels (except when using a noisy 5V source with a sensitive DAC). If you are correct, then Amir didn't measure all there is to measure, so let's come up with a better way to detect this noise.

 

Link to comment
2 minutes ago, pkane2001 said:

 

Amir's FFT shows no effect of the ISO Regen on noise levels (except when using a noisy 5V source with a sensitive DAC). If you are correct, then Amir didn't measure all there is to measure, so let's come up with a better way to detect this noise.

 

I'm asking you for the third time: do you think Amir's FFT shows that the ISO Regen does nothing?

Your avoidance of answering this simple question is telling

Link to comment
7 minutes ago, mmerrill99 said:

I'm asking you for the third time: do you think Amir's FFT shows that the ISO Regen does nothing?

Your avoidance of answering this simple question is telling

 

I'm not sure what you are asking. I said it cleans up the 5V supply. I said that beyond that, the FFT shows no other effects. Is this really not clear?

Link to comment
3 minutes ago, The Computer Audiophile said:

It's strange, he said the Regen does nothing, except when it does something. 

Yes, but it's argued that any DAC which uses the 5V USB supply for power & doesn't sufficiently clean & filter it is badly designed - I would agree with this.

 

But regarding his first FFT, which showed no such improvement - what I'm asking Paul is: does it "prove" the ISO Regen does nothing?

Link to comment
3 minutes ago, pkane2001 said:

 

I'm not sure what you are asking. I said it cleans up the 5V supply. I said that beyond that, the FFT shows no other effects. Is this really not clear?

Sorry, I should have said - other than the 5V clean-up, do you believe Amir's FFT 'proves' that the ISO Regen does nothing?

Link to comment
5 minutes ago, mmerrill99 said:

Sorry, I should have said - other than the 5V clean-up, do you believe Amir's FFT 'proves' that the ISO Regen does nothing?

 

Are you perhaps a lawyer? I repeat: his FFT shows no effect. I don't take a single measurement, made using unknown settings, as proof of anything - just as a data point. This is why I'm attempting to talk to you about further testing, but instead... I get this interrogation??

Link to comment
1 hour ago, The Computer Audiophile said:

 

Some products require different conditions and as a result perform awesome. Spectral amps require MIT cables. The end result is terrific. 

Then Spectral is an example of poor engineering at the expensive end of the market.

Well-engineered products don't need EQ networks to correct frequency-response errors.

Link to comment
19 minutes ago, pkane2001 said:

 

Are you perhaps a lawyer? I repeat: his FFT shows no effect. I don't take a single measurement, made using unknown settings, as proof of anything - just as a data point. This is why I'm attempting to talk to you about further testing, but instead... I get this interrogation??

It shows me your intent & whether you are disingenuously claiming "so let's come up with a better way to detect this noise" or whether this is just a ploy & what you are really doing is objecting to everything without any logic, just lots of hand-waving

 

You already stated that "What I do know is that it's not hard to run an FFT analysis of a WAV file captured from the output of a DAC. With a hi-res ADC and proper settings, it should be easy enough to detect noise spikes in the output of such an FFT, even if they change in frequency."

 

So, either you think Amir doesn't know how to use his Audio Precision signal analyzer properly & his FFT is flawed, or you believe his FFT shows all there is & the ISO Regen does nothing other than 5V cleansing?

 

I believe that no matter what is said you will object, because you believe that this FFT is correct & you are trying to argue for this while pretending to "come up with a better way to detect this noise"

 

Not interested in your games when you are being duplicitous - genuine inquisitiveness & an open-minded approach to "detect this noise" I would be interested in discussing, but you show no inclination towards this

Link to comment
8 minutes ago, mmerrill99 said:

It shows me your intent & whether you are disingenuously claiming "so let's come up with a better way to detect this noise" or whether this is just a ploy [...]

 

You're wrong. I know a bit about FFTs; I've been using them for over 15 years in commercial software I developed. But I don't think you're interested in hearing any of this.

 

Great talk... not! O.o  Have fun with your conjectures.

Link to comment
Just now, Jud said:

 

I’d urge a little caution before leaping to the conclusion Demian Martin and Keith Johnson didn’t know what they were doing.

It's possible they knew exactly what they were doing and intentionally designed a substandard device.

Link to comment

Just to explain multitone test signals a bit. We are all probably familiar with two-tone tests - they show the IMD (intermodulation distortion) products. In other words, an ideal DAC would have only the two tones on its analog output, but all DACs show other spurious signals along with the two tones.

 

A multitone test signal goes further & uses maybe 30 tones or more, arranged so that the IMD products fall between the test-tone spikes & aren't masked by the test tones themselves.
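One common placement trick for this (a minimal sketch, numpy assumed; not necessarily the scheme used in the measurements linked below): put every tone on an odd FFT bin. Sums and differences of odd numbers are even, so second-order IMD lands on even bins, in the gaps between the tones.

```python
import numpy as np

fs, n = 48000, 48000            # one-second capture -> 1 Hz FFT bins
t = np.arange(n) / fs

# ~30 log-spaced tone frequencies, each nudged onto an odd bin number.
bins = np.unique(np.round(np.logspace(np.log10(101), np.log10(19999), 30)))
bins = bins + (bins % 2 == 0)   # even bin -> next odd bin

multitone = sum(np.sin(2 * np.pi * k * t) for k in bins) / len(bins)

# In an FFT of the DAC's output, anything on an even bin (or any odd bin
# not in `bins`) is distortion or noise, not stimulus.
```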


 

A source of multi-tone measurements of DACs that I posted before is here

So here's an example of a 10-tone measurement on a Hifiman

[Image: FFT of a 10-tone measurement on a Hifiman DAC]

 

What you see in this plot is a lot of lower-level signal (inter-tone products) created between the tone spikes - beginning to form a 'grass'

Now if this were a music signal, there would be many more signal tones & many more inter-tone products between the main frequency spikes, forming a much denser 'grass'. As the music signal dynamically changed, this grass would fluctuate in accordance with the varying signal - I don't know what's so difficult to understand about this.
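If the grass really tracks the program material, a sequence of short FFTs over time (a spectrogram) should show it moving. A toy sketch with scipy, using a stimulus whose tones change halfway through and a mild nonlinearity standing in for the DAC's distortion:

```python
import numpy as np
from scipy.signal import spectrogram

fs = 48000
t = np.arange(2 * fs) / fs

# Two tones for the first second, two different tones for the second.
x = np.where(t < 1.0,
             np.sin(2 * np.pi * 997 * t) + np.sin(2 * np.pi * 1531 * t),
             np.sin(2 * np.pi * 1201 * t) + np.sin(2 * np.pi * 2749 * t))
x = np.tanh(1.5 * x)             # stand-in nonlinearity -> IMD 'grass'

f, times, sxx = spectrogram(x, fs, nperseg=8192)
# Each column of sxx is one short FFT; the IMD products sit at different
# frequencies in the two halves, i.e. the 'grass' moves with the signal.
```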

 

You will find many more such multitone FFTs of DACs here (it's in Russian, but machine translation is fine)

 

For instance, two more plots to show that the 'grass' differs between DACs

[Image: multitone FFT of the first DAC]

[Image: multitone FFT of the second DAC]

It makes interesting reading, but I don't know how well it correlates with audibility.

 

The full set of DAC test measurements (including 30 multitone tests at various signal levels) for many DACs & more is here: http://reference-audio-analyzer.pro/en/report.php

 

 

Noise modulation, anybody?

Link to comment
Just now, mansr said:

It's possible they knew exactly what they were doing and intentionally designed a substandard device.

 

Sure. It's also possible they know more about designing audio equipment than you do, which I don't say to be nasty, just to point out the possibility. Mr. Martin used to post here, and he was quite objective and enlightening while managing to keep a fairly low profile. I miss his contributions greatly.

One never knows, do one? - Fats Waller

The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and true science. - Einstein

Computer, Audirvana -> optical Ethernet to Fitlet3 -> Fibbr Alpha Optical USB -> iFi NEO iDSD DAC -> Apollon Audio 1ET400A Mini (Purifi based) -> Vandersteen 3A Signature.

Link to comment

Haha, I see Amir is trying to answer my points by proxy on his forum here

 

Maybe I should answer by proxy - just this once?

 

He states that for MP3 pre-echo distortion, when "detecting fidelity of lossy compression, we do not use measurements." I wonder why, when a measurement can unequivocally show such issues (although he doesn't show any measurements of this) - his FFT shows all the frequency differences between the WAV & the 64 kbit MP3, but it shows nothing about pre-echo.

 

He follows up in another post with more misinformation about DAC glitches, not knowing what they are or even reading the link I gave in the post of mine he quoted. It's amusing, really.

 

Such is the lack of knowledge, the level of misinformation & the rush to judgement that passes on Audio SCIENCE Review O.o

 

 

Link to comment

Just to show Amir really doesn't know his stuff - I mentioned above the FFT he posted of a recording of a castanet with its MP3 version overlaid on it. Well, here it is - the red is the original castanet recording & the yellow is the MP3

[Image: FFT of the original castanet recording (red) overlaid with its MP3 version (yellow)]

He posted this in some confused effort to show that pre-echo in MP3 is visible on this FFT (& the implied conclusion that his FFTs would show any dynamic noise changes).

 

So we are shown an FFT of a castanet - a high-frequency transient - & he says this about it: "What is material is that at lower frequencies the two graphs do not at all look identical. The curves deviate as they should." So this transient has frequency elements below 100 Hz, at levels below -90 dB, & we are expected to believe that this is audible as the smearing of transients in pre-echo?

 

He ignores MP3's compression method, which removes elements of the sound that it deems psychoacoustically disposable. This FFT shows that compression as changes in the frequency content, not just the changes from pre-echo. What he is attempting to fool us (& maybe himself) with is the notion that the changes we see in this FFT are solely down to the pre-echo introduced by MP3.

 

He mentions that for this pre-echo "the distortion is much more visible in time domain", but doesn't present any time-domain plot.

 

These basic errors are rife in his post & represent the type of technical misinformation he continually engages in. Given his stated work role - "For a decade I managed the signal processing group at Microsoft" - this sort of misinformation, in what should be his area of expertise, is unforgivable & reeks of either great incompetence or huge disingenuousness.

Link to comment

From an anonymous source via Mercman:

 

Arrrrgh!

 

Spectrum analyzers and other test gear (oscilloscopes, network analyzers, etc.) have used averaging for decades.  The intent is to remove noise from the display so that little details can be more easily observed.  In the case of audio spectrum analysis, averaging lets you see itty-bitty distortion products that are often buried in the noise.  You know - the ones that certain folks insist can’t be heard.  (Maybe they can’t - some people are just more militant about saying so than others.)

 

In general, the way this is done is by combining the results of several sweeps or equivalent FFT captures and mathematically averaging the results.  Hence the name.  The idea is that repetitive signals like sine waves will remain constant over those sweeps but noise, being random, will not be.  The noise will shrink to its average value and the desired signal will stick out.  The more sweeps or FFTs you average, the greater the noise rejection.  This works especially well on so-called “white” noise that is truly random.  The average function will display the average value of the noise floor if there is any shape to it.  You can see the shape of the 1/f noise sidebands caused by certain forms of phase noise, for example.
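A numeric sketch of that behaviour (numpy assumed; not the Audio Precision's exact algorithm): a fixed tone plus fresh random noise on each sweep. Averaging the magnitude spectra smooths the noise toward its mean while the tone bin is unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = n = 48000                   # 1 Hz bins; tone lands exactly on bin 1000
t = np.arange(n) / fs
tone = np.sin(2 * np.pi * 1000 * t)

sweeps = 64
avg = np.zeros(n // 2 + 1)
for _ in range(sweeps):
    x = tone + 0.01 * rng.standard_normal(n)   # new noise every sweep
    avg += np.abs(np.fft.rfft(x)) / sweeps

# avg[1000] is the same as in any single sweep; the bin-to-bin scatter of
# the noise floor shrinks roughly as 1/sqrt(sweeps).
```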

 

That’s all great, but it really is misleading in the overall scheme of things.

 

Why is that?

 

Using averaging tends to get rid of random events, like maybe computer generated garbage that isn’t coherent with the sound content.  That’s the point of averaging.  

 

You could have a pile of crap that's 30 dB above the system noise floor for one sweep, and if you average enough times over enough sweeps, its contribution to the averaged measurement shrinks to nothing because it only happened once. If the frequency content of a second pile of crap is different from the first and it only happens for a single sweep, it gets averaged out. And so on. Does that make sense? If the pile of crap is consistent over a lot of sweeps, it will get averaged to something that is displayed. Averaging pulls out what is consistent from sweep to sweep or FFT to FFT.

 

Actual playback of music and other sound isn’t averaged by your ears and brain!  The content changes constantly.  Isn’t that the point?  You hear those changes.  That’s not some secret psycho-acoustic consideration.  It’s just reason.

 

Try turning the Averaging function off.  Enable Max or Peak Hold (depends on the test instrument) and let the instrument capture sweeps for a few seconds.  That should capture most of the random events that take place over that time frame.  Now, report back.
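Roughly what that experiment looks like in code (a sketch, numpy assumed): a burst present in only 1 of 64 sweeps all but disappears from the average, while a max-hold across sweeps keeps it at full level.

```python
import numpy as np

rng = np.random.default_rng(1)
n, sweeps = 8192, 64
avg = np.zeros(n // 2 + 1)
peak = np.zeros(n // 2 + 1)

for i in range(sweeps):
    x = 0.001 * rng.standard_normal(n)        # steady noise floor
    if i == 0:                                # "pile of crap", one sweep only
        x[:256] += 0.1 * rng.standard_normal(256)
    mag = np.abs(np.fft.rfft(x))
    avg += mag / sweeps                       # averaging
    peak = np.maximum(peak, mag)              # Max/Peak Hold

# In avg, the burst's contribution is divided by 64 (~36 dB down);
# in peak, it is retained exactly as captured.
```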

 

Two notes:  

 

The argument that music is composed of sine waves and therefore this doesn’t apply is just wrong.  Music is composed of sine waves, but random and pseudo-random events can be broken down that way as well with an associated frequency spectrum.  This is a measurement consideration.

 

This problem is hardly unique to audio. Engineers and their buddies use different spectrum analyzers to measure the performance of various communications systems. Wireless and CATV are just two examples. The signals used in those services are complex modulation waveforms like QAM and OFDM. (Google those terms if you like.) Aside from looking at the spectral characteristics, these analyzers can also demodulate those complex waveforms and display the information as data constellations for each symbol transmitted. It's customary to average these constellations over a lot of symbols to reduce the effects of random noise. Just like Amir is doing.

There is a measurement of overall signal quality called MER. (Modulation Error Ratio - try Google again.) MER is kind of like SNR, except that it adds in the effects of some other signal degradations in addition to random noise. It's used as the benchmark for system performance. If you sit and watch the constellation display over time, you often see it explode (in a Star Wars kind of way) every so often in most systems. That indicates that a symbol - sort of equivalent to a sweep of the Audio Precision, except it's tied to the actual content - had some problem with it. You'd think that would affect the MER. It does, but not by much. If you completely lose one symbol out of thousands, the MER only goes down by a fraction of a dB. That's just the way averaging works. But if you try doing a Bit Error Rate (BER) test, you find that you lost bits. This despite what the MER would imply.
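The arithmetic behind that MER claim is easy to sanity-check (a toy QPSK-style sketch, numpy assumed; not any particular analyzer's definition): corrupt one symbol out of 10,000 and watch MER barely move even though bits were lost.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10000
ideal = (2 * rng.integers(0, 2, n) - 1) + 1j * (2 * rng.integers(0, 2, n) - 1)
rx = ideal + 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

def mer_db(tx, rx):
    # MER: total signal power over total error power, in dB.
    return 10 * np.log10(np.sum(np.abs(tx)**2) / np.sum(np.abs(rx - tx)**2))

print("clean MER:          %.2f dB" % mer_db(ideal, rx))
rx[0] = -ideal[0]                # one symbol fully inverted: its bits are lost
print("after 1 bad symbol: %.2f dB" % mer_db(ideal, rx))
# The drop is a fraction of a dB, yet a BER test would flag the lost bits.
```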

 

Steve Plaskin

Link to comment

What do folks make of the fact that Amir's FFT tests do pick up the dynamic noise products that do make it into the output at the HDMI ports (compared to the cleaner output of the other digital ports)? Do folks think that indicates that dynamic noise products can be measured via FFT?

 

Also, I am puzzled as to why @mmerrill99 and some others repeatedly seem to impugn Amir's motives (and in the case of mmerrill, to question @pkane2001's motives just for asking questions), rather than running their own tests. If you have a theory that a different, better test will turn up the deviations and noise products that the FFT tests haven't shown, why not put that theory into action?

Link to comment
48 minutes ago, Mercman said:

From an anonymous source via Mercman:

Arrrrgh!

Spectrum analyzers and other test gear (oscilloscopes, network analyzers, etc.) have used averaging for decades. [...]

 

 

Averaging will obscure any signal that is changing randomly and rapidly in frequency, true. That's why I said earlier that I don't know whether Amir is using averaging or not. I can turn off averaging in the common software tools I use, including free ones. The Audio Precision analyzer Amir is using can be set to average between 1 and 4096 samples, so averaging is easy to turn off.

 

Of course, the bigger point being made here was that FFTs are unable to capture such events, and that is patently untrue.

 

Link to comment
This topic is now closed to further replies.


