Computer Audiophile
Superdad

Amir misses the point again: Looks for the music in the noise.

Recommended Posts

1 minute ago, lmitche said:

I remember telling Amir that an FFT wouldn't cut it after his first bogus measurements of the Amber Regen. He either didn't get it, or only has a hammer and the whole world looks like a nail.

Or an agenda?

 

But if you look at the members posting on ASR, you see they all take these latest FFT measurements as showing all there is to be seen - not one of them questions what they are looking for & how to go about revealing/measuring it.

 

I find it ironic that the forum has "audio science" in the title :D


Hi tmtomh - All good questions, to which I can only make educated guesses. 

 

Based on the test with the Schiit DAC, we can say that something (noise or whatever) can make it through to the analog outputs of the DAC, and this is improved by the ISO Regen. Whether or not this is audible, I have no idea. 

 

Now we enter the realm of what it means to be properly designed. This will be debated until the world ends. 

 

I hate to say it, but Amir and Alex won't be exchanging holiday cards this year. Thus, the conclusion below:

 

23 minutes ago, tmtomh said:

 if a DAC does output something different with the ISO regen in the chain, then there's something deficient about the design of the DAC

 

is very skewed. Can we say this about all products that benefit from another product outside the product itself? What I mean is, if products like the ISO Regen, power conditioners, isolation for turntables, etc. have an effect on a product, should we conclude that the product is deficient? I lean toward no, but this is just me. If someone likes the sound of the Schiit DAC with a Regen more than other DACs similar in price to the total package, I really don't care, and I suspect most people don't either. 

 

This is an interesting topic and one that I'd like to see investigated. I think Amir's conclusions are pretty subjective and swayed by his disdain for many products and types of products, but his graphs are objective and enable others to reach their own conclusions. 

 

I remember talking with DAC manufacturers who were adamantly against putting isolation in their USB DACs because they saw other downsides to this. Or, they didn't want to power the USB chip with an internal PSU, instead opting for the 5V PSU of the incoming USB signal. We are talking about very high end equipment and very competent engineers who invented some of the tech.

 

I think your balanced approach to asking the questions is the only way we'll get somewhere and educate all of us. 

 

 

 

 

5 minutes ago, lmitche said:

I remember telling Amir that an FFT wouldn't cut it after his first bogus measurements of the Amber Regen. He either didn't get it, or only has a hammer and the whole world looks like a nail.

Even Esldude (Blumlein) had asked him to do some FFTs at different signal levels as a means of getting a rudimentary handle on noise modulations in DACs - he did so once, I think?

5 minutes ago, mmerrill99 said:

Or an agenda?

 

But if you look at the members posting on ASR, you see they all take these latest FFT measurements as showing all there is to be seen - not one of them questions what they are looking for & how to go about revealing/measuring it.

 

I find it ironic that the forum has "audio science" in the title :D

 

FFT analysis of analog output has been used for many years in the industry.  It's not surprising that it is the default for most measurements of FR, jitter, THD, IMD. 

 

What I believe you are conjecturing is that there's a component created by the USB connection that introduces quickly (randomly?) changing frequencies that are invisible to standard FFT analysis. Can you please elaborate as to why FFT can't capture this component? 

 

 

tmtomh   
24 minutes ago, mmerrill99 said:

Well that's what we might try to discuss here? I asked for any suggestions in my post

But the first step in trying to find ways of measuring this dynamic noise is to recognize what doesn't work, & FFTs like the ones Amir has done don't work for measuring this.

 

It's not going to be easy, I would imagine:

a) Because differentiating signal from noise on a DAC's output is not easy. DiffMaker has been used before to do this input vs output comparison, but I don't know if it's sufficiently sensitive - it seems like very flaky software to me

b) we don't know at what amplitude this modulating noise will be seen 

 

 

Thank you for your quick reply.

 

I agree with you that it will not be easy to design such a test. And this is what I was kind of afraid of: If you're talking about dynamic noise, I assume you're also talking about noise that does not manifest itself with test tones but rather with real, complex musical information passing through the system - yes?

 

If my assumption is correct (and again, please tell me if it's not), then noise is likely to be extremely hard to measure because of the dynamism and complexity of the musical signal. I guess one way to deal with that would be to digitally capture the output of the DAC repeatedly and see if we could time-align the captures and then try to null them out to see what differences remained. But of course we'd first have to see how closely various trials matched with the exact same configuration - in other words, a digital capture of an analogue source is likely to be very similar over repeated trials, but likely not actually identical. 
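A minimal numpy sketch of this align-and-null idea, on synthetic data (the 440 Hz tone, the noise level, and the 17-sample offset between captures are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two simulated captures of the same analogue output, each with its own
# low-level noise and a small, unknown time offset between them.
n = 4096
signal = np.sin(2 * np.pi * 440 * np.arange(n) / 44100)
offset = 17  # the misalignment we will have to estimate
cap_a = signal + 1e-4 * rng.standard_normal(n)
cap_b = np.roll(signal, offset) + 1e-4 * rng.standard_normal(n)

# Estimate the offset from the peak of the circular cross-correlation
# (computed via FFT), then align and subtract.
xcorr = np.fft.ifft(np.fft.fft(cap_a) * np.conj(np.fft.fft(cap_b))).real
shift = (n - int(np.argmax(xcorr))) % n
residual = cap_a - np.roll(cap_b, -shift)
```

Whatever survives the subtraction is the capture-to-capture variation: the baseline that any real difference made by an inserted device would have to exceed.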

 

So we'd have to see if we could determine a baseline level of variation in the signal across multiple trials/captures. Then we could do the same thing with the ISO Regen (or some similar device) inserted into the chain, and see if it produced a difference greater than the normal variation from capture to capture.

 

I agree this would be difficult to do. But my larger concern is that I don't see how such a test would really settle anything, because as @The Computer Audiophile says (or at least strongly implies) above, people still could argue that any differences were too small to be audible; or conversely if no significant differences were found with and without the ISO Regen, then other folks would argue that the differences were simply too small, or of the wrong kind, to be detected by the equipment or the test protocol.

 

Of course if people want to try out equipment they should. And if they consistently hear a difference with it, then that increases their musical enjoyment and I am glad for that. But that said, if one posits a difference but that difference cannot actually be tested because (A) the difference is too small to be detected by available equipment, or (B) we have no hypothesis for what exactly to test for or look for - then in that case I remain a skeptic. Not an etched-in-stone skeptic, but a skeptic pending new information or insight.


Finally, I would be very interested in a response to @pkane2001's question about why exactly the FFT measurement is inadequate. Thanks!

2 minutes ago, pkane2001 said:

 

FFT analysis of analog output has been used for many years in the industry.  It's not surprising that it is the default for most measurements of FR, jitter, THD, IMD. 

 

What I believe you are conjecturing is that there's a component created by the USB connection that introduces quickly (randomly?) changing frequencies that are invisible to standard FFT analysis. Can you please elaborate as to why FFT can't capture this component? 

 

 

OK, in simplistic terms, FFTs perform their 'magic' by mathematically doing many averages which have the nice effect of amplifying any signal which occurs at the same frequency - these will appear as spikes on the FFT.

 

As a result of this averaging, the background 'Noise' is apparently reduced to what is seen as the 'grass' on an FFT from which the spikes emerge.

 

I've seen this described as a long-exposure photograph - the stuff that stays stationary is clear & bright in the photo, the stuff that moves around is fuzzy & dim

 

So if this noise is continually changing, how do you think it will be seen on an FFT? (an FFT is just a short sample of the signal) - It will just show as 'grass' on the FFT

 

Now Amir's FFTs are of a single tone - 11 kHz or whatever - so there's no modulation of noise as the signal is not a dynamic one - it's not changing

 

What esldude asked Amir to do was to take FFTs for different amplitudes of this 11 kHz tone to see if there are any shifts in the noise floor. Now this is a very basic test which in all likelihood won't show the effects of the noise generated by the DAC processing a dynamically changing signal
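For what it's worth, the "long exposure" argument is easy to illustrate numerically. This is a toy sketch with invented numbers (a 3 kHz tone, 64 averaged blocks, one 10-sample transient at the same peak level as the tone), not a model of any real DAC:

```python
import numpy as np

rng = np.random.default_rng(1)
fs, nfft, nseg = 48000, 1024, 64
n = nfft * nseg
t = np.arange(n) / fs

# Steady test tone on an exact FFT bin, a low analogue noise floor,
# and a single brief transient with the same peak level as the tone.
x = 0.1 * np.sin(2 * np.pi * 3000 * t) + 1e-4 * rng.standard_normal(n)
x[5000:5010] += 0.1

# Average the magnitude spectra of consecutive blocks, as an averaging
# spectrum analyser effectively does ("the long exposure").
segs = x.reshape(nseg, nfft)
spectra = np.abs(np.fft.rfft(segs, axis=1))
avg = spectra.mean(axis=0)

tone_bin = 3000 * nfft // fs   # = 64: the steady tone survives averaging intact
low_bin = 10                   # a bin where the transient's energy lands
# In the one block that contains it, the transient stands well above the
# grass; in the 64-block average it is diluted by roughly the number of
# blocks averaged.
dilution = spectra[5000 // nfft, low_bin] / avg[low_bin]
```

The steady tone's bin is unchanged by averaging, while the one-off transient is knocked down by a factor close to the block count - which is the photographic analogy in numbers.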

2 minutes ago, tmtomh said:

 

I agree with you that it will not be easy to design such a test. And this is what I was kind of afraid of: If you're talking about dynamic noise, I assume you're also talking about noise that does not manifest itself with test tones but rather with real, complex musical information passing through the system - yes?

Well the noise is conjectured to be the result of the current draw that is dynamically changing based on the dynamically changing signal being processed.

So, yes!

 

5 minutes ago, tmtomh said:

I agree this would be difficult to do. But my larger concern is that I don't see how such a test would really settle anything, because as @The Computer Audiophile says (or at least strongly implies) above, people still could argue that any differences were too small to be audible; or conversely if no significant differences were found with and without the ISO Regen, then other folks would argue that the differences were simply too small, or of the wrong kind, to be detected by the equipment or the test protocol.

Ok, this is the other problem encountered - assumptions about what's audible & what's not are based on pretty old audiology tests which tended to use clicks & tones. Remember, I'm not talking about stuff that is of itself audible but rather stuff whose effects are audible - just as in the video I posted of pre-echo - the pre-echo isn't audible but its effects certainly are.

 

So again, dismissing measurements based on such audibility assumptions is a red herring, IMO.

 

And remember, the types of improvement being reported for these USB devices are all about effects on soundstage, clarity, solidity, etc. - i.e. the effect on the playback sound, not the removal of some audible noise or discordant sound that was there before using the USB device. 

25 minutes ago, mmerrill99 said:

OK, in simplistic terms, FFTs perform their 'magic' by mathematically doing many averages which have the nice effect of amplifying any signal which occurs at the same frequency - these will appear as spikes on the FFT.

 

As a result of this averaging, the background 'Noise' is apparently reduced to what is seen as the 'grass' on an FFT from which the spikes emerge.

 

I've seen this described as a long-exposure photograph - the stuff that stays stationary is clear & bright in the photo, the stuff that moves around is fuzzy & dim

 

So if this noise is continually changing, how do you think it will be seen on an FFT? (an FFT is just a short sample of the signal) - It will just show as 'grass' on the FFT

 

Now Amir's FFTs are of a single tone - 11 kHz or whatever - so there's no modulation of noise as the signal is not a dynamic one - it's not changing

 

What esldude asked Amir to do was to take FFTs for different amplitudes of this 11 kHz tone to see if there are any shifts in the noise floor. Now this is a very basic test which in all likelihood won't show the effects of the noise generated by the DAC processing a dynamically changing signal

 

FFT is just a frequency-domain representation of time-domain data. Done correctly, FFT followed by an inverse FFT should produce exactly the original data. Nothing should be lost, including random spikes.
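This round-trip claim can be checked in a couple of lines of numpy (the data and the spike size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
# Any data at all, including a sharp aperiodic spike.
x = rng.standard_normal(1024)
x[500] += 50.0

# Forward FFT followed by inverse FFT reproduces the data to machine
# precision: the transform itself loses nothing, spikes included.
err = np.max(np.abs(np.fft.ifft(np.fft.fft(x)).real - x))
```

Any information loss therefore happens in how spectra are averaged and displayed, not in the transform.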

 

I get your point that if the FFT is performed over a large enough time period, very quick spikes that are aperiodic may disappear on the display. Wouldn't the solution, then, be to analyze as small a time window as possible? And, perhaps, look at the actual data rather than at the very compressed, low-res screen display of the spectrum? 

 

I'm not sure that very expensive equipment is required to do this type of analysis. Record a sinewave from the output of a DAC using a high-quality, hi-res ADC, and then use a simple FFT routine to analyze very short time frames by reading portions of the recorded file. Should be easy enough to prove or disprove the existence of these spikes.

 

Or are you saying that these spikes are below noise floor? 
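A sketch of the short-window scan proposed above, again with invented numbers (a 1 kHz test tone, a brief spike at 0.01 of full scale, 256-sample Hann windows - none of this is from a real measurement):

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 96000
n = 2 * fs                                    # two seconds of "capture"
t = np.arange(n) / fs
x = 0.5 * np.sin(2 * np.pi * 1000 * t) + 1e-5 * rng.standard_normal(n)
x[123456:123466] += 0.01                      # a brief low-level spike

# Scan the capture in short windows and look at each window's residual
# spectrum after discarding DC through the test tone's leakage region.
win = 256
frames = x[: n - n % win].reshape(-1, win)
spectra = np.abs(np.fft.rfft(frames * np.hanning(win), axis=1))
tone_bin = round(1000 * win / fs)             # the tone sits near bin 3
spectra[:, : tone_bin + 4] = 0.0
residual_energy = spectra.sum(axis=1)

suspect = int(np.argmax(residual_energy))     # the frame holding the spike
```

With short windows the aperiodic spike is not averaged away - it dominates the residual energy of exactly one frame, which pinpoints where in time it occurred.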

54 minutes ago, pkane2001 said:

 

FFT is just a frequency-domain representation of time-domain data. Done correctly, FFT followed by an inverse FFT should produce exactly the original data. Nothing should be lost, including random spikes.

 

I get your point that if the FFT is performed over a large enough time period, very quick spikes that are aperiodic may disappear on the display. Wouldn't the solution, then, be to analyze as small a time window as possible? And, perhaps, look at the actual data rather than at the very compressed, low-res screen display of the spectrum? 

 

I'm not sure that very expensive equipment is required to do this type of analysis. Record a sinewave from the output of a DAC using a high-quality, hi-res ADC, and then use a simple FFT routine to analyze very short time frames by reading portions of the recorded file. Should be easy enough to prove or disprove the existence of these spikes.

 

Or are you saying that these spikes are below noise floor? 

 

I believe there are a number of underlying problems with FFT:

- the test signal used is usually a single tone or two tones, i.e. it isn't a dynamically changing signal - exactly the opposite of what we want in order to test the hypothesis. Look at Amir's testing using FFTs

- these tones don't exercise the DAC's circuitry in the same way as music - the tones spend hardly any time at the crossover point of the signal, whereas music spends most of its time hovering around this crossover point

- FFTs typically use a short sample of the signal & assume that it repeats to infinity - one can see the pitfalls in this!

 

Multitone signals (30 or more frequencies) have been used to match the music signal more closely, & some progress has been made with this
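A sketch of generating such a multitone stimulus (the 30 log-spaced, exact-bin frequencies and random phases are illustrative choices, not a standard):

```python
import numpy as np

rng = np.random.default_rng(4)
fs, n = 48000, 1 << 16   # one 65536-sample analysis block

# 30 tones on exact FFT bins, roughly log-spaced from 50 Hz to 20 kHz,
# with randomised phases to keep the crest factor reasonable - a closer
# stand-in for music than a single test tone.
bins = (np.geomspace(50, 20000, 30) * n / fs).astype(int)
t = np.arange(n)
phases = rng.uniform(0, 2 * np.pi, bins.size)
x = sum(np.cos(2 * np.pi * b * t / n + p) for b, p in zip(bins, phases))
x /= np.max(np.abs(x))   # normalise to full scale

# Every bin that is not one of the 30 tones should be empty; anything a
# DAC under test adds in those empty bins is distortion or noise it made.
spectrum = np.abs(np.fft.rfft(x))
leak = np.max(np.delete(spectrum, bins))
```

Putting every tone on an exact bin means the stimulus itself contributes nothing to the "empty" bins, so any energy appearing there after conversion is attributable to the device.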

41 minutes ago, lmitche said:

Here is an overly simple, but useful, explanation of FFTs and comparison with a newer and potentially useful analysis tool.

 

https://georgemdallas.wordpress.com/2014/05/14/wavelets-4-dummies-signal-processing-fourier-transforms-and-heisenberg/

 

Wavelet analysis is not very new. If an FFT is done by running a fixed-size window function over the signal, wavelets do something similar, except using multiple different window sizes applied to the same data, resulting in a multi-scale analysis. This can be very useful for extracting large- or small-scale features from the signal.

32 minutes ago, mmerrill99 said:

 

I believe there are a number of underlying problems with FFT:

- the test signal used is usually a single tone or two tones, i.e. it isn't a dynamically changing signal - exactly the opposite of what we want in order to test the hypothesis. Look at Amir's testing using FFTs

- these tones don't exercise the DAC's circuitry in the same way as music - the tones spend hardly any time at the crossover point of the signal, whereas music spends most of its time hovering around this crossover point

- FFTs typically use a short sample of the signal & assume that it repeats to infinity - one can see the pitfalls in this!

 

Multitone signals (30 or more frequencies) have been used to match the music signal more closely, & some progress has been made with this

 

Except for your last point, these are not problems with FFT, but rather with testing methodology. Speaking of which, why would USB-related noise increase with a more complicated signal? After all, data is still a stream of 1's and 0's when going over USB, regardless of what signal is encoded in it.

 

10 minutes ago, pkane2001 said:

 

Except for your last point, these are not problems with FFT, but rather with testing methodology. Speaking of which, why would USB-related noise increase with a more complicated signal? After all, data is still a stream of 1's and 0's when going over USB, regardless of what signal is encoded in it.

 

Right but are they not also issues with FFT?

Just because they are problems in other tests as well, doesn't give FFTs a free pass! 

The problem is that many interpret what they see on an FFT as fully defining the signal & are misled - exactly as we see here with Amir's FFTs

 

10 minutes ago, pkane2001 said:

Speaking of which, why would USB-related noise increase with a more complicated signal? After all, data is still a stream of 1's and 0's when going over USB, regardless of what signal is encoded in it.

I think we've been over this ground already & I answered it then - so I'd advise you to go back & read my answers

 

But let's approach it a different way - you know about glitching in DACs right?

http://digital.ni.com/public.nsf/allkb/F1553EEAF787D5C486256CAE0069A323

Glitches occur due to the current draw changing when the DAC is going from handling all 0s to all 1s - the current draw suddenly jumps, causing noise to appear on the DAC's output.

 

Why would you think that other digital devices would be immune to this?
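A toy numerical model of the glitch mechanism described in the linked note (the sign-bit trigger and the 0.001 FS glitch size are invented for illustration, not measured from any device):

```python
import numpy as np

fs, n = 48000, 4800                # 0.1 s at 48 kHz
t = np.arange(n) / fs
code = np.round(32767 * np.sin(2 * np.pi * 100 * t)).astype(np.int32)

# Toy glitch model: whenever the sign bit of the code flips (the
# "all 0s to all 1s" style major transition), inject a small one-sample
# error into the analogue output.
ideal = code / 32768.0
out = ideal.copy()
glitch_idx = np.flatnonzero(np.diff((code >= 0).astype(int))) + 1
out[glitch_idx] += 0.001

# The resulting error is signal-dependent: a 100 Hz sine crosses the
# major transition twice per cycle, so the glitch energy tracks the
# signal rather than sitting at one fixed frequency.
err = out - ideal
```

Under this toy model a static tone at one amplitude produces a fixed, periodic glitch pattern, while music constantly varies how often and where the transitions occur - which is the signal-dependence being debated here.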

mansr   
21 minutes ago, pkane2001 said:

Wavelet analysis is not very new. If an FFT is done by running a fixed-size window function over the signal, wavelets do something similar, except using multiple different window sizes applied to the same data, resulting in a multi-scale analysis. This can be very useful for extracting large- or small-scale features from the signal.

Wavelets can be seen as a generalised Fourier transform using other basis functions than the sine.

25 minutes ago, mmerrill99 said:

Right but are they not also issues with FFT?

Just because they are problems in other tests as well, doesn't give FFTs a free pass! 

The problem is that many interpret what they see on an FFT as fully defining the signal & are misled - exactly as we see here with Amir's FFTs

 

A mathematical transformation that perfectly preserves data isn't the problem. Perhaps there's a problem in how it's used.

 

25 minutes ago, mmerrill99 said:

But let's approach it a different way - you know about glitching in DACs right?

http://digital.ni.com/public.nsf/allkb/F1553EEAF787D5C486256CAE0069A323

Glitches occur due to the current draw changing when the DAC is going from handling all 0s to all 1s - the current draw suddenly jumps, causing noise to appear on the DAC's output.

 

Why would you think that other digital devices would be  immune to this?

 

Glitching in a DAC is related to the analog conversion. The Regen, USB cables, and DDCs don't have an analog side. I still don't see a correlation between the complexity of the signal carried in the bits and our ability to detect noise at the output of the DAC with an FFT. An analog sinewave encoded into bits generates as random a sequence of bits as a two-tone or 100-tone signal, so the noise should be present regardless, if it's really there.

 

In any case, this is also not a real complication for doing FFT analysis. We can use a 100-tone signal instead of the 1-tone one as the input, if that makes the noise spikes more apparent.

 

2 hours ago, The Computer Audiophile said:

Thus, the conclusion below:

3 hours ago, tmtomh said:

 if a DAC does output something different with the ISO regen in the chain, then there's something deficient about the design of the DAC

 is very skewed. Can we say this about all products that benefit from another product outside the product itself? What I mean is, if products like the ISO Regen, power conditioners, isolation for turntables, etc. have an effect on a product, should we conclude that the product is deficient? I lean toward no, but this is just me.

I have the opposite viewpoint.  A well designed product should anticipate less than perfect real world conditions. In the industrial development lab, new products were tested under very harsh conditions.

There is no reason for any hi-fi product to not operate properly under commonly seen adversity.

36 minutes ago, pkane2001 said:

A mathematical transformation that perfectly preserves data (regardless of signal content) isn't the problem. Perhaps there's a problem in how it's used.

As I said, the issue is that people look at an FFT as showing the full information about a signal - I don't know what you don't understand about this. Do Amir & those who read him not think that the ISO Regen is doing zilch, based on his FFT?

Whatever excuse you make about where the fault lies - the result is the same - these FFTs are over-valued.

Do you think that Amir's FFT shows that the ISO Regen does nothing?

 

36 minutes ago, pkane2001 said:

Glitching in a DAC is related to the analog conversion.

You're stuck in this binary world-view - this is digital vs this is analogue

Yes, it's RELATED to the analog conversion - the current draw that results from handling the digital 1s & 0s affects, & can be seen on, the analogue output. So? This doesn't take away from the fact that it's the processing of the digital 1s & 0s causing the issues on the analog output

 

36 minutes ago, pkane2001 said:

Regen, USB cables, and DDCs don't have an analog side.

So? I'm not going back over what I & others already explained to you - it's as if you never heard it before. The waveform causing overshoot/ringing is one possible way that different noise profiles can be created.

 

Again, you have this view that digital is some sort of magical land where no interaction happens with the analogue side of things. These are systems, & systems thinking is needed, not blinkered silo-thinking

36 minutes ago, pkane2001 said:

I still don't see a correlation between the complexity of the signal that is carried in the bits and our ability to detect noise at the output of the DAC with an FFT.

Oh well

 

Yeah, you are doing lots of handwaving in your next bit

5 minutes ago, Speedskater said:

I have the opposite viewpoint.  A well designed product should anticipate less than perfect real world conditions. In the industrial development lab, new products were tested under very harsh conditions.

There is no reason for any hi-fi product to not operate properly under commonly seen adversity.

 

I look at it this way, NHRA funny cars are built for speed. They don't corner well. Building a funny car to corner well will reduce its top speed, making it a poor performer in the ultra high end drag race world. 

13 minutes ago, Speedskater said:

One of those funny cars crossed the finish line at 339.9 MPH last weekend.

 

But the task of a hi-fi product is to perform well under reasonably demanding conditions. If budget components can do it, all components should.  Or is someone suggesting that expensive products need perfect conditions to operate?

 

Some products require different conditions and as a result perform awesome. Spectral amps require MIT cables. The end result is terrific. 

mansr   
6 minutes ago, The Computer Audiophile said:

I look at it this way, NHRA funny cars are built for speed. They don't corner well. Building a funny car to corner well will reduce its top speed, making it a poor performer in the ultra high end drag race world. 

I don't see anyone selling add-on widgets (or replacement fuel lines) promising to improve the corner handling of drag racing cars. 

28 minutes ago, mmerrill99 said:

As I said, the issue is that people look at an FFT as showing the full information about a signal - I don't know what you don't understand about this. Do Amir & those who read him not think that the ISO Regen is doing zilch, based on his FFT?

Whatever excuse you make about where the fault lies - the result is the same - these FFTs are over-valued.

Do you think that Amir's FFT shows that the ISO Regen does nothing?

 

You're stuck in this binary world-view - this is digital vs this is analogue

Yes, it's RELATED to the analog conversion - the current draw that results from handling the digital 1s & 0s affects, & can be seen on, the analogue output. So? This doesn't take away from the fact that it's the processing of the digital 1s & 0s causing the issues on the analog output

 

So? I'm not going back over what I & others already explained to you - it's as if you never heard it before. The waveform causing overshoot/ringing is one possible way that different noise profiles can be created.

 

Again, you have this view that digital is some sort of magical land where no interaction happens with the analogue side of things. These are systems, & systems thinking is needed, not blinkered silo-thinking

Oh well

 

Yeah, you are doing lots of handwaving in your next bit

 

Huh? You misunderstand what I'm trying to do here. I'm not arguing with you about Amir's methods. I'm not even arguing with you about your noise conjecture. I want to test it, and I'm asking questions to try to understand what would be an appropriate test. You seem to be on the defensive, I'm not sure why. Are you interested in testing your conjecture? If not, then there's no point in further discussion. 

13 minutes ago, pkane2001 said:

 

Huh? You misunderstand what I'm trying to do here. I'm not arguing with you about Amir's methods. I'm not even arguing with you about your noise conjecture. I want to test it, and I'm asking questions to try to understand what would be an appropriate test. You seem to be on the defensive, I'm not sure why. Are you interested in testing your conjecture? If not, then there's no point in further discussion. 

I don't see you testing anything - just denying every point I make without any backup logic

FFTs, for example - I made some points about what we typically see as shortcomings in FFT testing. You reply that it's not an FFT issue.

 

It's fine being a devil's advocate, but when all you bring to the debate is denying everything that is said, I'm not really bothered

 

I'll ask you again & let's see if you can get on a more positive side of this - do you think Amir's FFT shows that the ISO Regen does nothing?

 

If not then how would you go about testing it?

 

What about testing the conjecture of current draw causing noise - how would you test this? (It's not MY noise conjecture - do you read John Swenson's posts?)

 

Your argument that the distribution of bits in a signal is random is ill informed & handwaving, as far as I'm concerned

This topic is now closed to further replies.
