
On the subject of "ringing"


semente

Recommended Posts

On 13/02/2018 at 11:09 AM, Miska said:

 

For PCM yes, "asymFIR". And it's on my TODO list for SDM outputs too. From my personal listening experience, I feel it is more like the "bad of both worlds" than the "good of both worlds".

 

 

What is it exactly that you don't like?

I've spent a bit of time comparing all three FIR filters and I don't dislike the asymFIR; in fact, it sounds quite natural to my ears.

I'm using the following tracks: The Peacocks by Bill Evans, Dance With Waves by Anouar Brahem and the 3rd movement of Cassadó's Cello Sonata by Janos Starker.

 

Unfortunately all three FIR filters sound somewhat unrefined when compared to the poly-sinc-xtr/xtr-mp family...

"Science draws the wave, poetry fills it with water" Teixeira de Pascoaes

 

HQPlayer Desktop / Mac mini → Intona 7054 → RME ADI-2 DAC FS (DSD256)

7 minutes ago, semente said:

What is it exactly that you don't like?

I've spent a bit of time comparing all three FIR filters and I don't dislike the asymFIR; in fact, it sounds quite natural to my ears.

I'm using the following tracks: The Peacocks by Bill Evans, Dance With Waves by Anouar Brahem and the 3rd movement of Cassadó's Cello Sonata by Janos Starker.

 

Unfortunately all three FIR filters sound somewhat unrefined when compared to the poly-sinc-xtr/xtr-mp family...

 

To me, it didn't really come close enough to minimum-phase sonically, although it is naturally quite far from linear-phase.

 

Those FIR filters are constructs more similar to DAC chip filters, although still better.

 

I already have a partial implementation of those asymmetric filters for the poly-sinc family (for almost a year), but I encountered a technical problem and moved on to other things in the meantime, so the functionality is not complete yet...

 

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

1 minute ago, Miska said:

 

To me, it didn't really come close enough to minimum-phase sonically, although it is naturally quite far from linear-phase.

 

Those FIR filters are constructs more similar to DAC chip filters, although still better.

 

I already have a partial implementation of those asymmetric filters for the poly-sinc family (for almost a year), but I encountered a technical problem and moved on to other things in the meantime, so the functionality is not complete yet...

 

 

I listen almost exclusively to acoustic music and up to now have preferred poly-sinc-short. It was only after I compared xtr LP with xtr MP that I wanted to hear what something in between would sound like.

Thanks for all the efforts.


  • 2 weeks later...
On 2/19/2018 at 3:17 PM, Miska said:

I already have a partial implementation of those asymmetric filters for the poly-sinc family (for almost a year), but I encountered a technical problem and moved on to other things in the meantime, so the functionality is not complete yet...

 

I hope you will release them soon Jussi!  :D

I generally use the poly-sinc-short filter. I am not partial to the current minimum phase version. I would really love an intermediate phase filter.

  • 3 weeks later...
On 2/10/2018 at 7:40 PM, buonassi said:

The next biggest tell is in an instrument that covers both high and low frequencies in the same transient: the bass kick drum. A kick drum should have a click or snap of high-mid frequency (when the beater strikes the drum head) followed by a decay of bass and sub-bass as the note decays. If the transients are aligned using a linear phase filter, what you hear is a slight "chuffing" of the sub-bass coming in just before the click of the beater. Some call this "ramp up".

 

I think my understanding has matured somewhat here.  Looking for others to validate this if possible.  I wrote the above a little over a month ago and have been racking my brain to understand the MP vs LP differences, both in critical listening and in reading.

 

What I think at this point is that I prefer minimum phase because I actually like the phase distortion. It is the phase distortion that is separating the attack "smack" of the drum versus the resonance of the drum. They are no longer perfectly aligned, and therefore the smack is easier to hear.

 

But I got the order wrong.  I'm pretty sure after reading that the HFs actually arrive shortly after the LFs when a MP filter is used, so what I'm likely hearing (and liking) is the separation of the drum attack vs its resonance, regardless of which comes first.  Because these transients happen so fast, my brain can't really tell if the smack is coming before or after the kick drum's bass resonance.  All my brain hears is better separation, and therefore better perceived attack, even if the smack is happening shortly after the bass bloom.

 

So much for my "chuffing" comment.  Gotta eat some crow on that.....

18 hours ago, buonassi said:

What I think at this point is that I prefer minimum phase because I actually like the phase distortion. It is the phase distortion that is separating the attack "smack" of the drum versus the resonance of the drum. They are no longer perfectly aligned, and therefore the smack is easier to hear.

 

This is quite interesting because to me it sounds like a (special) "effect", and I don't like it because it sounds less real or natural with the kind of (acoustic) music I listen to. But I found the 'ASYM' filter an "interesting" alternative to 'LP'.

 

@Miska does recommend 'LP' for acoustic music and 'MP' for pop/rock for a reason.


On 2/10/2018 at 11:50 PM, Spacehound said:

There is no such thing as a 'positive' blind test and no 'correct' or 'expected' answer.

 

All these tests ask is 'can you tell ANY difference?' It's not 'can you tell THE difference'.

Thus it is 'binary' (yes or no), so 100% objective: unless you CAN hear a difference you CAN'T have a 'preference'.

what if a person was told there are details coming from a set of speakers that they can't hear?

that person could prefer those details to exist in the system despite never hearing them, whether from wanting the details to exist for other people or simply to exist in the air space of their presence together.

 

then it would be a matter of 'needing to know' to hold a preference rather than 'needing to hear'.
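As a side note on the quoted point that such tests are binary: the usual way a yes/no listening run (e.g. ABX) is scored is just a binomial tail probability. A quick sketch, using only the standard library (the function name is illustrative):

```python
from math import comb

def abx_p_value(correct, trials):
    """Probability of scoring `correct` or better by pure guessing (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials
```

Twelve correct answers out of sixteen trials, for example, gives p ≈ 0.038, i.e. a result that unlikely from guessing alone, which is why it is usually taken as evidence of an audible difference.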


pre ringing to me is when the quantified airspace of what the convolver frames has a gap with the frame of normal allocated space.

that can only happen with say 16bit audio going through a 24bit convolver, because thanks to the size difference between the two there is room for a gap (doesn't work/happen when staying within the same bitdepth).

 

or another way to subjectively comprehend what i am attempting to say.

imagine you've got an audio track with phase that varies from 1 through 180 degrees.

if you run the audio file through a filter that processes all of the phase down by 3 degrees, there will be some spots in the audio file that are at -2 degrees, and when those samples attempt to play from the dac, it clips, causing a tiny squelch sound we perceive as pre-ringing.

it's because going beyond 0 isn't possible as the frame only exists as 0-360 (or cut it up more than 360 if you want).

it isn't the convolver's fault - it is the fault of the processing scheme done to the impulse response file.


lol, it isn't the impulse response file's fault either.

 

i like to think about quantization & audio, because i know what must be stripped away from the audio at lower bitdepths is the added sound of air, which sounds like spaciousness that can re-create an atmospheric pressure to re-live the night (minus the smells, though that's just another bitdepth problem).

 

again,

we started with midi sounds & those sounds come directly in front of the microphone - there is zero air of distance between instrument & microphone.

then increase the bitdepth & you get quantified frame space of allocated air.

it translates to a literal distance limit & thus the distortion leaving that allocated space trails off.

but this brings us to dither.

the reason why dither gets confused is because of the act of going from a higher bitdepth to a lower bitdepth & people end their thought or discussion there.

it doesn't matter if the distortion trails off as logarithmic or linear fashion, all that matters is the data sets are known.

if i started off at a bitdepth that was 1024, i could record a single click & the dithering would traverse that bitdepth from 1024 all the way down to 16bits & it would be the same exact 16bit sample if i re-ran the process of lowering the bitdepth from 1024 down to 16bits time and time again.

that means i can also dither up from 16 bits to 1024 with 100% accuracy every single time.

the same can be done for video data too, so don't lose those photos simply because the resolution is low.

 

**edit**

oh, the reason why i like to think about it is because - well let me give you an example.

imagine i wanted to pull a prank by taking a photo of the entertainment center & then removing it only to replace it with a full sized printed picture that gives the 3d illusion of the entertainment center still being there - but there's something wrong that is costly.

 

if that picture had plus or minus the truth of light from the ceiling fan hitting it, the 3d illusion would appear fake.

but imagine if they were low on red ink & offset the entire picture by some shade of red - then totally replaced all of the red by using a little red light added to the ceiling fan.

it could be done & it isn't entirely impossible, as long as the light absorption of the paper is included in the calculation of the final 3d rendering that is adjusted for the little red light being shined onto it.

 

i think of the same when removing air using quantization distortion & then putting all or some of the air back into the final rendering - simply because it is going to be there in the room anyways.

 

if you know the hue of the saturation, you can process the track offset with the opposite color & then when the two ends meet - the objective final color is met.

that goes the way of using higher bitdepths & bands traveling to specific areas because the atmospheric pressure adds to the reign of the song.

(oh, don't forget time of year too)

9 hours ago, anwaypasible said:

pre ringing to me is when the quantified airspace of what the convolver frames has a gap with the frame of normal allocated space.

that can only happen with say 16bit audio going through a 24bit convolver, because thanks to the size difference between the two there is room for a gap (doesn't work/happen when staying within the same bitdepth).

WTF?

 

9 hours ago, anwaypasible said:

imagine you've got an audio track with phase that varies from 1 through 180 degrees.

What is that supposed to mean?

4 hours ago, mansr said:

WTF?

 

What is that supposed to mean?

I think it means something in an alternate universe.  This one not so much.

And always keep in mind: cognitive biases, like optical illusions, are a sign of a normally functioning brain. We all have them; it's nothing to be ashamed of, but it is something that affects our objective evaluation of reality.

6 hours ago, mansr said:

What is that supposed to mean?

 

Perhaps the composer of such an audio track would just be going through a phase.

One never knows, do one? - Fats Waller

The fairest thing we can experience is the mysterious. It is the fundamental emotion which stands at the cradle of true art and true science. - Einstein

Computer, Audirvana -> optical Ethernet to Fitlet3 -> Fibbr Alpha Optical USB -> iFi NEO iDSD DAC -> Apollon Audio 1ET400A Mini (Purifi based) -> Vandersteen 3A Signature.

On 3/22/2018 at 6:46 PM, semente said:

 

This is quite interesting because to me it sounds like a (special) "effect", and I don't like it because it sounds less real or natural with the kind of (acoustic) music I listen to. But I found the 'ASYM' filter an "interesting" alternative to 'LP'.

 

@Miska does recommend 'LP' for acoustic music and 'MP' for pop/rock for a reason.

 

The ASYM is for asymmetrical, meaning "intermediate" phase, right? I've had pretty good results with this type of filter as well.

 

I started going back and reading some of @Miska's older posts about his filters in HQPlayer. What I came across confused me even more. Originally, I believed that low frequencies were delayed relative to the highs in a minimum phase filter, because that's what I was hearing. Then it was pointed out in other posts that I may have it backward: it's the highs that are delayed relative to the lows. Now, after reading this, I'm back to believing the lows are delayed, which sounds more natural to me (bass kick drum example).

 

"'Linear phase filter' is a filter where all frequencies pass with the same time delay. 'Minimum phase filter' is a filter where all frequencies pass through as fast as possible, higher frequencies faster than lower ones." https://www.computeraudiophile.com/forums/topic/13071-hqplayer-resampling-filter-setup-guide-for-ordinary-person/?do=findComment&comment=175928

 

 

Maybe an expert on the subject can just put this to bed once and for all. From this statement I'm inclined to believe the lows are being delayed and arrive at the eardrum shortly after the highs do. So which is it: highs or lows delayed when using minimum phase?
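One way to probe this question numerically rather than settle it by assertion is to compare group delay, the per-frequency arrival time in samples. A sketch, assuming NumPy; the windowed-sinc lowpass and its cepstral minimum-phase counterpart below are illustrative stand-ins, not HQPlayer's actual filters:

```python
import numpy as np

def windowed_sinc(num_taps=101, cutoff=0.25):
    """Linear-phase lowpass: symmetric windowed sinc (cutoff in cycles/sample)."""
    n = np.arange(num_taps) - (num_taps - 1) / 2
    h = 2 * cutoff * np.sinc(2 * cutoff * n) * np.hamming(num_taps)
    return h / h.sum()

def minimum_phase_version(h, n_fft=4096):
    """Minimum-phase filter with the same magnitude response (real-cepstrum method)."""
    log_mag = np.log(np.maximum(np.abs(np.fft.fft(h, n_fft)), 1e-12))
    cep = np.fft.ifft(log_mag).real
    w = np.zeros(n_fft)
    w[0] = 1.0
    w[1:n_fft // 2] = 2.0
    w[n_fft // 2] = 1.0
    return np.fft.ifft(np.exp(np.fft.fft(w * cep))).real[:len(h)]

def group_delay(h, n_fft=8192):
    """Group delay in samples vs. normalized frequency (0..0.5 cycles/sample)."""
    freqs = np.fft.rfftfreq(n_fft)
    phase = np.unwrap(np.angle(np.fft.rfft(h, n_fft)))
    gd = -np.diff(phase) / (2 * np.pi * np.diff(freqs))
    return freqs[:-1], gd

h_lin = windowed_sinc()
f, gd_lin = group_delay(h_lin)
_, gd_min = group_delay(minimum_phase_version(h_lin))
passband = f < 0.15   # well inside the 0.25 cutoff, away from the transition band
```

The linear-phase filter delays every passband frequency by the same (num_taps - 1) / 2 samples, while the minimum-phase version's delay is smaller on average but varies with frequency; that frequency-dependent delay is exactly the phase distortion being discussed, and plotting `gd_min` against `f` shows how it is distributed for this particular construction.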

9 hours ago, buonassi said:

 

@Miska The ASYM is for asymmetrical, meaning "intermediate" phase, right? I've had pretty good results with this type of filter as well.

 

 

I think so. At least from my understanding of Miska's description.

@Archimago writes about it here:

 

[Image: Impulse-Annotated.png, annotated impulse-response comparison from the linked article]

 

http://archimago.blogspot.co.uk/2018/01/musings-more-fun-with-digital-filters.html

 

9 hours ago, buonassi said:

I started going back and reading some of @Miska's older posts about his filters in HQPlayer. What I came across confused me even more. Originally, I believed that low frequencies were delayed relative to the highs in a minimum phase filter, because that's what I was hearing. Then it was pointed out in other posts that I may have it backward: it's the highs that are delayed relative to the lows. Now, after reading this, I'm back to believing the lows are delayed, which sounds more natural to me (bass kick drum example).

 

"'Linear phase filter' is a filter where all frequencies pass with the same time delay. 'Minimum phase filter' is a filter where all frequencies pass through as fast as possible, higher frequencies faster than lower ones." https://www.computeraudiophile.com/forums/topic/13071-hqplayer-resampling-filter-setup-guide-for-ordinary-person/?do=findComment&comment=175928

 

 

Maybe an expert on the subject can just put this to bed once and for all. From this statement I'm inclined to believe the lows are being delayed and arrive at the eardrum shortly after the highs do. So which is it: highs or lows delayed when using minimum phase?


