mitchco

  1. Note that HLC loops through each filterbank, calculates the delay for each filter, and displays the latency samples for the filter with the most delay. HLC automatically adjusts the delta delay for each filterbank to match the filter with the longest delay, so that there is no gap when switching filters, regardless of whether they are minimum, linear, or mixed phase. So just load minphase FIR filters into the filterbanks and the latency should be 0 or 1 samples if using minphase digital XO's. Save that as a minimum phase filtergraph. Then create a new filtergraph with your regular linear phase music filters and save that filtergraph. Now you can easily switch between the two filtergraphs depending on whether you are listening to music or watching movies. A sketch of the delay-matching idea follows this post.
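A minimal sketch of the delay-matching idea described above (illustration only, not HLC's actual code; the latency values are hypothetical):

```python
# Each filterbank gets a delta delay so that all banks report the same
# total latency, making in-graph filter switching seamless.
latencies = {"linear_phase_music": 37_863, "min_phase_movie": 0}  # samples (hypothetical)

longest = max(latencies.values())
delta_delay = {name: longest - lat for name, lat in latencies.items()}
print(delta_delay)  # {'linear_phase_music': 0, 'min_phase_movie': 37863}
```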
  2. Thanks Chris. Hi @storab HLC reports the filter's latency in samples in the bottom right-hand corner of the UI. To reduce the filter latency, generate a second set of filters that are minimum phase FIR filters. This reduces the filter latency to 0 ms, and HLC itself is a 0 ms latency convolver, so the only delay will typically be the audio buffer size you set in the HLHost Audio Settings dialog for the DM7. I have two presets: one for music with the linear phase FIR filters, and one for movie watching with the minphase FIR filters. A sketch of the minimum phase idea follows this post. Hope that helps.
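To illustrate why minimum phase filters report ~0 latency, here is a sketch using SciPy. In practice the DRC software generates the minphase filters directly; this is only a demonstration of the concept:

```python
import numpy as np
from scipy.signal import firwin, minimum_phase

# A linear phase FIR peaks in the middle: latency = (N - 1) / 2 samples.
lin = firwin(65_537, 0.3)
# A minimum phase filter with (approximately) the same magnitude response
# concentrates its energy at the start: latency ~ 0 samples.
minph = minimum_phase(lin, method="homomorphic")
print(np.argmax(np.abs(lin)))    # 32768
print(np.argmax(np.abs(minph)))  # ~0
```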
  3. @Markus8 Yup, I have been in touch with David. Great to see FFD on Mac! I hope to update the review, but currently don't have the time or an ETA. Will circle back when I do.
  4. @kalpesh As I said back in 2018 at ASR, I am in total agreement. The fundamental issue is that there are not many well designed speakers, certainly not compared to the "ideal" speaker. Using modern DSP/DRC with frequency dependent windowing, we can let low frequency room reflections into the measurement microphone and use DSP to smooth out those room modes and the non-minimum phase response (a sketch of frequency dependent windowing follows at the end of this post). As we transition from room modes (i.e. waves) to rays, we let less of the room's reflections into the mic, so we are only correcting the direct sound of the speaker and not the room reflections. In other words, above 800 Hz or so we are only using DSP/DRC as a tone control. Just like a Klippel scanner can "window out" low frequency room reflections to measure a speaker, SOTA room correction s/w can "window out" the room's reflections above a user selected frequency. Unfortunately, this is not well understood, which is one of the main reasons I made this video to explain it in detail.

As for the downward tilted response, Floyd explained it here. I spent 10 years as a pro recording/mixing engineer in a variety of studios with many calibrated monitors, none of which had a flat response at the listening position. Unless one was deaf, that is way too bright sounding. For sure, some DRC s/w does indeed EQ for flat, but it is not SOTA, as Sean Olive has pointed out in his presentation on The Subjective and Objective Evaluation of Room Correction Products. See slides 22, 23, and 24 for why a flat in-room response is not the preferred target. Yah, "target."

It would be nice if all speakers were well designed and all rooms had a favorable low frequency distribution of room modes and the right amount of diffusion and absorption. But for most of us, the reality is far from ideal, so we find and use the best tools available to deal with that reality. The reality is that of my clients who have compared partial corrections versus full range corrections (i.e. think tone control in the mids and highs = full range correction), over 95% choose the full range correction.

Folks do have preferences when it comes to tonal response. Some like a bass bump; some like to tilt down the high frequencies a bit more than others. It is not about one size (i.e. target) fits all. It is about what one prefers, while at the same time taking care of room modes and making the frequency response from both (or MCH) speakers equal so that there is a rock solid phantom center image; smoothing the bass response so it is even and clear sounding; making the timing response so all direct sound arrives at one's ears at the same time to improve the imaging; and keeping the timing response of each speaker the same over time, so that the depth of field opens up in a way most folks have not heard before. But I am biased :-)
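Here is a minimal sketch of frequency dependent windowing, assuming an impulse response measured with the direct sound at t = 0. Real DRC software uses far more refined implementations; this only shows the core idea:

```python
import numpy as np

def freq_dependent_window(ir, fs, cycles=15):
    # At each frequency, evaluate the response using only the first
    # `cycles` cycles of the impulse response. Low frequencies thus
    # "see" the room's reflections; high frequencies mostly see the
    # direct sound. O(N^2) -- fine for a demo, not production code.
    n = len(ir)
    t = np.arange(n) / fs
    freqs = np.fft.rfftfreq(n, 1.0 / fs)[1:]  # skip DC
    spectrum = []
    for f in freqs:
        win_len = cycles / f  # window length in seconds at this frequency
        # half-Hann taper from t = 0 out to win_len
        w = np.where(t < win_len, 0.5 * (1 + np.cos(np.pi * t / win_len)), 0.0)
        spectrum.append(np.sum(ir * w * np.exp(-2j * np.pi * f * t)))
    return freqs, np.array(spectrum)
```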
  5. Just to clarify, Audiolense is a PC DSP/DRC FIR filter designer application that runs on Windows. However, the convolution filters it generates are platform independent: they can be loaded into any convolver and run on any platform (see the sketch after this post). Of course, if multichannel, the convolver must also support multichannel.
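As an illustration of that platform independence, a sketch of offline convolution in Python, assuming the filters were exported as a stereo WAV (the filenames are hypothetical):

```python
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

# The correction filter is just an audio file, so any convolver
# (or a few lines of code) can apply it on any platform.
music, fs = sf.read("track.wav")                 # shape (n, 2)
fir, fir_fs = sf.read("correction_filter.wav")   # shape (m, 2)
assert fs == fir_fs, "filter must match the track's sample rate"

out = np.column_stack([fftconvolve(music[:, ch], fir[:, ch]) for ch in (0, 1)])
out /= np.max(np.abs(out))  # normalize to avoid clipping
sf.write("track_corrected.wav", out, fs)
```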
  6. Jeffrey brings up a very important point that I hope he does not mind me expanding on. Considering that the timing response of one's room and speakers is half of what makes for a great sounding system, particular care must be taken to get high quality "timing" measurements. The rule of thumb is to take 3 sets of measurements back to back and inspect each channel's delay. All 3 measurements should be consistent across like channels to within a ±0.02 ms tolerance (a sketch of this consistency check follows this post). Unfortunately, with USB mics one typically sees millisecond differences across like channels. Which measurement is correct? The answer is none.

We are trying to precisely synchronize the playback stream of the test signal with the recording stream from the mic's input on a channel by channel basis. The issue is 2 clocks: one in the USB mic's ADC and the other in the DAC. While most measurement software can compensate for clock drift, it can't compensate for the variable start and stop times of the test sweep signal and the input recording stream. While there are exceptions with USB mics in some setups, generally, the more channels, the worse the timing issues become. It does not matter which DAC is being used in this scenario, as the issue is the two separate clocks that shall never meet timing wise.

As Jeffrey says, any calibrated analog measurement mic from Earthworks, or the iSEMcon EMX 7150 calibrated measurement microphone, is SOTA and has a proven track record in the field. Professional interfaces from brands like Merging Technologies, Lynx Studio, RME, and Motu have high quality mic preamps built in (and ADCs, not only for the mic pre but for TT's or any other analog output that requires DRC). Since the ADC/DAC is under one clock, consistent and repeatable timing measurements are pretty much guaranteed. These devices also typically have excellent audio drivers that are bulletproof and reliable, as poorly written audio drivers are another source of measurement pain. Of course, there are exceptions, and this is not an exclusive gear list. But do your research.

A side benefit of most pro interfaces is that onboard digital loopback is supported. So apps (or system wide audio) that don't have convolution can use the interface to loop the digital signal back through a standalone convolver and out the DAC, without having to install a virtual audio driver or another piece of h/w. Here is an example with the Tidal app using digital loopback in the Motu mk5.
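A minimal sketch of the consistency check, using the peak of the impulse response as the arrival time (a simplification; measurement software estimates arrival more robustly):

```python
import numpy as np

def arrival_ms(ir, fs):
    # Crude arrival-time estimate: index of the IR peak, in milliseconds.
    return np.argmax(np.abs(ir)) / fs * 1000.0

def timing_consistent(irs, fs, tol_ms=0.02):
    # irs: impulse responses of the SAME channel from the three
    # back-to-back measurement runs.
    times = [arrival_ms(ir, fs) for ir in irs]
    return max(times) - min(times) <= tol_ms, times
```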
  7. Hey there Keith, great feedback. Just to clarify a few items. Acourate, Audiolense, and Focus Fidelity Designer all produce SOTA FIR filters for DRC. But there is a reason for choosing Audiolense for Chris's Atmos system, which I will get to in a minute. While you may have seen Audiolense in action, it is not as automated as Dirac or SoundID. There are no wizards to guide one through the process, so unless you know what you are doing, it can be just as opaque as Acourate. Also, there are several features in Audiolense that you may be unaware of, since you did not bring them up (including varying XO widths, VBA, etc.). It is worth checking out in more detail.

But here is the crux of the issue. Chris has 12 speakers to time align. You already know the manual effort required to time align a stereo triamp using Acourate. But 12 separate speakers to time align is a pile of work of a different sort in Acourate. Because of the number of manual steps required, the possibility of human error increases exponentially. So, for the first measurement round, the manual time alignment process needs to be repeated 3 times to verify and validate the time alignment. Now throw in bass management, where each speaker has its own digital XO, with its own crossover point, and bass offloading (routing) to the sub, which means digital summing as well. In the end, that means Chris's Atmos system requires 23 convolvers (see the sketch after this post). How would one do that in Acourate? Even if it were possible, it would take several hours, perhaps days of effort. Conversely, in Audiolense, it takes about 10 minutes to design the Atmos speaker setup, including digital XO's and bass management (assuming one knows what one is doing), and another 5 to 10 minutes to take a 12-channel measurement, which also takes care of the time alignment as part of the measurement process. We have not even got to designing a filter yet. I hope you can see that it is not just a matter of features or functionality, but of being practical about what is doable and what isn't.

Wrt my DRC calibration service, to clarify a point: at the end of the project, I produce a step-by-step video walkthrough showing folks how to replicate what I did after I do the heavy lifting. That way my clients can replicate my work and verify it by comparing the final filter I designed with the replicated filter. I also point out what to watch out for and areas for experimentation. It is not just the filters folks are paying for, but also a knowledge transfer, so folks have a leg up on designing and developing their own filters in the shortest time possible. And maybe beating what I have done 😉 While many folks that use my service are new to DSP/DRC, I also have intermediate and advanced users looking to squeeze the last bit of performance out of their systems. Having calibrated over 250 systems of every type imaginable from around the world, I learn something new with each one. I have considerable experience to transfer and am happy to do so.
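The convolver arithmetic as a sketch (the channel names are hypothetical; the counts match the system described above):

```python
# 12 direct channels, each with its own correction convolver.
speakers = ["L", "R", "C", "Sub", "SL", "SR", "RL", "RR",
            "TFL", "TFR", "TRL", "TRR"]   # hypothetical 7.1.4 layout
direct = len(speakers)            # 12 convolvers
# Every speaker except the sub also offloads its low bass to the sub
# through its own digital XO plus routing/summing.
bass_offload = len(speakers) - 1  # 11 convolvers
print(direct + bass_offload)      # 23
```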
  8. Yes, exactly. If you have a 20 dB peak-to-peak difference between a dip and a peak, which is not unusual in rooms with room modes, then bass traps, and even membrane or Helmholtz resonators, do next to nothing below 100 Hz to alleviate that. Even worse, they tend to suck out the response in the 200 to 300 Hz range, because now there is too much absorption in that range. Source: I worked for an acoustics company years ago and developed/manufactured bass traps, diffusers, etc., for recording studios. We even got a write-up in Andrew Marshall's Audio Ideas from a very long time ago. See attached.

Not following here... There are no overlapping sounds, unless you are talking about the overlap in the digital crossover...? Or are you talking about no crossover at all? The latter is not the best (or easiest) way to integrate subs with mains. Acourate's and Audiolense's digital XO's sum perfectly in both the frequency and time domains (a sketch of this summing follows this post). In addition to time alignment, I recommend excess phase correction to remove/shape low frequency room reflections (we all got 'em) so that the timing response is identical for all speakers "over time." If you are talking about not using digital XO's between the sub(s) and mains, yah, that is not the best way to go, as integration is very difficult and the sound is typically unclear in the bass. Blurred is how I would describe it. With each speaker's frequency and timing response tuned, you get not only a highly transient system, but bass that literally "sits in the pocket," with a rock solid phantom center image.
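A sketch of the textbook complementary-pair construction (not necessarily Acourate's or Audiolense's exact method), showing how a linear phase XO can sum perfectly in both domains:

```python
import numpy as np
from scipy.signal import firwin, unit_impulse

fs = 48_000
N = 4_095                          # odd tap count -> linear phase
lp = firwin(N, 100, fs=fs)         # 100 Hz linear phase lowpass
hp = unit_impulse(N, 'mid') - lp   # complementary linear phase highpass
# Their sum is exactly a delayed impulse: flat magnitude response and
# perfect reconstruction in the time domain.
assert np.allclose(lp + hp, unit_impulse(N, 'mid'))
```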
  9. Re: time alignment. Sound travels roughly 1 foot per millisecond. Even with my subs located in the same horizontal plane, they are over 3 milliseconds behind the mains. I can tell fairly easily whether or not they are time aligned with the mains. Misalignment sounds like bass overhang, or a thickening of the bass sound, like hearing the click of the beater against the bass drum and then the oomph after. Time alignment makes for a much sharper transient impact. It is easy to test, too: I can make FIR filters with and without time alignment, keeping all other variables the same. Then, using HLC, I can A/B level matched filters and pick out the difference. I made a YouTube video about it. A sketch of the delay arithmetic follows this post.

Yah, this is not the same thing. If you were to delay an audio signal by 160 ms, you would hear a discrete echo, as 160 ms translates into roughly 160 feet of sound travel. Or put another way, if you delayed your sub by 160 ms, it would be the physical equivalent of moving it 160 feet away... Here is a quick video for folks that want to train their ears to hear that even short delays of 5 ms or 3 ms are audible. While it is aimed at music production, it is a great way to tune one's ears to know what to listen for.
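The delay arithmetic as a quick sketch, using the 1 foot per millisecond rule of thumb from above (~1.13 ft/ms more precisely; the sample rate is assumed):

```python
FS = 48_000      # assumed sample rate (samples/s)
FT_PER_MS = 1.0  # rule of thumb; ~1.13 ft/ms at room temperature

def path_delay(extra_feet, fs=FS):
    # Delay caused by an extra path length, in ms and in samples.
    ms = extra_feet / FT_PER_MS
    return ms, round(ms * fs / 1000)

print(path_delay(3))    # (3.0, 144)    -- the sub example above
print(path_delay(160))  # (160.0, 7680) -- heard as a discrete echo
```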
  10. Great topic @ecwl The most important aspect I have found in integrating subwoofer(s) is time alignment, and unfortunately, it is not easy to do. The reason why is the long wavelengths involved. There are a couple of ways of doing it manually, with REW and Acourate for example, but Audiolense provides an automated approach which works extremely well. I think subs sometimes get a bad rap because of lack of time alignment and EQ, as the sub operates in the room's modal region, which can mean large peaks and cancellations. Unfortunately, passive acoustic treatment below 100 Hz is all but useless in a home environment, so the best bet is DSP, and ideally DSP that has excess phase correction at low frequencies. This is what makes for "clear" sounding bass.

In the case of Chris's system, each speaker, including the sub, is time aligned. Further, Audiolense has bass management, so digital crossovers are used to offload low bass to the sub. One can choose at what frequency the crossover kicks in on a speaker by speaker basis. So while Chris's system is 12 channels, there are actually 23 convolvers required: 12 for the direct channels, and then each channel (except the sub) has a digital crossover and routing/summing to the sub. This results in a much cleaner sound where one isn't taxing the lower limits of each speaker. The speaker's specs and in-room measurements give us a great guideline for where to cross to the sub(s).

In my case of a stereo triamp system, I run 2 x Rythmik F18 subs in stereo beside my mains. Looking at the frequency response of the mains gives me a good idea of where to cross the subs, in between room modes, for the best integration. The subs are time aligned, and with 1800 watts of power to the dual 18" subs and 1000+ watts to the mains (2 x 15" woofers working below 600 Hz), there is enough for some serious impact, especially when all drivers are time aligned. Subs add "weight" to the sound, and it seems you can never have enough :-) I remember one of the concerts I attended at a young age was seeing Jeff Beck and Jan Hammer in my sleepy little town. They were using Community Light and Sound "boxer" bass bins that blew my mind: feeling the kick drum in the stomach and the bass vibrating your body. And that was before live sound went stupid loud. Subs can be hard to integrate (time alignment and EQ required) but can certainly elevate one's system to the next level of musical enjoyment.
  11. Hang Loose Convolver (“HLC”) is now available on Raspberry Pi4.

Requirements: Raspberry Pi4 4GB (2GB is likely to work), 64-bit OS Debian version 11 (bullseye).

Performance: processes 32 channels of convolution using 65,536 tap length FIR filters at a 48 kHz sample rate.

Example: playing a 7.1.4 Dolby Atmos (already decoded) music file using the VST3 plugin AudioFilePlayer, with HLC configured for 12 channels and 2-channel I/O being summed. While this example plays a 7.1.4 (12 channel) file, the FIR filterset has digital XO’s and bass management built in. So 12 channels of direct signal plus 11 channels of bass offloading means 23 channels of discrete convolution are being processed, and there is still considerable CPU headroom and buffer size left (a back-of-envelope cost estimate follows this post).

Note: HLC is a zero latency convolver, meaning no signal delay is added by the convolution engine. Therefore, one can process 65,536 tap minimum phase FIR filters immediately with no added signal delay. This is good for situations where lipsync is required but you still want the high-resolution FIR filtering capability of 65,536 tap length filters.

HLC comes with HLConvolverHost, which allows you to plug in virtually any VST3 plugin for additional processing. The simple audio settings dialog lets you easily choose inputs and outputs, sample rate, and buffer size, so you can be up and running in minutes. Updated Operations Guide.
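A back-of-envelope cost estimate for the 23 channels, assuming uniformly partitioned FFT convolution at the host buffer size. This is illustrative only: HLC's engine internals are not public, and true zero-latency engines use non-uniform partitioning, which costs somewhat more:

```python
import math

fs, taps, channels = 48_000, 65_536, 23
B = 512                        # assumed host buffer / partition size
P = math.ceil(taps / B)        # partitions per filter
n = 2 * B                      # FFT size per block (overlap-save)
fft_flops = 2 * 5 * n * math.log2(n)  # one forward + one inverse FFT
mac_flops = P * 8 * (B + 1)           # complex MACs across all partitions
per_channel = (fft_flops + mac_flops) * fs / B
print(f"~{channels * per_channel / 1e9:.1f} GFLOP/s total")  # ~1.4
```

Even with generous overhead, that load is plausibly within a Pi4's reach, which is consistent with the headroom observed above.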
  12. Hi Deric, Frequency resolution = fs / N, where fs is the sample rate and N is the number of filter taps. So a 65,536 tap FIR filter at 48 kHz has a frequency resolution of 48000/65536 = 0.732 Hz. The frequency range spans 0 Hz to 24 kHz (i.e. fs/2). So, thinking of a FIR filter as a graphic equalizer: 24000/0.732 = 32,768 sliders for our FIR equalizer. Remember 1/3 octave (i.e. 31 band) EQs? Our FIR example has 1000 times the frequency resolution of a 1/3 octave equalizer. A rough rule of thumb is that the effective low frequency limit of the filter is the frequency resolution multiplied by 3, which is 3 x 0.732 Hz = 2.2 Hz (these numbers are computed in the sketch after this post). There is not much to gain (if anything) by using a 131,072 tap filter, as the 65,536 tap filter is already well beyond our ears' frequency resolution and low frequency limit. One can certainly try lower/higher tap counts, but 65K seems to be the sweet spot for DRC. You could try generating the same filter at different tap lengths, import the correction filters in REW, and look at the frequency responses. You could also load them up in HLC, for example, and compare level matched filters of various tap lengths to see if you can hear a difference...
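The arithmetic from above as a quick sketch:

```python
fs = 48_000   # sample rate (Hz)
N = 65_536    # FIR filter taps

df = fs / N               # frequency resolution: ~0.732 Hz per "slider"
sliders = (fs / 2) / df   # 32,768 bins from 0 Hz to fs/2
lf_limit = 3 * df         # rule-of-thumb low frequency limit: ~2.2 Hz
print(round(df, 3), int(sliders), round(lf_limit, 1))  # 0.732 32768 2.2
```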
  13. @dericchan1 yes, stick with 24/48 kHz as there is no value going beyond. As @The Computer Audiophile says, the mic cal file is good to 20 kHz.
  14. A couple of updates: the HLC Linux version is available for download. Please send me an email if you wish to target a different distro than Ubuntu.

Most Digital Audio Workstations (DAWs) support “automatic delay compensation” when using VST3 or AU plugins that have latency. The idea is that the plugin reports its latency to the DAW and the DAW compensates for it. So when tracking, or producing audio for video postproduction, the tracks and/or video are delayed by the number of samples reported by the plugin so that the audio lines up perfectly with the video. While HLC is a 0 ms latency convolver, some FIR filters have inherent delays, for example linear phase FIR filters. The latency changes based on: the type of FIR filter used, the sample rate, the number of filter taps, how much excess phase correction has been applied, and, if using digital XO’s, whether they are minimum or linear phase.

HLC now reports the FIR filter latency to the host for automatic delay compensation: a 131,072 tap minimum phase FIR filter with minimum phase digital XO reports 0 latency samples; a 131,072 tap minimum phase FIR filter with linear phase digital XO reports 5,540 latency samples; and a 131,072 tap linear phase filter with linear phase digital XO reports 37,863 latency samples. And if the sample rate changes and HLC does not find a matching FIR filter for that sample rate, the FIR filter is resampled to maintain its frequency resolution and reports the new latency samples (a sketch of that scaling follows this post).

Great for DAWs, but what about consumer applications? Working with the folks at JRiver, the latest version of JRiver now supports automatic latency compensation. This allows one to use full tap length linear phase FIR filters with excess phase correction and not have any lipsync issues while watching movies. If folks can think of other consumer (or pro) applications that support the plugin model, please send me an email. I can work with the developer to implement automatic latency compensation. Happy listening!
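A sketch of how resampling scales both the tap count and the reported latency (illustrative arithmetic only, not HLC's code):

```python
def resampled_filter(taps, latency, old_fs, new_fs):
    # Keep frequency resolution fs / N constant: taps scale with the
    # rate, and so do the latency samples.
    ratio = new_fs / old_fs
    return round(taps * ratio), round(latency * ratio)

# e.g. the 48 kHz linear phase example above moved to 96 kHz:
print(resampled_filter(131_072, 37_863, 48_000, 96_000))
# (262144, 75726) -- same ~0.366 Hz resolution, double the latency samples
```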
  15. Hi @cfisher I agree. I listened to the SR-1b FIR filters in Roon's convolution engine, HQP, JRiver, and HLC. I could not hear a difference. I wanted to take it a bit further and measure Roon's convolution engine to see if the filter is being corrupted in any way. The short answer is no.

I used REW to generate a "sweep" file that could be played in Roon. A 5 Hz to 22,050 Hz, 44.1 kHz sweep file was played in Roon with no convolution (or any other DSP) applied. I used BlackHole as a virtual loopback driver to route the output of Roon back into REW's input to be measured. This is a "control" test to verify that we get the expected flat frequency and phase response. Sure enough: perfectly flat frequency and phase response.

Next, I loaded a "test" high resolution headphone FIR filter at 65,536 taps into Roon's convolution engine. I noted that while the sweep was playing, Roon displayed the "bug" of 22K taps. Of course, it should be displaying 65,536 taps, or 66K in Roon speak. And the measurement: I made the test headphone filter complex to expose any inconsistencies.

Let's compare using another convolver. I set up a test where the output of Roon goes into BlackHole, the output of BlackHole goes into the input of Hang Loose Convolver (HLC), and the output of HLC, using another virtual audio driver called Ground Control, routes back into the input of REW. I loaded HLC with a Dirac pulse 65,536 tap FIR filter, which is a "do nothing" FIR filter, so that again, as a control test, we see the expected flat frequency and phase response. Sure enough. Now loading the same test FIR filter, we see HLC correctly reporting 65,536 taps. And the measurement result looks the same as Roon's convolver test.

Let's make sure by overlaying the results. Frequency response: identical. Phase response: identical (a sketch of this comparison follows this post).

Conclusion: while Roon may have a convolution bug of some sort, it is definitely not affecting or corrupting the frequency or phase response of these high resolution headphone FIR filters. Back to listening to music.
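For completeness, a sketch of the kind of check the overlays perform, assuming the two measured impulse responses were exported from REW:

```python
import numpy as np

def overlay_match(ir_a, ir_b, tol_db=0.1, tol_deg=1.0):
    # Compare two measurements the way the overlays above do:
    # magnitude and phase difference across the band.
    n = max(len(ir_a), len(ir_b))
    A, B = np.fft.rfft(ir_a, n), np.fft.rfft(ir_b, n)
    eps = 1e-12
    mag_db = 20 * np.log10((np.abs(A) + eps) / (np.abs(B) + eps))
    phase_deg = np.degrees(np.angle(A * np.conj(B)))
    return np.max(np.abs(mag_db)) < tol_db and np.max(np.abs(phase_deg)) < tol_deg
```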