Keith_W

  • Posts: 442
  • Country: Australia
  • Member Title: Member

  1. I should point out that Toole thinks we should correct below Schroeder. Above Schroeder, he says it is OK to apply broad tone controls, but he thinks that correcting to a target curve is the wrong approach. He is essentially arguing against more granular correction - after all, a "broad tone control" is the same as a very loosely applied target curve. When you make your correction, it is trivial to select how granular you want it to be. You can make it really hug the target curve, you can make it merely "trend" towards the target curve (the Toole-approved approach), or anything in between. The granularity of the correction is a continuous variable (see the sketch below). Since it is so easy to do, you can make a range of filters and listen to which one you think sounds best. I don't have a firm opinion on this one, so I do both. I can flip between the filters with ease - a push of a button on my convolver and I have a new filter loaded. And if you use HLC, the music doesn't even stop. Of course, making the on-axis response look perfect might mess up the off-axis response, so I regularly check what is going on off-axis as well. Same procedure, 45 degrees off axis, and your choice of SPS or MMM. And BTW, I wouldn't dismiss the Magic Beans idea. Do you understand how it works? Those videos don't do a very good job of explaining it; you really have to watch a few of them and aggregate the information to understand the intention. I posted a summary of my findings in that ASR thread, and joentell thought it was an accurate description of the method.
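For anyone who wants to experiment with this, here is a minimal sketch of what I mean by granularity being a continuous knob. It assumes you have exported the measured magnitude response and the target as frequency/dB arrays on the same log-spaced grid (e.g. from REW); the function names are my own illustration, not anything from Acourate.

```python
# Sketch: "granularity" of a correction treated as a continuous knob.
# Assumes measured response and target are dB arrays on the same frequency grid.
import numpy as np

def smooth_fractional_octave(freqs, db, fraction=3.0):
    """Smooth a dB magnitude response over 1/fraction-octave windows."""
    out = np.empty_like(db, dtype=float)
    for i, f in enumerate(freqs):
        lo, hi = f * 2 ** (-0.5 / fraction), f * 2 ** (0.5 / fraction)
        mask = (freqs >= lo) & (freqs <= hi)
        out[i] = db[mask].mean()
    return out

def correction_db(freqs, measured_db, target_db, fraction=3.0, max_boost_db=6.0):
    """Correction = target minus smoothed measurement. Heavy smoothing (small
    'fraction') only trends toward the target; light smoothing hugs it closely."""
    smoothed = smooth_fractional_octave(freqs, measured_db, fraction)
    return np.minimum(target_db - smoothed, max_boost_db)  # cap boosts to protect headroom
```

Generating a family of corrections with fraction = 1, 3, 6 and 12 gives you everything from a broad tone control to a curve-hugging filter, and you can simply listen to each.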
  2. If you look further in that ASR thread you linked, you will see my experiments with Magic Beans. I do not agree with Toole's opinion, because I don't think he understood the objective of Magic Beans. Toole is famously against custom target curves that replicate the Harman curve. In that same thread he said it is not a Harman "target", it is a Harman "result", i.e. what the speaker naturally reproduces at the MLP if the speaker was designed to be flat under anechoic conditions. The idea of Magic Beans is to restore the anechoic performance of the speaker by removing the "room transfer function". It is not about artificially creating a Harman target. And even if it were, I have no objection to it, provided that the speaker directivity is constant and it does not mess up the off-axis response. Toole also thinks that this is some kind of money-making gimmick. This is not true; anybody who has DSP software with the ability to manipulate curves can copy the workflow and implement Magic Beans correction for free. I posted an Acourate workflow in the same thread. That experiment did not cost me a cent. In any case, I have said upthread that I have no business arguing with Toole. I personally think he is incorrect on this point, but you should remember that my opinion carries no weight. I am an amateur hobbyist. He is Toole. Having said that, I am under no obligation to do everything he says with my own system. Unfortunately, my speakers are not a candidate for Magic Beans correction. This is because they have mixed driver types, with horns above 500 Hz and conventional woofers/subs below. I have measurements demonstrating that the output from the horns does not decrease in volume with distance, whereas the woofers/subs decrease as expected (see the rough numbers below). This produces an upward tilting curve at MLP, which is the opposite of what nearly every other speaker type produces. The Magic Beans correction does correct the nearfield response to flat (indicating that I followed the procedure correctly), but it produces an upward tilting response at MLP. After I gave my feedback to joentell, he implemented a "directivity detect" feature in his app. There is a detailed analysis of my speakers on ASR under my system thread. They do not behave like a normal speaker, which is why the DSP solution is different to that of a normal speaker.
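To put a rough number on that distance behaviour, here is a back-of-the-envelope sketch. The distances are purely illustrative, not measurements of my speakers.

```python
# Back-of-the-envelope: why a point-source woofer loses level with distance
# while a highly directive horn can lose much less over the same distance.
import math

def spl_drop_point_source(d_near_m, d_far_m):
    """Level change (dB) from d_near to d_far for an ideal point source (inverse square law)."""
    return -20 * math.log10(d_far_m / d_near_m)

# Example: nearfield mic at 0.5 m vs MLP at 3.5 m (illustrative distances):
print(spl_drop_point_source(0.5, 3.5))   # about -16.9 dB for the woofer
# A horn that approximates a collimated, plane-wave-like source over part of its band
# loses far less than this, so the MLP response tilts upward relative to the nearfield.
```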
  3. Welllllllllllllllllll that is a bit of a can of worms. If you read what Toole says in his book, he says that good speakers should have two properties: (1) they measure flat under anechoic conditions, and (2) they have constant directivity. If you place such a speaker in a room and listen farfield, you will obtain a Harman-like curve (there is a good video by Erin on YouTube that explains why). Toole has said himself on another forum that equalizing a speaker to reproduce the Harman curve at the MLP is wrong, because an on-axis correction also affects the off-axis response, which will then produce reflections that are spectrally incorrect. There is no "choose your target curve based on your preference". Toole's motivation is to narrow the "circle of confusion": have the studios produce music mastered through standardized sound systems, played back in our homes using speakers designed to achieve certain standards of performance. The recordings have to be mastered on systems so that they are faithful to the original sound, and have to be played back on systems that reproduce the sound of the master faithfully and accurately. Only then can we have "accurate sound". However, in the real world, even studios cannot get something as basic as the frequency response correct. Genelec did a study using their GLM tool, which is a calibration tool for their speakers. They observed a wide variety of frequency responses in studios. And this is only for studios with Genelec speakers who bothered to pay extra for the GLM tool; in reality the variance is probably much worse than that Genelec study suggests. I would argue that this gives me license to adjust the frequency tilt as I please. For each recording, if necessary. So, like you, I have gone for a preference target. I am not an authority figure, I am merely an amateur hobbyist in an ocean of amateur hobbyists. I have no business arguing with Toole. Or Mitch, for that matter. BUT ... sometimes authority figures disagree, leaving us minnows confused. So I read what they say, try to understand their points of view, and make up my own mind. After all, the sign of an educated person is the ability to entertain contradictory points of view and weigh them up fairly. BTW, I recently came across a new method for generating a target curve that removes the "room transfer function", restoring your speaker to a flat anechoic response. You perform a nearfield MMM of your speakers, then an MMM at the MLP. The idea is that the MLP MMM captures the "speaker + room" response, while the nearfield MMM is the speaker only. If you subtract the nearfield MMM from the MLP MMM, you obtain the "room transfer function". If you set this as your target curve, the correction will bring the nearfield response to flat (a rough sketch of the arithmetic is below). I have tried this, and it works for frequencies above transition (i.e. 4x Schroeder, about 440 Hz in my room). If you want to learn more, google "Magic Beans room correction joe n tell". You will find some videos.
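Here is a rough sketch of the arithmetic as I understand it; it is not joentell's app, and it assumes both MMM magnitude responses have been exported on the same frequency grid (e.g. from REW).

```python
# Rough sketch of the "Magic Beans"-style arithmetic as I understand it (not the app itself).
# Assumes two MMM magnitude responses (dB) on the same frequency grid: nearfield and MLP.
import numpy as np

def room_transfer_function(nearfield_db, mlp_db):
    """Room transfer function (dB) = MLP response minus nearfield response."""
    return mlp_db - nearfield_db

def magic_beans_style_target(nearfield_db, mlp_db, level_offset_db=0.0):
    """Use the room transfer function itself as the target. The correction
    (target minus MLP measurement) then equals the negative of the nearfield
    deviation, i.e. it drives the *nearfield* response toward flat rather than
    forcing the MLP response to a preference curve."""
    return room_transfer_function(nearfield_db, mlp_db) + level_offset_db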
  4. I use Ravenna. I am sold on its benefits. For me, the main benefit is that I do not need to look for a 16-channel DAC - there are very few of those around! Instead, I can buy two 8-channel Ravenna DACs and a microphone interface. Ravenna ties all the equipment together and tells the PC "I am a device with 16 DAC channels, 4 microphone inputs, and 8 digital inputs" (or something like that). You can put together as many channels as you want. The problem with Ravenna (and also Dante and AVB) is that they are pro audio standards, and not easy for us amateur hobbyists to set up. I already find my RME intimidatingly difficult, let alone Ravenna, which adds the complexity of network audio and multiple modes into the mix. I keep telling myself that I am a home audio enthusiast. I am not running a broadcast studio, or routing audio in a stadium, or an airport, or any situation where network audio would be a massive advantage. But now that I am starting to run out of DAC channels, I appreciate the flexibility of Ravenna. Someone who is into immersive audio with more speakers than me would see even more of a benefit. BTW, aren't you the guy who blew up his speakers when trying out the Merging Anubis? ;) Not easy to set up, are they.
  5. Yes Mitch, I know that Acourate is not for everybody. I know someone who is putting together a 5.2 system and who asked me to help him correct it with DSP. He wanted to use Acourate. I strongly advised against it; my recommendation was Audiolense, plus sending you an email. I go through that time alignment procedure for my 8 channel system (2x active 3-way speakers + 2 subwoofers) and it takes me hours. So I was pretty impressed when a friend who uses Audiolense came over with his laptop. We spent 30 minutes explaining my system architecture to him, downloading drivers, performing channel checks, and doing all the usual futzing around before a single sweep could be run. But once he actually got to it, the whole thing was done within 15 minutes, from crossover generation to usable filters and a verification measurement (which had to be done in REW, because Audiolense can't do verification measurements?). I was amazed when I saw the step response - it was textbook perfect. Not that I can't get the same result, but despite my proficiency, it takes me a long time.
Many years ago, DSP correction via Acourate/Audiolense was much less well known and it was difficult to get help. This was even before your book came out. I posted a question about it in another forum, and a very kind member rang me from the USA to talk me through it. I have never forgotten his help, and I will remain grateful for as long as I stay in this hobby - which will be as long as I have intact hearing! I hope to do everything I can to help people see the benefits of DSP, in the same way that he helped me. I am not partial to one software solution over another; different packages have different advantages and disadvantages that might suit some folk more than others. Acourate has a very "Teutonic" approach, which is both good and bad, depending on whether you are the kind of person who enjoys doing your own car maintenance.
When I first bought your book I thought I was way out of my depth, but after 8 years and multiple re-readings, I have gone on to look up all the references you cited and come to my own conclusions. For example, you recommend measuring without the sofa. I asked myself why I should do that, when the listening sofa is always at that position. So I performed the experiment: measure with and without the sofa, correct with and without the sofa, and do the verification measurement with and without the sofa. There is a noticeable difference, both measurably and audibly. Conclusion: measuring and correcting without the sofa and then performing the verification measurement with the sofa in situ messes up the correction, but it actually sounds better.
Another example: Toole says that the Harman target is not a target, it is the result of putting a speaker that measures flat under anechoic conditions into a room, which naturally rolls off the higher frequencies. In your book you suggested choosing a target to preference. So I tried Toole's suggestion. I knew that the limitations of measurement mean that any frequencies below 425 Hz (the transition zone in my room, calculated from 4x Schroeder; the quick calculation below shows where such a figure comes from) would be meaningless, but that was OK because I was planning to use a different bass correction strategy anyway. I corrected the nearfield response to flat, then applied correction below Schroeder. To my surprise, verification measurements showed a rising treble response at the MLP instead of a falling one! After a lot of investigation, it turned out that the directivity of the horns was causing them to behave differently to the more omnidirectional woofers.
So what does that say about Toole? I think his advice may not apply to horns, although I am not brave enough to say that to him ;) My system, I do what I want, and I use my own target curve, as per the recommendation in your book. Anyway, to other readers of AS: DSP is a really worthwhile pursuit. I think that in 2024, every system should have DSP. I would go further than Chris and say that anybody who refuses to consider it is stuck in outdated thinking and misplaced priorities.
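For anyone wondering where figures like "4x Schroeder = 425-440 Hz" come from, here is a quick sketch using the standard Schroeder frequency approximation. The RT60 and room volume below are purely illustrative, not my room's values.

```python
# Quick check of where "4x Schroeder" lands, using the standard approximation
# f_S ~= 2000 * sqrt(RT60 / V). The room values below are illustrative only.
import math

def schroeder_frequency(rt60_s, volume_m3):
    """Schroeder frequency in Hz for a room with the given RT60 (s) and volume (m^3)."""
    return 2000 * math.sqrt(rt60_s / volume_m3)

f_s = schroeder_frequency(rt60_s=0.35, volume_m3=110)   # example values
print(f_s, 4 * f_s)   # ~113 Hz and ~451 Hz, i.e. a transition zone in the 400-450 Hz ballpark
```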
  6. I think that DSP is the third most important component in my system, coming only behind the room and the speakers. It is even more important than what type of amplifiers I use, or what DAC. This is because DSP can make a massive difference to a system's performance. Unlike you, I have chosen Acourate, which IMO offers some advantages over Audiolense but also some drawbacks. Acourate is not for everybody; it lacks the automation of Audiolense, and you are forced to make a lot of decisions where Audiolense does things automatically. Almost anything can be done with Acourate, you just have to figure out the workflow and which tools you are going to use. For example, you can independently adjust bands of phase of each subwoofer so that they do not cancel. You can create a Virtual Bass Array (VBA). And then there is my latest experiment, which uses a time delayed speaker pointed in the opposite direction to my main speakers (i.e. into the front wall) so that I get time delayed and attenuated reflections, which greatly enhance the perception of spaciousness. There are special crossover types, like the Horbach-Keele crossover, which allows you to phase steer the audio beam if you have an MTM speaker.
I have seen Audiolense in action and I am very impressed. For 90% of people who need DSP, Audiolense will do everything you need, and do it quickly and easily. An in-depth understanding of DSP is not needed. But if you are a DSP nerd like me, and you like to play, then Acourate is a better choice. It does come at a massive cost though - the learning curve is substantial, the workflows are unnecessarily inconvenient (e.g. there is a limit of 6 curves that can be loaded at a time), and the options you are presented with are a bit opaque. There is very little automation in Acourate, and for something like time alignment (which is done automatically in Audiolense), Acourate makes you go through a manual process of measuring each driver independently and looking for peaks in the graph to determine the time delay (a sketch of that step follows this post). I have used Acourate for 8 years now, and I am reasonably proficient at it. There is a lot to be said for the automation offered by software like Audiolense. It reduces the potential for human error, which is the number 1 reason why people get unsatisfactory correction with DSP.
If you choose the manual method, then you have a lot of learning to do. I have had to learn about signal processing theory (e.g. the difference between minimum phase, linear phase, and excess phase, FIR vs IIR filters, etc.), room acoustics, psychoacoustics, speaker crossover design, and much more. I consider this time well spent. After all, if you are an audio nerd you should learn about these things. Over the years, I have learnt new ways to measure, developed new philosophies on room correction and target curves, and come up with interesting experiments (most of my experiments are hare-brained and do not work!). It has been a really fun journey, and I am still on it.
The other part of my journey is having fun with VSTs. You briefly touched upon VSTs in your article. I have played extensively with them. Nearly all of them offer a free trial period, or are outright free. Most VSTs are professional tools intended for use in a DAW, but they will work in JRiver. I regularly use uBACCH, a Pultec equalizer (which gives you the famous "Pultec punch"), and a VST that adds harmonic distortion, allowing me to mimic the warmth of a tube amp if I want to.
I can say without hesitation that DSP has completely transformed my system. The clarity of this system is unequalled. The dynamics are amazing; all the wavefronts from all the drivers arrive simultaneously with millimetre precision and it is extraordinarily lifelike. uBACCH throws the soundstage really deep, way deeper than the front wall, and it can extend to slightly behind you ... from only two front speakers. The effect has to be heard to be believed, and it is simply not possible in any system without DSP. At the same time, the system is chameleon-like; I can totally transform its sound at the push of a few buttons. About the only aspect of the system I have no control over is the set of physical limitations baked into the design of the speaker, e.g. the directivity pattern.
My advice to anybody who wants to use DSP or take it up as a hobby is this: get help, but learn to swim. Yes, you can hire Mitch and you will quickly get a set of filters that I am sure will sound great and make you very happy. Your next step is to try to beat him ;) I make new filters for every slight change in my system, e.g. if I change the furniture or move the sofa. I recently decided to check the directivity of my horns and compare it to the conventional woofers, then decided to redo the crossover with a shallower slope, allowing a more gradual blend between woofer and horn so that the directivity does not change so suddenly. Hiring a service like Mitch's should be seen as a starting point, not the end point. Knowing how to do this myself has saved me a lot of money.
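For the curious, here is a simple sketch of the manual time-alignment step mentioned above: find the impulse peak of each driver's measurement and convert the offsets into delays. It is my own illustration, not Acourate's actual procedure, and it assumes you have per-driver impulse responses (e.g. exported from REW sweeps) at a common sample rate.

```python
# Sketch of the manual time-alignment step: locate each driver's impulse peak
# and compute the delay needed so all wavefronts arrive together.
import numpy as np

def peak_sample(impulse_response):
    """Index of the largest absolute value, a simple proxy for the arrival time."""
    return int(np.argmax(np.abs(impulse_response)))

def relative_delays_ms(impulse_responses, fs=48000):
    """Delay of each driver relative to the latest-arriving one, in milliseconds.
    Delay the earlier drivers by these amounts so the arrivals line up."""
    peaks = np.array([peak_sample(ir) for ir in impulse_responses], dtype=float)
    return (peaks.max() - peaks) / fs * 1000.0
```

Peak-picking on the raw impulse is a crude proxy for arrival time; cross-correlating each driver against a reference, or inspecting the step response, is more robust, but the idea is the same.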
  7. Does the 2024 version of Dirac Live still use Mixed Phase filters?
  8. OK! So the 32 bit version of JRiver worked. The Ambio one VST plugin sounds really strange to my ears though. The tonality is noticeably different; it seems to depress the midrange and make my system sound a bit like a karaoke machine. Although the soundstage is wider, soloists seem to be stretched out across the space. My speakers are set up for standard equilateral-triangle listening. I have NOT yet repositioned them to the 20 deg angle recommended by Ambiophonics. Doing so is a major undertaking because they are extremely heavy monsters. I will try it later and report my results.
  9. Thank you for your response. I downloaded JBridge and got it to convert Ambio from 32 bit to 64 bit. I went into JRiver to install it, but it promptly crashed. Every attempt I made to get it to work crashed JRiver. I was unable to access JRiver's DSP studio to remove the plugin, so I uninstalled JBridge. That allowed me to get back into JRiver and remove the now orphaned Ambio plugin. Since it's so badly behaved I did not want to try to purchase it. So I looked to see if there were any other 32 bit to 64 bit VST converters. I found this page, but some of them were either unavailable or required purchase without trial. I emailed Soundpimp yesterday requesting a trial of their software, but I have not heard back from them so far. We'll see what happens. Neutron Media Player appears to be Android only. I am using Windows 11. I will try your other suggestions - 32 bit JRiver, and do some googling.
  10. Sorry for the gravedig. I am keen to try Ambiophonics on my PC running JRiver, but I have hit a few snags. First, the paid version - AmbiophonicDSP VST, available here from electro-music - seems to have some issues with buyer satisfaction. Namely, their forum has multiple people complaining that they paid for the plugin and did not receive it. Apparently, the resolution process is to complain on the forum, after which an admin will ring the guy and remind him to fulfil his obligations. Also, I attempted to purchase the plugin (before I read their forums), and their shopping cart does not work. So, even if this were possible to purchase, there is a chance your purchase may not be fulfilled. Next, the free version - Ambio - is almost impossible to find for download. I presume the author of the plugin is Weldroid - if you go to that page, you will find two download links. Neither of them works. Nevertheless, after a bit of googling I was able to download Ambio from here. That was when I encountered my next issue: JRiver (I am using the current version, MC30) doesn't recognize it at all. Reading the Weldroid blog page, it appears that the Ambio plugin is 32 bit and may not work well with 64 bit VST hosts. There are VST wrappers for Foobar2000 and WinAmp, but not for JRiver. The links given for the Foobar and WinAmp wrappers do not work either. One way around it would be to route output through software like Audiomulch, but that costs USD $189. I downloaded the trial version of Audiomulch, which was able to load the Ambio VST plugin, but I was unable to send live audio to it from JRiver. So after all that, the end result is: unable to use Ambio. Is anybody still using it in 2023? How did you get it to work?
  11. That is a great article Jud. In fact, I was reading an article on why music producers should choose Tidal over Spotify. It included a chart showing how much an artist makes on each platform. Moral of the story: if you want more money in your favourite artist's pockets, choose Tidal.
  12. Keith_W

    HQ Player

    Miska, at the moment I am using an older version of HQP, and I am planning to revamp my entire setup, which may include purchasing another license for HQP if it now offers a feature that is very important to me: the ability to accept digital input. I use my older version of HQP mostly as a convolution engine - it sends 8 channels of digital out to my 8 channel DAC, which then runs my fully active system. Essentially, HQP is my crossover, and my system is unable to run without some kind of convolution engine (a rough sketch of what such an engine does is below). Because this older version of HQP cannot accept any digital input, I am unable to do things like stream audio through HQP, listen to Youtube videos, send REW test signals through it, etc. Or even use any front end I choose (like Kodi) to send audio through HQP for processing. Right now my concern is whether HQP is able to stream Idagio, given the lack of a digital input on my version. Does your current version support that? I noticed you had a Facebook post where you streamed Apple Music through an iPad into an AES/EBU input on your PC and were able to use HQP as your convolution engine. I had to purchase Acourate Convolver to do the things that I wanted, but Acourate Convolver does not do DSD or upsampling. BTW, you would spend less time answering these questions if you had a FAQ or wiki on your page.
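For readers wondering what a convolution engine is actually doing in a setup like this, here is a minimal conceptual sketch: each output channel is the input convolved with its own FIR filter, which is how FIR crossovers are realized. This is not HQPlayer's internals; the filters and channel mapping here are hypothetical.

```python
# Minimal conceptual sketch of a multichannel convolution engine used as a crossover:
# each output channel = input convolved with that channel's FIR filter.
# Run this once per input channel and route each result to the matching driver.
import numpy as np
from scipy.signal import fftconvolve

def convolve_outputs(input_channel, fir_filters):
    """Return one output channel per FIR filter (e.g. 8 filters -> 8 DAC channels)."""
    return [fftconvolve(input_channel, h, mode="full")[:len(input_channel)]
            for h in fir_filters]
```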
  13. Hi JeroenD. I hope you have solved your issue by now. Basically, the IACC compares the output of the left speaker to the right. If the output that reaches the microphone is 100% identical between the two channels, you get an IACC of 100%. Off the top of my head, here are a few reasons why your IACC might be off:
- microphone not centred;
- a problem with your speakers or electronics (check this first!) - e.g. I had an output valve on my amp that was on the way out, which I did not realize until I checked comprehensively;
- speakers placed asymmetrically in the room;
- strange room shape - one side has walls and the other has glass, bookcases, an opening, etc.;
- a problem with your correction filter;
- you moved the microphone between taking the measurements for the correction filter and taking the IACC measurement (if you repeat the measurement a few times with the microphone centred at various positions front to back, you will find the IACC varies).
There are probably a few more. But don't get too hung up about it; if it sounds right, then ignore it.
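For anyone curious what the number actually represents, here is a simplified sketch of the textbook IACC calculation: the maximum normalized cross-correlation between the left and right impulse responses over a small lag window. It is not necessarily how Acourate computes its figure.

```python
# Simplified sketch of the textbook IACC: maximum of the normalized cross-correlation
# between the left and right impulse responses over a small lag window (commonly +/- 1 ms).
import numpy as np

def iacc(ir_left, ir_right, fs=48000, max_lag_ms=1.0):
    n = min(len(ir_left), len(ir_right))
    l, r = ir_left[:n], ir_right[:n]
    denom = np.sqrt(np.sum(l**2) * np.sum(r**2))
    max_lag = int(fs * max_lag_ms / 1000)
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            num = np.sum(l[lag:] * r[:n - lag])
        else:
            num = np.sum(l[:n + lag] * r[-lag:])
        best = max(best, abs(num) / denom)
    return best   # 1.0 (100%) means the two channels are identical at the mic
```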
  14. Sorry for the very late reply. I don't look at this forum very much these days. The answer to your question: you will need a DAC channel for every driver channel you drive. For example, if you have a pair of speakers with four drivers each, you will need 8 channels of DAC (i.e. four 2-channel DACs). Having said that, it is a very bad idea to use four 2-channel DACs, because all their clocks need to be synchronized / slaved to a master clock. There are not many DACs which accept an external clock, and even if you find them, the cost might be prohibitive. It is easier and cheaper to use a single 8-channel DAC in the first place.