
    MQA: A Review of controversies, concerns, and cautions

    Editor's Note 1: MQA Ltd. was sent a copy of this article several days prior to the scheduled publication date. The company requested a phone conversation, which took place earlier this week. MQA was encouraged to write a response for inclusion with the article below, but it respectfully declined to submit a formal response.

     

    Editor's Note 2: The author of this article is writing under a pseudonym. While he is unknown to the readers, his identity has been verified by Audiophile Style. He has no vested interest in the audio business, other than being a consumer of music.

     

    Editor's Note 3: The technical assertions made in this article have been thoroughly checked by independent engineers, both in and out of the audio industry. To the best of our knowledge everything technical in this article is factually correct and may be duplicated at any time by anyone with the requisite skills. 

     

    - Chris Connaker


    MQA: A Review of controversies, concerns, and cautions.
    February 25, 2018
    Archimago, for Computer Audiophile


    “Controversy is only dreaded by the advocates of error.” 


    Benjamin Rush


    I want to thank Chris for reaching out and giving me the opportunity to post an article on Computer Audiophile about MQA. As you are perhaps aware, over the last 3 years I have been posting various findings and impressions about MQA on my blog, Archimago’s Musings. Furthermore, I appreciate Chris’ willingness to allow me to post this under my pseudonym @Archimago. I know there are perceived issues with anonymous postings; I commented on this in the forum here if you want to read more about my rationale.


    If you have read my writings on MQA, you will know that I have been a critic and have expressed on a number of occasions some problems I see with this data format and overall “system” of audio playback. However, I believe I have been reasonably diplomatic, using proper etiquette while expressing concerns and criticisms. In my opinion, we can certainly examine the merits and failings of a data format without getting hysterical or personal. I hope you will find the tone of this article reasonable.


    My intent for this article is to provide a relatively broad but detailed overview. When appropriate, I will include links in the body of this article and footnotes below for further reading. I will embed a few images for reference, realizing that charts and graphs can also be found elsewhere, perhaps in more detail. With the power of Internet search engines at our fingertips, numerous subjective opinions and results of objective tests are also readily found elsewhere. The core of what I’m interested in discussing in this essay is the simple question: “Why has controversy surrounded MQA to this extent?” By the end of this, I hope most readers will be essentially “caught up” with the discussions and debates surrounding MQA among audiophiles. As always, you ultimately decide whether you think MQA is worthwhile.


    While the question above may be easy to ask, the answer is multifaceted and more difficult to express thoroughly given the complexity and nuances of a system with multiple parts incorporating a number of ideas. Considering the volume of back-and-forth arguing found here and elsewhere, it appears that MQA has touched a nerve at the core of the audiophile hobby.


    Looking at the extent and the expense audiophiles go through to achieve high quality playback, we can say that the audiophile pursuit is one of trying to achieve an ideal; we even see the phrase “perfectionist audio” employed to describe this hobby. “We” seek the highest fidelity sound quality in audio playback and typically approach it with great passion (1). The company behind MQA must have realized from the start that it “threw its hat into the ring” to be debated and dissected when the format was marketed to audiophiles through the mainstream audiophile press (2). Since then, at least every few months, MQA has featured prominently in the audiophile magazines, with articles claiming significant audible benefits (3), and over the last few years it has been regularly (if not incessantly) mentioned in digital hardware reviews as a desirable new feature (4).


    Although there are likely other factors involved, let us focus on three major areas of contention:

    1. MQA positions itself, at a foundational level, as a viable and “desirable” format.
    2. MQA claims to sound “better” than what we currently have.
    3. MQA over-reaches the role of a traditional data format and aims to be a “philosophy”, raising DRM concerns.

     

    Let us explore each of these in some detail.


    1. A foundation for a “desirable” format? Was there a need?
     

    As we near the end of the second decade of the 21st Century, most of us are familiar with multiple media distribution formats, whether in the audio or video we consume. Considering just the digital audio world: in the 1980’s we were introduced to the CD; by the mid-1990’s audiophiles had heard about HDCD, if not owned a decoding device; and by the turn of the century SACD and DVD-A battled it out for high resolution dominance. While more SACD titles have been released, neither format truly captured the public’s fancy. By the end of the 2000’s, with the Blu-ray format, we saw “audio only” Blu-ray discs. Yet again, as with SACD and DVD-A, the physical high resolution formats have barely made a dent in the music marketplace.


    While physical media for digital audio floundered over the last decade, with general-purpose computers, the consumer has learned to be “agnostic” about how the data itself is packaged. Since the early 2000’s, “computer audio” has seen massive growth among the public and among audiophiles. CDs can be “ripped” easily and perfectly, commodity computer hardware can be assembled to build multi-terabyte media servers, generations of DACs have been manufactured to allow high-resolution playback, and ubiquitous network technology has allowed streaming through the home and across the Internet. Whether the music data is encoded in MP3, AAC, FLAC, WAV, AIFF, DSF, DFF, etc. matters not because with the right software, any format can be generically decoded freely so long as the encoding is open and accessible.


    With this wealth of audio media encoded in an open fashion, numerous software playback options exist, including several free or “open source” ones. One also has the freedom to choose from a multitude of hardware (not just computers and DACs, but cell phones, digital audio players, audio streamers, home-theater receivers). For those willing to invest some time, one can go even deeper and explore sophisticated fine-tuning of playback, with DSP techniques for example. The market has also provided entrepreneurs with opportunities to create turnkey server and playback systems catering to different needs and at various price ranges. Into this world of freedom and innovation comes MQA, aiming to disrupt the status quo as a new “format” to succeed over others; one that the company insists is capable of ideal universal utility, the one format that music labels can use to “guarantee delivery of the Studio sound”.


    Let us take a step back and consider… Prior to the introduction of MQA, was there a collective desire among audiophiles for yet another data format to fulfill service gaps? Were there many audiophiles or music lovers requesting that their DACs have an “authentication” indicator? Did many people complain about major format incompatibilities (other than maybe iTunes with FLAC)? Were there complaints either among the consumers or in professional circles that high-resolution PCM and DSD sounded suboptimal? I think many would answer “no” to each of those questions. Many times, I have seen MQA described as being “a solution in search of a problem”.


    The company Meridian initially targeted MQA as a data format for “high resolution” streaming (5) to capitalize on the growth of streaming services (were there even many consumers asking for high resolution audio streaming over the Internet?). The claim was that MQA would reduce the data rate to something more manageable than native high resolution PCM while at the same time delivering better-than-CD quality sound (6). Certainly, this is a worthwhile goal as an engineering exercise, but there are many ways of achieving it without creating a new proprietary data format. For example, we can already achieve bitrates similar to MQA with higher-than-CD quality using “free” and open file formats like FLAC. How about lossless compressed 18-bit 96kHz FLAC as described by Miska? In fact, at the same data bitrate as MQA, would most audiophiles streaming music not be satisfied with simply lossless compressed 24/48?
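    To put rough numbers on the streaming claim, here is a back-of-the-envelope sketch. The FLAC compression ratio below is an assumption (lossless compression of music typically lands somewhere around 50-70% of raw size), not a measured value:

```python
# Back-of-the-envelope stream bitrates for uncompressed stereo PCM.
# The FLAC ratio below is an assumed typical value, not a measurement.

def pcm_bitrate(bits: int, sample_rate: int, channels: int = 2) -> int:
    """Raw PCM bitrate in bits per second."""
    return bits * sample_rate * channels

hires_24_192 = pcm_bitrate(24, 192000)   # raw 24/192 stereo
mqa_carrier  = pcm_bitrate(24, 48000)    # raw 24/48 stereo (MQA ships in a 24/48 container)
flac_ratio   = 0.6                       # assumed: FLAC often reaches ~60% of raw size

print(f"24/192 raw:        {hires_24_192 / 1e6:.3f} Mbps")
print(f"24/48 raw:         {mqa_carrier / 1e6:.3f} Mbps")
print(f"24/48 FLAC (est.): {mqa_carrier * flac_ratio / 1e6:.3f} Mbps")
```

    At roughly 1.4 Mbps for an assumed 60% ratio, plain open-format 24/48 FLAC already sits in the same bitrate ballpark that MQA targets, which is the point made above.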


    Though streaming might be the prime target for MQA, aspirations appear to be even broader (we will talk more about this in part 3 below). Music download sites have been willing to sell these files (7) and over the years, we have even seen claims of MQA-encoded CDs being sonically beneficial (8).


    While the company has made the MQA data format “compatible” with standard PCM playback (9), they claim that when properly decoded, whether through computer software or using a compatible DAC with the appropriate firmware, the sound quality will be at the level of the “original” high resolution master source (which could have been at 24/192 or even higher like DXD 24/352.8). This leads us to a second major point of contention…


    2. Is MQA sonically “better”?


    Among the multitude of music lovers out there, audiophile hobbyists are those who most desire progress in sonic fidelity. If there are benefits to be gained, “we” will typically be the ones most interested in incessantly and passionately exploring the potentials and possibilities. Perhaps this was the rationale for why MQA was so strongly promoted to the audiophile press, who then proceeded to “push” the product among hi-fi consumers. However, we must remember that the audiophile hobby itself is deeply divided around epistemic authority (i.e. how do we actually figure out what is truly valid improvement and progress in sound quality?). There have even been papers written about the claims to knowledge and the tensions that exist between the “objectivists” and “subjectivists” (10).


    Within this climate of epistemic tension, MQA heightens the strain: it challenges the established sampling theorem with claims that it “goes beyond Nyquist/Shannon”, it provides no objective evidence that it surpasses current capabilities, and, even worse, independent objective evaluations have demonstrated that MQA appears to degrade quality, as we will soon discuss. Furthermore, the majority of strong positive testimonies in support of MQA seem to come from those who have a relationship with the Industry (either personally or out of mutual financial interests), those more committed to subjective-only assessments, or some combination of both.


    From the start, MQA insisted that their techniques achieve “studio sound”. They also claimed that it’s “lossless” yet “compatible” with current playback systems. The claim is immediately hard to accept given the implications of the data reduction. How is it that something can be truly high resolution lossless, be backward compatible, and do it with even fewer bits than already-efficient compression algorithms?


    Though not necessarily the exact inner workings of today’s MQA encoding system, the patent diagram from December 2013 gives us a valuable glimpse into the nature of the scheme:

     

    Figure-7A.png


    For all to see right on that diagram is the fact that this system gives preference to a certain number of most-significant “baseband” bits (the top 13 bits or so in the block diagram on the right) in order to achieve the playback compatibility. Then it incorporates a lossy component within the encoded lower bits (“sub-band” bits). Though MQA never admitted to it and audiophile magazines never acknowledged this obvious fact until recently, the system is no doubt “partially lossy” (11).
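    The bit-split at the heart of that diagram can be illustrated with a toy example. Note that the 13-bit baseband figure is read off the patent block diagram; the split in shipping encoders may differ, and real MQA additionally applies lossy coding to the sub-band bits rather than carrying them verbatim as done here:

```python
# Toy illustration of a baseband / sub-band split of a 24-bit sample.
# Assumes a 13-bit baseband per the patent diagram; real MQA further
# compresses the sub-band lossily, which this sketch does not model.

BASEBAND_BITS = 13
SUBBAND_BITS = 24 - BASEBAND_BITS  # the 11 lower bits

sample = 0x5A5A5A                               # an arbitrary 24-bit PCM word

baseband = sample >> SUBBAND_BITS               # most-significant bits: legacy-compatible
subband = sample & ((1 << SUBBAND_BITS) - 1)    # least-significant bits: hidden payload

# A decoder that kept the sub-band verbatim could reassemble exactly...
assert (baseband << SUBBAND_BITS) | subband == sample

# ...but a legacy player effectively sees only the baseband: the lower bits are gone.
legacy_view = baseband << SUBBAND_BITS
print(f"original: {sample:06x}, legacy view: {legacy_view:06x}")
```

    Once any lossy coding touches those sub-band bits, exact reconstruction of the original 24-bit word is no longer possible, which is why “partially lossy” is the accurate description.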


    Without using a decoder, digital subtraction test comparisons between MQA files and original PCM sources do seem to achieve about 13-bits average correlation null depth, which probably means something like 14 or 15-bits of audio quality if we throw in another bit or two for noise-shaped dithering. In early 2017, after the release of software MQA decoding using Tidal, I was able to compare songs that appeared to be from the same master and demonstrate correlation down to ~14-bits in one track, with portions down to ~17-bits once “unfolded” (12).
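    The style of measurement behind these null-depth numbers can be sketched as follows. Here, 16-bit quantization stands in for the losses introduced by encoding; with real files, time-aligning and gain-matching the two versions is the hard part, and is skipped in this sketch:

```python
# Sketch of a digital subtraction ("null") test: subtract two aligned
# versions of a signal and express the residual as effective bits.
# 16-bit quantization stands in for encoder losses; real comparisons
# must first time-align and gain-match the two files.
import numpy as np

rng = np.random.default_rng(0)
original = rng.uniform(-1.0, 1.0, 1_000_000)     # stand-in for the source
degraded = np.round(original * 32767) / 32767    # 16-bit quantized copy

residual = original - degraded
null_db = 20 * np.log10(np.sqrt(np.mean(original ** 2)) /
                        np.sqrt(np.mean(residual ** 2)))
effective_bits = null_db / 6.02                  # rough rule: ~6.02 dB per bit

print(f"null depth: {null_db:.1f} dB (~{effective_bits:.1f} bits)")
```

    A deeper null (larger dB figure) means the two versions are more alike; a ~13-bit null between MQA files and their sources is what the undecoded comparisons above reported.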


    These results are certainly in line with comments made in MQA interviews that potential bit-depth accuracy has been reduced to less than 24-bits (13) as implied by the block diagram. The exact amount of resolution varies depending on how the music was encoded.


    Beyond bit depth, another area of contention when assessing MQA’s sound quality comes from the company’s claim of achieving temporal accuracy; the famed ability to perform “de-blurring” on the music. There are many claims around this including using “neuroscience” as the rationale with the often-quoted value of 5µs threshold of human temporal auditory resolution (14). Whether it’s within articles online or in MQA’s marketing material (15), it is typically suggested (but not always) that filtering is a significant part of the technique used to improve time-domain performance.


    Over the years, we have come to discover the nature of the MQA filters themselves thanks to some fantastic work by Måns Rullgård and his exploration around deciphering the “rendering” stage of MQA. Using his insights and software, I posted the various MQA filter impulse responses with the AudioQuest Dragonfly in July 2017.


    For this summary article, let us look at the impulse response of the “prototype” MQA digital filter that is applied commonly during decoding and upsampling, found among a number of tested MQA DACs:

     

    MQA-Impulse-white-background.png



    There are some problems with this filter from the perspective of high fidelity playback; I’ll just show you a few issues here. First, it is extremely weak and does not suppress imaging (or “up aliasing” as I’ve also seen it called) well. We can actually show this effect quite prominently when looking at/listening to MQA-encoded music that began life at 44.1 or 48kHz. Very obvious examples are pop recordings such as the Bruno Mars album below, originally at a 44.1kHz sampling rate, fed into the MQA encoder and then unfolded to 88.2kHz within the Tidal software. (I first became aware of this issue when I came across this YouTube video a while back with Beyoncé’s Lemonade.)



    Bruno-Mars-Imaging.png



    Notice that the actual music is filtered off below 22.05kHz (Nyquist frequency of 44.1kHz sampling rate), there is a gap present due to the filtering, then the very obvious imaging artifact is easily seen in the top octave above 22.05kHz (like an attenuated mirror image). These frequencies should not be there and were not part of the “studio sound”. Yes, we are looking at ultrasonic effects here. However, if MQA is being marketed to audiophiles pursuing “perfectionist audio”, why is this obvious distortion acceptable?
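    Where these images land follows directly from sampling theory: content at frequency f in material sampled at rate fs produces images at k·fs ± f, and a weak reconstruction filter lets the first one (fs − f) leak through. A small illustrative sketch:

```python
# Images of a component at frequency f, sampled at rate fs, appear at
# k*fs +/- f. A reconstruction filter is supposed to remove everything
# above fs/2; a weak filter lets the first image (fs - f) leak through.

def first_image(f_hz: float, fs_hz: float) -> float:
    """Frequency of the first image that a weak filter fails to suppress."""
    return fs_hz - f_hz

fs = 44100.0
for f in (18000.0, 20000.0, 21000.0):
    print(f"{f / 1000:.0f} kHz content -> image at {first_image(f, fs) / 1000:.1f} kHz")
# e.g. 20 kHz content in 44.1 kHz material images at 24.1 kHz, which is
# the mirrored energy visible above 22.05 kHz in the spectrum shown.
```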


    Secondly, the minimum phase filter design introduces temporal distortion by creating phase anomalies, especially at higher frequencies, as discussed recently on my blog. As a summary, here are the group delay graphs using the different filters available on the Mytek Brooklyn DAC, one of which is the MQA filter:


    Mytek-Group-Delay.jpeg



    Clearly, the MQA and minimum phase settings are not flat lines and they introduce varying amounts of group delay on playback especially with the higher frequencies. What this means is that given the same starting time, an 18kHz frequency component of the sound would actually be delayed by about 40µs compared to a 100Hz tone using that MQA filter on a 44.1kHz sample. Sure, we are only talking about microsecond differences, which would be significantly reduced with 88.2/96kHz material, but the point is that this was supposed to be a system that improved time-domain characteristics! If indeed the system is “de-blurring”, presumably they have some way to deal with the group delay introduced during playback. As far as I am aware, there has been no technical demonstration to show evidence of actual “de-blurring”.
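    For a sense of scale, converting the ~40µs figure into samples and waveform cycles is simple arithmetic (rough numbers read off the graph, not a measurement of any particular DAC):

```python
# Convert the ~40 us group delay near 18 kHz (44.1 kHz playback) into
# samples and into fractions of the tone's own period. Rough arithmetic.

delay_s = 40e-6        # approximate group delay read off the graph
fs = 44100.0           # playback sample rate
f_tone = 18000.0       # frequency where the delay was observed

delay_samples = delay_s * fs       # delay expressed in sample periods
delay_cycles = delay_s * f_tone    # delay as a fraction of one 18 kHz cycle

print(f"{delay_s * 1e6:.0f} us = {delay_samples:.2f} samples at {fs / 1e3:.1f} kHz")
print(f"          = {delay_cycles:.2f} cycles of an {f_tone / 1e3:.0f} kHz tone")
```

    So the high-frequency content arrives nearly two sample periods late relative to the bass, from a system marketed on time-domain accuracy.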


    Thirdly, tests have demonstrated that MQA’s filtering (at least with ESS-based DACs like the Mytek Brooklyn and AudioQuest Dragonfly) seems to have a higher tendency to suffer from intersample overloading. Here is an example, again using the Mytek Brooklyn DAC:


    Mytek-Filters.jpeg



    These are overlaid graphs of a 20kHz 0dBFS sine wave, wideband white noise, and the noise floor recorded off the Mytek Brooklyn DAC using 44.1kHz signals with the different filter settings. This kind of graph is often shown in Stereophile reviews as a way to characterize the effects of reconstruction filters (see description of the “Reis Test”).


    Notice the distortion introduced by the 0dBFS 20kHz tone in the form of multiple distortion peaks with the MQA filter. These are obviously artifacts of the reconstruction filter, likely created by overloading from intersample peaks. None of the other filter settings do this. This kind of behavior may be significant with modern productions where the average volume is loud and dynamic compression is heavy. Again, it raises the question of whether MQA represents a step forward in high-fidelity reproduction, and whether a filter design such as this should be implemented broadly across numerous devices when the other options here clearly appear to be better.
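    Intersample overs themselves are easy to reproduce synthetically. A classic worst case is a full-scale sine at fs/4 with a 45° phase offset: every stored sample sits exactly at 0dBFS, yet the band-limited waveform between the samples peaks about 3dB higher, which is what can overload a reconstruction filter with no headroom. A sketch using ideal FFT-based upsampling:

```python
# Demonstrate intersample peaks: a 0 dBFS sine at fs/4 with a 45-degree
# phase offset has every sample at full scale, but the continuous
# waveform peaks ~3 dB (1.414x) above 0 dBFS between the samples.
import numpy as np

N = 64                                     # number of samples (multiple of 4)
n = np.arange(N)
x = np.sin(np.pi * n / 2 + np.pi / 4)      # sine at fs/4, 45-degree phase
x /= np.max(np.abs(x))                     # normalize samples to 0 dBFS

# Ideal 4x upsampling by zero-padding the spectrum (approximates the
# band-limited waveform between the original sample instants).
X = np.fft.rfft(x)
X_pad = np.concatenate([X, np.zeros(2 * N + 1 - len(X))])
y = np.fft.irfft(X_pad, n=4 * N) * 4       # *4 compensates irfft scaling

peak_db = 20 * np.log10(np.max(np.abs(y)))
print(f"sample peak: 0.00 dBFS, intersample peak: +{peak_db:.2f} dBFS")
```

    A fixed-point filter with no headroom above 0dBFS must clip or wrap such peaks, producing exactly the kind of spurious tones seen in the MQA trace above.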

     


    Finally, we can see the effect these distortions make with the Mytek Brooklyn DAC culminating in worse total harmonic distortion and noise (THD+N) than the other filter options for this device:


     

    Brooklyn-Filters-THD-N.jpeg



    Notice that this gradually increasing THD+N, starting below 10kHz and rising to 1% at 20kHz, is consistent with other MQA devices such as the Meridian Explorer2:

    Explorer-2-THD-N.jpeg



    Unfortunately, on the Explorer2, one does not have the choice to switch to another filter.


    It would be unfortunate if MQA ends up being the “default” or only filter for a device, given its relatively poor performance when playing standard PCM. Arguably, it would be unfair to compare standard PCM vs. MQA using this filter, as good PCM playback would generally use settings with fewer distortions on a good DAC. In my opinion, DAC manufacturers that incorporate MQA need to make sure that MQA filters are not active by default; the MQA filter should be easy to turn off and should engage only during actual MQA decoding/rendering.


    Realize that others have raised these concerns as well. To name just a few, Jim Lesurf took note of the “lazy filter shape” in June 2016 along with exploration of aliasing components. Bit-depth reduction and filter anomalies were identified in Xivero’s detailed “Hypothesis Paper to support a deeper Technical Analysis of MQA” (early 2017) where they went even deeper into the patent texts, discussed time and frequency-domain equivalency, and explored alternative compression schemes without the need for a decoder like MQA. Doug Schneider in SoundStage! reported on these anomalies; as far as I am aware, this is the only audiophile publication that has discussed and acknowledged the existence of these “false frequencies” in a timely manner.


    While there are a few other findings I can point out, I will leave the reader to explore these other issues elsewhere. Suffice it to say, there is no clear objective reason to think that taking a high resolution “studio master” file, running it through the MQA encoder which drops the actual bit-depth, and then decoding and upsampling using their weak reconstruction filters would result in higher fidelity playback through one’s DAC. And if there are objective explanations for how the sound can be made “better”, in my opinion, MQA is clearly not doing a convincing job explaining the technology despite their attempts with debatable charts, graphs, and impulse responses.


    It would be fair at this point to ask: “Is it possible that subjectively MQA is clearly better – like what the audiophile press wrote about when they heard MQA?”


    Unfortunately, apart from what seem to be limited, mostly closed listening sessions, I don’t know if MQA Ltd. has been “brave” enough to demonstrate A/B comparisons to broad audiences. In fact, it was rather disturbing that through 2016, the MQA audio show demos consisted simply of MQA files being played, without even comparisons against standard CD-resolution material (16). They even tried to explain this away in interviews. This in itself might not be as frustrating were it not for the magnitude of the almost euphoric claims from reviewers and magazine writers, insisting on no less than world-changing “paradigm shifts” that would benefit the consumer.


    In mid-to-late 2017, I decided to try an “Internet Blind Test” using actual MQA Core-decoded audio (captured from the Audirvana+ output) with simulated MQA filtering, based on some demo tracks from 2L, so that listeners able to play 24/192 high resolution files could try to experience the difference MQA could make. Remember, “de-blurring” could have been demonstrated without special encoding or “origami” folding if MQA had released 24/192 files with the “effect” baked in. With 83 respondents worldwide, there was no significant preference for the MQA-decoded version compared to an equivalent 24/96 high-resolution sample (17). On the one hand, this is good, as it implies a level of transparency. However, this was certainly nothing like the claims of “obvious” audible differences expressed in the press, where MQA was said to be “better than Hi-Res” due to the de-blurring and such!
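    Whether 83 respondents constitute a “significant preference” is a standard binomial question. Here is a sketch of that computation with hypothetical preference counts (the actual splits are in the linked results and are not reproduced here):

```python
# Two-sided check of whether k preferences out of n listeners differ
# from the 50/50 split expected if the two versions are
# indistinguishable. The counts below are hypothetical, for illustration.
from math import comb

def binomial_two_sided_p(k: int, n: int) -> float:
    """Two-sided p-value for k successes in n trials against p = 0.5."""
    tail = min(k, n - k)
    p = sum(comb(n, i) for i in range(tail + 1)) / 2 ** n
    return min(1.0, 2 * p)

n = 83
for k in (42, 50, 58):                     # hypothetical preference counts
    print(f"{k}/{n} prefer one version: p = {binomial_two_sided_p(k, n):.4f}")
```

    With 83 listeners, only a split well away from the ~41/42 midpoint reaches conventional significance; a near-even split is exactly what “no significant preference” means.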


    Over the years, others have documented subjective listening tests in good detail (18). At this point, we are awaiting results from McGill University, which announced in October 2017 that it was running listening comparisons between MQA and un-encoded audio. Let’s see if they find clear differences.


    3. A broad “philosophy” – cui bono?


    Finally, we are confronted by MQA’s claims that they are promoting not just a “format”, but also a “philosophy” (19). They see it as a philosophy of breaking free from adjudicating quality by traditional objective parameters like bit-depth and sample rate; the position that file size and bitrate do not correlate with sonic quality. Based on this view, MQA has determined that everything captured in the studio and everything humans can hear can be “encapsulated” in the MQA 24/48 combination lossless-lossy container. In other words, they are arguing that they know the full ability of human hearing based on “tremendous advancements … in neuroscience” and that, as a result, a file format does not need to include the full bit depth (noise floor) or full lossless frequency response (sample rate) of an original high resolution studio recording. If this is true, music labels can then just release all “hi-res” material in this single compressed file type.


    As much as MQA might detest comparisons, this is no different from the basic goal of lossy encoding and applying psychoacoustic understanding to audio compression, as per MP3. The problem is that MQA refuses to acknowledge this! They seem to fear using the term “lossy” when, by definition, the encoding process is unable to exactly reconstruct on playback the high-resolution data fed into it.


    MQA also dissuades us from making comparisons in the digital domain, probably because these would show that the data does not exactly maintain the original source quality, preferring instead to defer to some nebulous concept of an “end-to-end analog solution” (20). As I recently expressed, this is not a full end-to-end analogue system if they cannot account for preamps, amps, speakers, and room anomalies, simply because those are the components and factors most likely to affect what we ultimately hear! Furthermore, you would think the analogue output from two MQA DACs would appear to be “more similar” when decoding an MQA track, right? After all, it is all supposed to be “authenticated”. Alas, using a high quality ADC to record the output from a Meridian Explorer2 and a Mytek Brooklyn DAC, the comparison results did not reveal any special correlation in sound quality between an MQA decode and standard PCM playback. As such, I have seen no evidence that MQA, recorded from the analogue DAC output, helps the listener approach some kind of idealized “studio sound” target (21).


    The fact that there is a lossy element as well as distortions as shown above is ironic considering MQA has as its strongest supporters reviewers and magazine writers who seem to have unwavering faith in their own subjective assessments of sound quality. Many of these individuals feel they can hear differences between cables and unusual “tweaks” of all kinds that are objectively unsubstantiated or unquantifiable. Yet when something is clearly quantifiably adding distortion and reducing resolution like MQA, these same individuals seem to describe “obvious” improvements!


    The ideas around end-to-end analogue, the claim that it can correct digital and time-domain errors to the benefit of listeners, and the insistence that their “origami folded” 24-bit 44.1/48kHz files can deliver “master quality audio” are statements of faith in the philosophy being promoted. This might all sound good as talking points and for running ads, but it is clearly lacking in concrete substance when we peer a little deeper.


    But wait, so far, we’ve only touched on one part of the “philosophy” promoted by MQA. Much of the rest of their philosophical ideas revolve around an uncomfortable business model that reaches broadly, affecting the whole production and playback chain. In February 2017, Linn was bold enough to post that they saw MQA as nothing more than an attempt at a “supply chain monopoly”. The result of which is a “tax” on hardware, software and the media, ultimately passed on to consumers of course. Should this “philosophy” be broadly accepted, and the business model successfully implemented, it would no doubt be good for MQA Ltd.’s financial statements.

    But who else might gain from this “philosophy”? I think we have to look at why the “Big 3” music labels seem to want to “get in” on this system. Warner Music was the first to make an agreement in May 2016, followed by Universal in February 2017, and Sony Music in May 2017. These entities control about 75% of the music market.


    Connecting the dots, we see that Spencer Chrislu (MQA Director of Content Services) acknowledged in August 2016: “If a studio does their archive at 24-bit/192kHz and then uses that same file as something to sell on a hi-rez site, that is basically giving away the crown jewels upon which their entire business is based” (22). What this basically implies is that MQA is a way to defer the release of a full resolution “studio master”: an opportunity to sell music lovers a version that, by definition, cannot hold the full value of said “jewels”. And so it goes, perceived opportunities to sell the same music yet again because the precious, awesome-sounding crown jewels are safe in those concealed music vaults…


    We must then finally discuss the issue of Digital Rights Management (DRM). I know, MQA does not prevent one from copying the FLAC-compressed file. I acknowledge that MQA does not “phone home” to confirm access before playback. But let’s think about DRM broadly, as defined in the Oxford dictionary (as good a definition as any):



    DRM.png

     



    Does MQA control digital rights, using its technology to prevent “unauthorized” access? Of course it does. It requires licensing of software to decode the proprietary format, hardware manufacturers need to work with MQA to ensure compliant firmware, and all music needs to be “authenticated” in an MQA-authorized fashion (23). Without authorization, one has no right/ability to decode, listen to, access, or process the high-resolution data buried by MQA. If one reverse-engineered the decoding algorithm, including the “access protection” mechanism, and released the software to do this without obtaining permission/licensing, one would of course expect to be contacted by MQA’s legal team.


    Remember that MQA’s “authentication” is not just a common cyclic redundancy check or a flag to turn on a blue light like with Pono (24). It utilizes a 3072-bit signature embedded in the control stream used with a hash of the audio data and presumably some sort of key within the decoder software/firmware (25). If desired, it appears the encoder can be instructed to limit the quality of undecoded playback (by selecting the bit-depth of the control stream, below which resides the encoded data). Furthermore, at least in previous versions of MQA firmware last year, there was evidence of an ability to descramble purposely-affected data streams.
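    As a purely conceptual illustration of how a keyed hash over the audio data can gate “authenticated” decoding: note this is not MQA’s actual scheme; the real signature format, key handling, and hash are proprietary and not public, so everything below is a toy stand-in:

```python
# Conceptual sketch only: a keyed hash (HMAC) over the audio payload
# lets a decoder holding the key verify provenance and refuse to
# "unfold" unauthenticated streams. MQA's real 3072-bit signature
# scheme and key distribution are proprietary and differ from this toy.
import hashlib
import hmac

DECODER_KEY = b"licensed-decoder-key"      # hypothetical embedded key

def sign(audio: bytes, key: bytes) -> bytes:
    """Keyed hash over the encoded audio, playing the role of a signature."""
    return hmac.new(key, audio, hashlib.sha256).digest()

def decode(audio: bytes, signature: bytes, key: bytes) -> str:
    """Gate full decoding on signature verification, as a DRM-style check."""
    if not hmac.compare_digest(sign(audio, key), signature):
        return "fallback: play baseband only (no authentication light)"
    return "authenticated: full decode/render enabled"

payload = b"\x00\x01" * 1024               # stand-in for encoded audio data
sig = sign(payload, DECODER_KEY)

print(decode(payload, sig, DECODER_KEY))          # verifies: full decode
print(decode(payload + b"!", sig, DECODER_KEY))   # tampered: fallback path
```

    The point of the sketch is structural: once every compliant decoder carries keys and a verification step, the infrastructure to gate quality on authorization already exists, whatever policy is attached to it today.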


    The concept of embedded keys and having provisions for variable audio quality is not foreign to Meridian’s way of thinking considering their patent in 2014 (26) aiming to provide “conditional access to a lossless presentation” and “control over the level of degradation of the signal”. Even though the mechanisms described in the patent are currently not implemented in MQA, there is nothing to say they cannot be built within the infrastructure being created. Remember, in time, if MQA were to be successful, there would be increasing control over authorized playback software and device firmware across the product lines of various manufacturers. Since these are all reprogrammable software algorithms, currently absent “features” could be incorporated.


    The view from 30,000 feet and a birth of a new paradigm?


    Obviously, I have been presenting an opinion (with evidence) from a perspective that is far from flattering to the claims made by MQA. As a consumer and participant in this “music lover” “audiophile” hobby, I admittedly see very little to gain and clearly much to lose especially for the consumer’s freedom of choice.


    But what of MQA’s supporters? As time goes by, with each article published in the mainstream magazines, even those who seem to support this “format” have finally made it clear that MQA is “lossy” in nature, seem uncertain about the claims of it sounding “better” than the original hi-res audio, and are not particularly convinced by the “neuroscience” claims. For example, let us have a look at Jim Austin’s latest article in Stereophile (March 2018 issue).


    That article actually summarizes well many of the suspicions that I and other critics of MQA have raised over the years and summarized above. The “case for MQA” from supporters sounds very much like an apologetic defense of the music industry. The Stereophile article plainly asserts that the desires of audiophiles really do not matter: “The best we can hope for is a system designed to serve the interest of others – the industry, musicians, and casual (mobile) music listeners – but that is also good enough that we can live with it.” Yes, it’s not hard to understand how it may serve the Industry (i.e. music labels): preserving the “crown jewels”, maintaining a perception of mystique so as to re-release albums, and DRM potential. But are we sure the artists will make any more money? And since when did the “casual mobile music listener” care about high resolution streaming, considering the largest services like Apple Music and Spotify don’t even support 16/44.1 lossless (much less show a desire to increase their bandwidth to deliver 24/48 MQA)?


    Furthermore, since when did the audiophile press decide that supporting the Industry was more important than perhaps a bit of prudent objective analysis and being considerate of the inconveniences and upgrade costs that this might pose to their readership? Was there a thoughtful debate about this? Are they representing the interests of their readers, truly promoting “perfectionist audio”, or did they perhaps jump the gun a bit without thinking things through? By the way, can someone explain to me why I should consider investing in those $75,000 speakers, $3000 interconnects, and $2000 power cables advertised in the magazines if in the near future, perhaps the only new “high resolution” music I could buy is of the “good enough” and “lossy” variety?


    Jim Austin even said this: “Buy those 24/192 downloads while you can”. Why not also suggest: “Buy those 24/176.4, 24/96, 24/48, 24/44.1, unaffected 16/44.1, DSD downloads while you can”?


    Imagine a world where MQA is wildly successful and the only new digital releases from the major labels are in MQA. You can stream MQA, you can buy the MQA files, and even CDs are MQA-CD (“Buy those unaffected CDs before they all become MQA-CD remasters!”) (27). The unsuspecting music lover who has never come across a critical article on MQA might be impressed initially that these are supposedly “hi-res” 24/48 MQA files or told that the 16/44.1 MQA-CD contains some secret sauce that makes it sound amazing. Initially, the sound quality might be okay on all the equipment he/she owns. But over time, the encoding system starts to degrade the sound of the undecoded data. At some point, what if the undecoded file becomes something like only 10-bits resolution unless it’s played back through an MQA certified device?


    Before you accuse me of paranoia and courting conspiracy theories, we know that the MQA decoder already has a wide tolerance for how many bits were devoted to the PCM “baseband” on the encoding side (this is useful because some music may only need 14-bits so they can devote more data for optimizing “sub-band” MQA encoding). It’s not really a matter of whether the system can be made to exert “control over the level of degradation of the signal” through the encoding-decoding system, but rather a question of what assurance the consumer has that it will not be used for the purpose of forcing “obsolescence” on playback systems that do not implement MQA. Is it wise to accept this level of potential control in the long run by signing on to a closed system?


    Finally, suppose MQA enjoys a period of relative success and one buys a library of encoded albums. What happens if MQA for some reason goes out of business? Without updates and new devices incorporating the decoder, unless someone finds out how to decode MQA so it can be fully converted to standard PCM (like how HDCD generally can these days), that library of “high resolution” files might end up undecodable in future playback systems. This is one of the perils of orphaned DRM. Who knows whether we might even see the rise of MQA 2.0-encoded media with “even better” quality that current devices cannot fully decode, and for which manufacturers are unable to provide updated firmware. What then? Buy yet another device that supports the newest “standard” when free and open options were available all along?!


    As it has been said, the price of liberty is vigilance. The debates and questions raised here and elsewhere in my opinion are all part of the due process of assessing the value of this “philosophy” and how it affects the quality and freedoms we currently enjoy as music customers. This is true not just for today and MQA, but worth considering for whatever might come our way down the road. 


    Speaking of roads, I noticed that Jim Austin began his recent article with a quote from Yogi Berra – “When you come to a fork in the road, take it.”


    While catchy and cute, this of course does not apply here. The idea of facing a fork in the road is a false choice and, at best, wishful thinking on the part of those promoting MQA. The “road” is already well paved: open, free, mature, and robust file formats. This highway already allows broad creativity and innovation without major licensing impediments, especially for smaller companies, and it has enough lanes to accommodate the needs of music lovers whether they’re happy with MP3 or desire huge DSD256 downloads. In my opinion, MQA is an optional turn-off at this point with little content (28), leading down an unpromising, dimly lit, narrow path with toll booths along the way. Should we bother with this detour?


    Ultimately, remember that the music industry can be wrong, audiophile magazines can be wrong, as an individual, I can be wrong (and my wife says I often am!). But the consumer is always right – which is exactly why “we” call the shots. Let’s see how this goes...

     

     

     

     

    Acknowledgements:


    I would like to thank Måns Rullgård (mansr) and Mitch Barnett (mitchco) for their generosity of spirit, allowing me to pick their brains, for providing stylistic suggestions, and for their time in reviewing this article. Also, a big thank you to my audio engineering friend for his invaluable insights, and willingness to run some MQA DACs through his Audio Precision gear for many of those graphs and measurements.

     

     


     

    Footnotes and Further Reading:
     

    1. I openly admit that passion can be a complex affair in audiophilia as discussed in MUSINGS: Passion, Audiophilia, Faith and Money.

    2. Early articles included entries from Stereophile and The Absolute Sound dating back to December 2014. Papers written by MQA include: AES paper “A Hierarchical Approach to Archiving and Distribution” (October 2014) and the JAES article “Sound Board: High-Resolution Audio, A Perspective” (October 2015).

    3. Such as listening sessions here, here, here, here, here, and here. Notice that The Absolute Sound even declared MQA “better than Hi-Res!” on an issue cover and the editor Robert Harley even states that “MQA is the most significant audio technology of my lifetime.”

    4. Even to the point recently in February 2018 where it was claimed: “most DACs can’t play MQA files, so they are already obsolete”.

    5. For example, the general technology site Trusted Reviews expressed the idea that MQA was aiming to be the “lead format” for streaming in early 2015. Note also that MQA Ltd. has since spun off independently from Meridian.

    6. Remember that although MQA can be encoded at different bit depths and sample rates, the typical stream is equivalent to 24-bits and 44.1 or 48kHz which can then be losslessly compressed with something like FLAC. The actual size of an MQA stream is therefore typically at least 30% larger than a lossless compressed 16/44.1 CD-quality PCM file.
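The size claim in this footnote can be sanity-checked with raw PCM bitrates. This is a back-of-envelope sketch (my own helper function, not anything from MQA), comparing the raw carriers before lossless compression:

```python
# Back-of-envelope raw PCM bitrates (before FLAC compression).
# raw_pcm_kbps is a hypothetical helper for illustration only.
def raw_pcm_kbps(bits, sample_rate, channels=2):
    """Raw PCM bitrate in kilobits per second."""
    return bits * sample_rate * channels / 1000

cd_quality = raw_pcm_kbps(16, 44100)   # standard 16/44.1 stereo
mqa_stream = raw_pcm_kbps(24, 48000)   # typical 24/48 MQA carrier

print(f"16/44.1 raw: {cd_quality:.1f} kbps")  # 1411.2 kbps
print(f"24/48 raw:   {mqa_stream:.1f} kbps")  # 2304.0 kbps
print(f"ratio:       {mqa_stream / cd_quality:.2f}x")  # ~1.63x
```

FLAC shrinks both figures, but the MQA “sub-band” bits are noise-like and compress poorly, which is consistent with the delivered file still ending up at least ~30% larger than a compressed 16/44.1 equivalent.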

    7. Like 2L and e-Onkyo Music. Interestingly, at one point in March 2017, HIRESAUDIO supposedly intended to stop selling MQA but I see they still have a number of MQA albums online.

    8. MQA-CDs!? Apparently, this is a good thing…

    9. Basically, the most significant bits of an MQA file are unencoded PCM (the “baseband” audio bits), so when you play these files through a standard DAC, the quality is claimed to be around that of 16/44.1 or 16/48 audio. The “sub-band” data bits, which are used to encode the MQA “hi-res” data, will act as low-level noise. More details later in this article.

    10. For a good academic review, see this paper by Perlman, in Social Studies of Science 2004 – Golden Ears and Meter Readers: The Contest for Epistemic Authority in Audiophilia.

    11. Finally, Jim Austin in Stereophile (March 2018 issue) acknowledges the fact that MQA contains lossy elements. See also my October 2016 article: MUSINGS: Keeping it simple… MQA is a partially lossy CODEC.

    12. Undecoded comparisons were made in 2016 when 2L released samples. Then in 2017, I compared tracks from Madonna, Buena Vista Social Club, and Led Zeppelin. Admittedly, my sample size for this is small and perhaps more work can be done to further explore bit-depth correlations in the future if anyone is still interested.

    13. See this interview with Bob Stuart where he describes MQA’s bit depth as “typically 15.85” and “up to 17-bits” at 31:05. Although his claim is that these numbers reflect undecoded performance, I have yet to see evidence in actual music playback that MQA decoding retains >17-bits of resolution.

    14. The 5μs value showed up in the MQA Q&A article in Stereophile (August 2016) and is, I think, erroneously referenced. The best I can find is that it refers to papers such as Kunchur’s “Audibility of temporal smearing and time misalignment of acoustic signals” (Technical Acoustics, 2007), with an estimated threshold down to around 6μs.

It is already understood that even 16/44.1 CD-resolution digital is capable of time-domain resolution of 110ps and MQA accepts the figure of 220ps.
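As a rough cross-check of these picosecond figures, here is a common back-of-envelope estimate (my own sketch; published derivations differ in their assumptions about dither and signal frequency): treat the smallest resolvable time shift of a full-scale sinusoid as the shift whose amplitude change, at the steepest point of the waveform, equals one quantization step.

```python
import math

def timing_resolution_ps(bits, freq_hz):
    """Estimate the smallest time shift of a full-scale sine at freq_hz
    whose amplitude change at the point of maximum slope equals one
    quantization step (dither-free back-of-envelope estimate)."""
    q_over_a = 2 / (2 ** bits)               # step size relative to peak amplitude
    dt = q_over_a / (2 * math.pi * freq_hz)  # slope of sine at zero crossing: 2*pi*f*A
    return dt * 1e12                         # seconds -> picoseconds

print(f"{timing_resolution_ps(16, 20000):.0f} ps")  # ~243 ps for 16-bit at 20 kHz
```

This lands in the low hundreds of picoseconds for plain 16-bit audio, the same order of magnitude as the 110ps and 220ps figures quoted above; the point is simply that time-domain resolution is not limited to the sample period.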

    15. A good example of the link made between temporal accuracy and filters is this Sound-On-Sound article published in August 2016.

    16. Here’s a report from LAAS last year supposedly with an A/B test. I attended one of these disappointing MQA demos at the 2016 Vancouver Audio Show.

    17. You can read about the blind test “Core Results” here. There were also subgroup analyses where I could not find a preference even with audiophiles using more expensive gear. Finally, some of the subjective comments might be interesting. Notice that although not statistically significant, in many of the comparisons, there was a slight preference for the non-MQA version.

    18. Here is a listening test from August 2017 that I thought was well written and described at the Airshow Mastering Room studio.

Of note, there was a comment on Bruno Putzeys’ Facebook page that MQA themselves did not run scientific tests. At this point, I have not seen any evidence in the literature or “white papers” from the company itself of controlled listening tests.

    19. You can read more about MQA described as a “philosophy” by Bob Stuart, expressed in this interview.

    20. This article in AudioStream (January 2016) suggests that MQA is capable of correcting digital anomalies and time domain anomalies by characterizing the whole audio production chain and playback components like “DSP loudspeaker”. Who knows, there’s a small chance that a full chain like this down to the level of these DSP speakers may achieve a higher level of accuracy, but this would be a rather rare and atypical system.

    21. Speaking of “authentication” from the perspective of the “studio sound”, what does this even mean? Just like the home user, each studio will have their own set-up with speakers, amplifiers and mixing consoles among multiple other devices used during production. Is there such a thing as a standard “studio sound” using MQA certified speakers? Of course not. Artists and engineers weave their magic using what they have at their disposal to create what was intended without needing to think about how MQA would “de-blur” the sound. As demonstrated earlier, MQA has the potential to alter that sound and the resolution captured in the studio. MQA supposedly delivering the final sound “as the artist intended” is more than a little hard to believe.

Recording industry members like Dr. Mark Waldrep (Dr. AIX) and mastering engineer Brian Lucey have been vocal about concerns regarding MQA. Worth reading their comments and impressions.

    22. This is of course the (in)famous “crown jewels” statement. Seriously folks, what crown jewels!? Sure, there are some good sounding recordings in the archives. But are they referring also to the multitude of slowly but surely degrading old analogue tapes in storage that have been re-released ad nauseam? Old digital recordings from the 80’s and 90’s done with archaic ADCs? Or some of the new recordings done with Pro Tools, many probably highly dynamically compressed?

Even if a label releases the “studio master” 24/192, that doesn’t mean a hi-res remix or remaster cannot be created in the future. Consider the 2017 Eagles’ Hotel California 40th Anniversary with hi-res on Blu-ray, even though the album was already released as 24/192 by HDtracks in 2013; or have a look at how many high-resolution variants of Kind Of Blue are out there, including PCM and DSD versions.

    23. The underlying cryptographic signatures are provided by Utimaco’s infrastructure. There was also a presentation in December 2017 at the 34th Chaos Communication Congress (34C3) by Christoph Engemann and Anton Schlesinger describing MQA as “A clever stealth DRM-Trojan”. That description is likely quite accurate.

    24. Pono’s blue light was simply a flag that told the device to turn on the LED for files bought through the Pono Store… No actual checks to make sure the music data itself was error-free – see here for more information from the JRiver folks.

    25. Of interest, it appears that only the “baseband” bits are being “authenticated”. In an experiment here by FredericV, when the lower 8 bits are dropped, the MQA “blue light” still shines even though it’s recognized as 16-bit audio. Maybe this is all that MQA-CDs are?

On a related note, doesn’t the existence of this blind spot in the authentication mechanism immediately disqualify the MQA blue light as something a consumer should have any faith in as a “guarantee” of provenance?
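The truncation in FredericV’s experiment can be illustrated conceptually. This is not MQA decoding code, just a sketch of what dropping the lower 8 bits of a 24-bit sample means:

```python
def truncate_to_16_bits(sample_24bit):
    """Zero the lowest 8 bits of a 24-bit sample, keeping only the upper
    16 'baseband' bits (conceptual illustration, not actual MQA code)."""
    return sample_24bit & ~0xFF

sample = 0x123456                        # arbitrary 24-bit sample value
print(hex(truncate_to_16_bits(sample)))  # 0x123400 - sub-band bits gone
```

If the authentication only examines those upper baseband bits, the blue light would still shine even though all of the “hi-res” sub-band data has been destroyed.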

    26. “Versatile Music Distribution” patent, February 2014.

    27. “First Major-Label MQA CD”. Steve Reich’s Pulse/Quartet. Knowing what we know about MQA, why is this any cause for celebration? CDs are 16/44.1, which means that the MQA control stream with cryptographic signature is now embedded taking up bits that previously would have been for the audio. This could be innocuous if the music’s noise floor is relatively high but it’s not like a 24-bit MQA file where there’s space for some amount of lossy-encoded data in the “sub-band”. It would be very interesting to compare the MQA-CD output with an actual 24/96 FLAC or even an unaltered standard 16/44.1 file.

    28. Recently on my blog, a reader posted a survey of Tidal finding 7406 unique MQA albums as of early February 2018. Considering that there is something like 48M tracks on Tidal, and say there’s an average of 12 tracks per album conservatively, this means that only 0.2% of Tidal content is MQA-encoded material.
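The 0.2% figure follows directly from the survey numbers. A sketch of the arithmetic (the 12 tracks-per-album average is the footnote’s own conservative assumption):

```python
# Reader's survey figures from the footnote: checking the ~0.2% claim.
mqa_albums = 7406            # unique MQA albums found on Tidal, early Feb 2018
tracks_per_album = 12        # conservative average assumed in the text
total_tidal_tracks = 48_000_000

mqa_share = mqa_albums * tracks_per_album / total_tidal_tracks
print(f"{mqa_share:.2%}")    # 0.19% - about 0.2% of Tidal's catalogue
```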
       



    User Feedback

    Recommended Comments



    Who are you, Mr. Kissoff?

     

    Do you get paid by any MQA affiliated entities?


    8 minutes ago, firedog said:

    Uh, I think you meant to write, “an adult who should be embarrassed to use his real name because now everyone knows he is either ignorant and doesn’t know what he is talking about, or just a shill for MQA who will say anything, true or not.”

    Or both.


    39 minutes ago, HalSF said:

    Tidal, the only hope for a modicum of mass-market success for MQA, is on thin ice. The tech world is dominated by audio skeptics who embrace 256 kbps AAC as a very high-quality standard (which it is) and who could care less about even 16/44 Red Book, much less high-resolution audiophile snake oil (as they see it). Ars Technica and Pitchfork looked at MQA and pronounced it meh. Any Google searcher exploring MQA quickly runs into Linn’s “Why MQA is bad for music” link and this forum’s “MQA is Vaporware” thread. 

     

    The idea that record labels are going to give MQA a sustained and committed push seems highly doubtful to me. So far Apple and Spotify are giving it a hard pass. Four years of not gaining momentum and traction is an eternity in tech.

     

    When you put it like that, what chance does MQA have?  The labels could decide to "switch on" MQA and only release new product in this format.   They would figure that the vast majority of their customers (i.e. the "audio skeptics") are just going to keep doing what they have been doing for the last almost 20 years and rip it to 256 (or less)...


    57 minutes ago, HalSF said:

    Tidal, the only hope for a modicum of mass-market success for MQA, is on thin ice. The tech world is dominated by audio skeptics who embrace 256 kbps AAC as a very high-quality standard (which it is) and who could care less about even 16/44 Red Book, much less high-resolution audiophile snake oil (as they see it). Ars Technica and Pitchfork looked at MQA and pronounced it meh. Any Google searcher exploring MQA quickly runs into Linn’s “Why MQA is bad for music” link and this forum’s “MQA is Vaporware” thread. 

     

    The idea that record labels are going to give MQA a sustained and committed push seems highly doubtful to me. So far Apple and Spotify are giving it a hard pass. Four years of not gaining momentum and traction is an eternity in tech.

     

    Yes but if DRM is what is attractive about MQA to the labels, none of any of this matters.

     

    If the mass market users are forced to MQA by the labels (i.e. their current options disappear) for DRM purposes,  with no price increase to their current Spotify or Apple Music subscriptions, they (the 99%) probably won't care, as long as there is no price increase.

     

    We (the 1%) will be the only losers in this 'dooms day' scenario.

     

    Watch this interview where he jokes about the possibility of one day removing poorer SQ streaming as an option... 46min to 50min... he's not saying forcing MQA on everyone explicitly there.

     

    But in another part of the interview he says Warner are fans of MQA.

     

    So as I said, it's all in the labels hands. We (the 1%) don't really matter in the big picture. Especially if DRM is what attracts them.

     

    In summary - I thought this dooms day scenario was a wild and improbable idea in my head , until I saw a label Exec joke about it :-)

     

     


    22 minutes ago, firedog said:

    Uh, I think you meant to write, “an adult who should be embarrassed to use his real name because now everyone knows he is either ignorant and doesn’t know what he is talking about, or just a shill for MQA who will say anything, true or not.”

     

    Well, in the video he states he's not supported by MQA. No reason to question that...

     

    It is a rather compelling audition nonetheless if MQA did want to support some promotion in the future!


    1 minute ago, Archimago said:

    Well, in the video he states he's not supported by MQA. No reason to question that...

    He has a price list. Of course, we don't know whether MQA has paid him, but the possibility is quite real.


    1 hour ago, Em2016 said:

     

    Yes but if DRM is what is attractive about MQA to the labels, none of any of this matters.

     

    If the mass market users are forced to MQA by the labels (i.e. their current options disappear) for DRM purposes,  with no price increase to their current Spotify or Apple Music subscriptions, they (the 99%) probably won't care, as long as there is no price increase.

     

    We (the 1%) will be the only losers in this 'dooms day' scenario.

     

    Watch this interview where he jokes about the possibility of one day removing poorer SQ streaming as an option... 46min to 50min... he's not saying forcing MQA on everyone explicitly there.

     

    But in another part of the interview he says Warner are fans of MQA.

     

    So as I said, it's all in the labels hands. We (the 1%) don't really matter in the big picture. Especially if DRM is what attracts them.

     

    In summary - I thought this dooms day scenario was a wild and improbable idea in my head , until I saw a label Exec joke about it :-)

     

     

     

    Probably good to point out I know him and he doesn't work for Warner Music Group anymore. He was out the door at RMAF 2017. 


    13 minutes ago, Rt66indierock said:

     

    Probably good to point out I know him and he doesn't work for Warner Music Group anymore. He was out the door at RMAF 2017. 

     

    Noted but that’s only relevant if we know he or Warner changed their position on MQA in the last 6 months, since that video.

     

    Definitely possible, this is a very fluid/dynamic topic atm.

     


    9 minutes ago, Em2016 said:

     

    Noted but that’s only relevant if we know he or Warner changed their position on MQA in the last 6 months, since that video.

     

    Definitely possible, this is a very fluid/dynamic topic atm.

     

     

    WMG isn't known for making good business decisions, but any label's main focus is stars, not formats. I'm not hearing much about anything related to quality lately except for the guys whose job it is to promote hi-res. 


    11 minutes ago, Rt66indierock said:

     

    WMG isn't known for making good business decisions, but any label's main focus is stars, not formats. I'm not hearing much about anything related to quality lately except for the guys whose job it is to promote hi-res. 

     

    Do you work for any of the majors? Or is this based on 2nd hand news?

     

    I don't mean that to insult either btw so please don't take offence. But something is either 1st hand news or it's not.

     

    The source for my argument is a label exec (at the time) on video joking about the point I was making.

     

    Absolutely nothing against him personally. It's a great video actually and he seems like a really cool dude. With a nice home system too, by the sounds of it. I don't want the focus to be on him personally because that's not fair, but instead on the label/s.

     


    2 minutes ago, Em2016 said:

     

    Do you work for any of the majors? Or is this based on 2nd hand news?

     

    I don't mean that to insult either btw so please don't take offence. But something is either 1st hand news or it's not.

     

     

    I'm on the artist and studio side of things from a professional point of view to directly answer your question.  So I interact with A&R people but it isn't the main focus of what I do professionally. I'll have more first hand information this summer after I hit a few festivals.

     

    But in all honesty most people in the music business want to talk to me about golf.


    6 hours ago, Archimago said:

     

    "Lossless music registration"? Anyone use this kind of terminology? Is this even a "thing"?

     

    Clearly this man has not explored or understood the issues with MQA. Furthermore he has not understood nor does he seem capable of showing that which he speaks of - whether in the "diminished transient response" or his belief in what kinds of "errors in the clock signal (that) cause jitter". He has talked about this as if with authority in the past.

     

    All kinds of confusion and conflation to make comparisons that are inappropriate.

     

    The crux of the argument for him is essentially this:

    "So, like conventional PCM and DSD, MQA is not without losses but to my experience suffers from less loss than regular PCM and DSD, DSD being second best to my ears and over my equipment."

     

    Yeah... Real scientific there. His ears. His equipment. How would he know there was "less loss"? I've never seen him give an example of what song or piece of music he's referring to in order to make such a comparison.

     

    One obvious and gross error (along with a jab):

    "The technical difference for techies without ears is that where in regular PCM - once digital - every bit remains intact, MQA uses a lossy compression for the signals above 48kHz."

     

    Sorry Hans - pardon me if I think that the ears/brain of many of the tech folks including mine might be a tad more perceptive, if not at least younger with better frequency response. Major error there dude setting 48kHz as the boundary!

     

    And how does he know that:

    "The MQA circuits used in MQA DAC's does even sound better when non MQA sources are used."

     

    What "MQA circuits" are you talking about? Again, clearly this man doesn't understand the system itself.

     

    Fascinating that he lists a number of negatives about MQA in the latter part of the video, but just lets these issues pass... Seriously, it's not about the "angry mob who don't want to pay the license fee" that's a problem. It's the fact that MQA is not what he thinks it is and the fact that he says these erroneous things in support of folly that (at least personally) creates a sense of disgust ("mad" is not the correct emotional label when I listen to the claims such as the ones in this video these days).

     

    So he wants audiophiles to spread the word about his channel at the end so he can keep people "informed". Apologies if I stay away from my PayPal account and Patreon as I have no desire to support misinformation.

     

    @Archimago thank you for the "cut through". You saved me the effort of pointing out the flaws that I could spot in the HB presentation. Leaves me at this point with the thought of "bring on the McGill Uni study".

     

    If the analytical work by Meyer and Moran is any guide:

    Audibility of a CD-Standard A/D/A Loop Inserted into High-Resolution Audio Playback

    Quote

    Claims both published and anecdotal are regularly made for audibly superior sound quality for two-channel audio encoded with longer word lengths and/or at higher sampling rates than the 16-bit/44.1-kHz CD standard. The authors report on a series of double-blind tests comparing the analog output of high-resolution players playing high-resolution recordings with the same signal passed through a 16-bit/44.1-kHz “bottleneck.” The tests were conducted for over a year using different systems and a variety of subjects. The systems included expensive professional monitors and one high-end system with electrostatic loudspeakers and expensive components and cables. The subjects included professional recording engineers, students in a university recording program, and dedicated audiophiles. The test results show that the CD-quality A/D/A loop was undetectable at normal-to-loud listening levels, by any of the subjects, on any of the playback systems. The noise of the CD-quality loop was audible only at very elevated levels.

    we'll see confirmation of the ABX work you've reported on in your blog.

     

    Interestingly, the Meyer Moran paper points to this commentary from another AES Journal Paper as part of the motivation for their study, dating back over ten years now:

    Quote

    As a licensor asserted in these pages [1],
    A long-term audiophile criticism of the CD has been that it lacks the resolution to reproduce all the detail in a musical performance. ... High-quality audio practice now recognizes the CD channel as a "bottleneck" ... Higher resolution audio promises better sound than the CD, and the potential for this has already been demonstrated in carriers that permit a wider frequency response ... and greater dynamic range. ... [E]xperience shows and anecdotal evidence suggests that higher sample rates "sound better." Typical observations are that with higher sampling rates the sound is clearer, smoother, has improved low-frequency definition, and is more "natural." In the author’s experience higher sample rates can lead to better foreground/background discrimination. "Objects" are better separated from the acoustic and therefore sound clearer and more "complete."

    The similarity of the language used to describe MQA by its co-inventor with the description above is no coincidence (but it's certainly amusing to reflect upon). I've not yet fully read the paper from which the above quote is extracted, but a quick skim made me notice this comment:

    Quote

    The author uses auditory modeling to illuminate the discussion in this paper, the background for which is fully explained in [7] and [12].

     

    From my own technical background, I know that modelling can only take you so far (George Box's advice is often quoted, "all models are wrong, some are useful" or something along those lines). Without empirical testing and validation, theory and models can quickly lead you down the garden path to leave you dancing around the magic mushrooms with the pixies and the fairies. To take Box's point, if you don't test your models empirically, it's not possible to understand the strengths and weaknesses and ultimately their reliability.

     

    If MQA had been subjected to the rigour that Moran and Meyer applied (here's the testing detail from their paper that the abstract above alludes to):

    Quote

    With the help of about 60 members of the Boston Audio Society and many other interested parties, a series of double-blind (A/B/X) listening tests were held over a period of about a year. Many types of music and voice signals were included in the sources, from classical (choral, chamber, piano, orchestral) to jazz, pop, and rock music. The subjects included men and women of widely varying ages, acuities, and levels of musical and audio experience; many were audio professionals or serious students of the art.

     

    Most of the tests were done using a pair of highly regarded, smooth-measuring full-range loudspeakers in a rural listening room with an ambient noise floor of about 19 dBA SPL, all electronics on (see Fig. 2). We also took the test setup to several other locations: a Boston-area mastering facility with very large four-way studio monitors; a local university audio facility, again with large high-powered monitors in a custom-designed listening space (the subjects for this test were students in the recording program); and a private high-end listening room equipped with well-reviewed electrostatic loudspeakers and very expensive electronics and cables. In all venues we performed informal tests of the subjects’ upper hearing limits to see whether there was a correlation between this parameter and the audibility of differences.

    We would already have an answer that would put all this angst and debate from the last couple of years beyond doubt to even the greatest proponents of the format.

     

    Thanks again for your efforts.


    11 hours ago, Ralf11 said:

     

    New Shill Alert for IndyDan

     

     

    How many CA accounts are in sleeper mode, to be used after a long time?

     

    • Content count

      1
    • Joined

      December 14, 2016
    • Last visited

      Thursday at 04:43 PM


    1 hour ago, FredericV said:

    How many CA accounts are in sleeper mode, to be used after a long time?

    It's probably possible to sell old accounts to troll/shill farms.


    Here is a quote from the latest Stereophile April 2018 issue:

     

    ...I don’t believe that, over the long term, MQA is in the best interest of audiophiles. I just hope it’s not too late — Jon Iverson, “As We See It”

     

    Sudden change of heart? 


    3 hours ago, firedog said:

    The Meyer Moran study has been fairly thoroughly discredited. Even one of the authors said he no longer stands by the conclusions. 

    One of the big problems with the study was that they didn't find out the provenance of SACDs they used, and several of them were produced from upsampled Redbook. I wouldn't exactly call that testing "rigour". 

     

     So their study wasn't comparing hi-res recordings to Redbook at all in those cases, it was comparing Redbook source to Redbook source. And somehow they got to the conclusion that there was no discernable difference between Redbook and hi-res.

     

    There were also some statistical issues with the study that put the findings in doubt. 

    And as far as studies go, see this: http://www.aes.org/e-lib/browse.cfm?elib=18296

And that meta-analysis excluded the Meyer-Moran results from inclusion as statistically suspect, i.e., the results appeared not to be statistically random.

     

I'm not actually arguing the point of whether hi-res is audible - I'm just arguing that the Meyer-Moran study isn't where you should go if you want scientific proof it isn't.

    Thanks for pointing this out. Looks like I've got some reading to do.

     

    From a quick squiz, this fella Josh Reiss has done a comprehensive piece of work. It provides an example for those pushing MQA to think about.


    On 3/8/2018 at 12:50 AM, ednaz said:

Watching the arguments over what's kept versus thrown away, what's real and what's invented, is it de-blurring or blurring, is it just upsampling, wait is that noise, brings to mind something from another domain - photography.

     

    When printing digital photographs at display sizes - 16x20 inches, 20x30 inches and larger - professional printers, the type that would print images for a gallery or museum show, do a couple of tricks to every image, just before printing. (Learned these working for a famous NYC fine art printer.)

     

First, they apply an unsharp mask to increase the apparent sharpness (de-blurring), which paradoxically works by applying a mildly blurred copy of the image as a mask onto the original. De-blurring by adding a mask of blurring. Done well, it's not noticeable. Done poorly, you get visible halos in the image. Note that even done well, there are halos - the unsharp mask absolutely creates them, but they're below your ability to see them - a pixel or two wide. (I learned to do this in film days. Much easier in digital.)
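The unsharp-mask step described above is easy to sketch. Here is a minimal illustration in 1-D, with a hard step edge standing in for an image edge (the function and parameter names are my own for illustration, not from any photo-editing library):

```python
import numpy as np

def unsharp_mask(signal, radius=2, amount=0.6):
    """Classic unsharp masking: out = signal + amount * (signal - blurred).

    The "mask" is just a blurred copy of the input; subtracting it isolates
    the high-frequency detail, which is then added back to exaggerate edges.
    """
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)  # simple box blur
    blurred = np.convolve(signal, kernel, mode="same")
    return signal + amount * (signal - blurred)

# A hard edge stepping from 0.2 up to 0.8:
edge = np.concatenate([np.full(20, 0.2), np.full(20, 0.8)])
sharp = unsharp_mask(edge)
# Overshoot ("halo") now flanks the edge: the bright side peaks above 0.8
# and the dark side dips below 0.2, while flat regions far from the edge
# are left essentially untouched.
```

Kept small (a radius of a pixel or two, a modest amount), that overshoot is exactly the invisible halo described above; crank `amount` up and it becomes the visible halo of a badly sharpened print.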

     

    Second, they add noise to the image. Everyone evaluates digital sensors based on their ability to produce an image free of digital noise... but completely noise free images look odd. In large areas with no detail - sky, a car fender, still water - they look artificial and plastic. The printer uses one of a number of techniques to generate digital noise that's similar to film grain, and blends it into the image. The size and frequencies of the noise are based on the size of the final print. Again, the goal is to have it be there and effective but not noticeable. (When I show people prints where I've done this, they have a hard time detecting it, even after being told what to look for.) That added noise does three things. It makes the image seem more real, and less digital. It reduces the visibility of actual digital noise from the sensor. And, it also increases the apparent sharpness of the image.

     

    In photography - which is about capturing the most accurate renditions of light and color with a recording device and then reproducing them for viewing - adding information that was never there to begin with increases the perception of it being a more accurate and sharply rendered image of the real world.

     

    I imagine that the same types of tricks, applied to audio files, may improve the apparent accuracy and crispness of the rendering of recorded sounds. After all, we see and hear with our brains, not our eyes and ears.
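The closest audio analogue of the film-grain trick is dither, which is already standard practice in mastering: adding a trace of noise before reducing bit depth trades signal-correlated quantization distortion for benign, noise-like error. A minimal sketch (the function and parameter names are illustrative, not from any audio library), using a tone quieter than one 8-bit quantization step:

```python
import numpy as np

rng = np.random.default_rng(0)
step = 2.0 / 2**8                                   # one 8-bit quantization step
t = np.arange(48000) / 48000.0
signal = 0.3 * step * np.sin(2 * np.pi * 440 * t)   # a 440 Hz tone *below* one step

def quantize(x, dither=False):
    """Round to the nearest 8-bit level, optionally adding TPDF dither first."""
    if dither:
        # Triangular-PDF noise: the sum of two uniforms, one step peak-to-peak.
        noise = (rng.uniform(-0.5, 0.5, x.size) +
                 rng.uniform(-0.5, 0.5, x.size)) * step
        x = x + noise
    return np.round(x / step) * step

plain = quantize(signal)                  # every sample rounds to 0: tone deleted
dithered = quantize(signal, dither=True)  # tone survives, buried in benign noise
```

Without dither the sub-step tone is erased outright; with dither the output is noisy but still correlated with the input tone, much as film grain keeps low-contrast detail from posterizing into flat, plastic-looking areas.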

I don’t mind others preferring to be “cheated” to their liking (by “beautifying” music reproduction).

I demand “raw” music (lossless, bit-perfect) for myself, in a format I can “modify” to my liking if I am in the mood or have the need.

Once modifications have been applied to the source (in order to “cheat” the statistically average, typical taste), it is very hard, sometimes impossible, for the consumer to undo them. And audiophile herds demand more than the average or typical consumer does.


    12 hours ago, Rt66indierock said:

     

    But in all honesty most people in the music business want to talk to me about golf.

    Yes!  I would too!  This MQA discussion is going in circles. Wayyyy more interesting to talk about new irons, etc. 


    On 3/8/2018 at 5:51 PM, ednaz said:

    I make photographs feel more real, and more natural, by adding distortions that aren't there in the original image. Some it is adding artifacts and noise. Some of it is excluding information. The image is qualitatively improved by quantitatively degrading it.

Did you cheat the test audience by showing them the original photograph and the results of your processing against another, similar photograph (because a different source allowed you to obtain qualitatively better results)?

Did you steer the test audience by telling them which photograph is the original and which is the result of your improvement process?


    I didn't invent this. It's standard practice among fine art photography printers, like Duggal Imaging in NYC and Nash Editions in CA. Has been for years (I apprenticed for a couple months at a couple of printers back in 2001 just to learn technique, and those guys had been doing it since professional digital cameras were 3 megapixels.)

     

The whole process of sharpening is completely about adding artifacts, and that's been done ever since there was photography and a desire to create the impression of sharpness.

     

    The brain sees things, not the eyes. Every one of those techniques is about changing how the brain perceives things. The same is true about sound - the ears capture but the brain hears. Hence psychoacoustics.


    48 minutes ago, maxijazz said:

I don’t mind others preferring to be “cheated” to their liking (by “beautifying” music reproduction).

I demand “raw” music (lossless, bit-perfect) for myself, in a format I can “modify” to my liking if I am in the mood or have the need.

Once modifications have been applied to the source (in order to “cheat” the statistically average, typical taste), it is very hard, sometimes impossible, for the consumer to undo them. And audiophile herds demand more than the average or typical consumer does.

    You do realize that most studio albums where some performers were in soundproof booths have reverb added? I've heard the raw tapes (a big part of my photography was for jazz and blues musicians for CDs and PR shots) in the studio. "Dry" sax or trumpet doesn't sound pretty.  I've watched the sound engineer add slightly different reverb to different instruments because just adding it overall sounds artificial.  And that's just one of the normal things done in the recording and production process.

    I think the only place you'll actually get raw music is in the performance itself. And, most of what's done in production is to make it feel more real and more alive. Psychoacoustics is a real thing.


    6 hours ago, mansr said:

    It's probably possible to sell old accounts to troll/shill farms.

If only MQA were a Russian company...




