
Axon

  • Posts

    28
  • Joined

  • Last visited


Retained

  • Member Title
    Newbie
  1. Driver quality has really gone down the tubes in recent years. You see a lot of corner cutting in what is/isn't offloaded to the CPU. I see the same issues with my Dell Vostro - dropouts are extremely common in foobar2000, and the sound quality itself is pretty terrible, with bad power supply rejection. I can't say this is your issue with certainty, but it is definitely worth testing. I would suggest disabling WiFi on the laptop, and going to Display Properties->Advanced->Troubleshoot, turning Hardware Acceleration to None and disabling write combining, and seeing if either change improves the dropout problem. If the former fixes things, then I'd say you're looking at either keeping yourself tethered to an Ethernet cable or using a USB WiFi adapter. If the latter fixes things, you'll need to move the acceleration slider back up until the pops reappear, then back off to determine the maximum safe amount of acceleration (and play with the write combining setting too). You won't have these issues with a Mac. One of the things you get with an expensive computer is a well-built driver stack. But of course, there are lots of Windows laptops you won't have these issues with either.
  2. First of all: USB was specifically designed to NOT be a load-bearing connector. It is explicitly designed to disconnect with a minimum of force. (Contrast this with serial cables with screw-in connectors, where a forceful kick or yank could cause catastrophic cable or system damage.) What you are describing there is specifically a feature, not a bug. Would *any* USB socket deal with your aftermarket cable effectively? Moreover, pro audio connectors like XLR and TRS are nonlocking as well. Again, there are extremely good reasons why audio and computer connectors should be nonlocking. RCA is a mess, and the heap of hate you pile on RCA connectors is entirely warranted. That said, it's also a mess in the larger audio world, with, eg, Radio Shack Gold Series cables being a phenomenally tight fit. Consumer audio gear, like all consumer products nowadays, is kinda crap. Planned obsolescence, etc. That said, much of its crappiness (as you state) is due to mechanical failures, and when no mechanical force is applied, and whatever mechanical devices exist internally (eg fans) are in good shape, it lasts a surprisingly long time. Pro audio really is designed for use and abuse, just like in earlier generations. But once you go that route you're often talking rackmount equipment and $$$. And fewer audiophile features. What did you expect? The ruggedized equipment is, well, ruggedized. It's also a low-volume market. It's still worlds cheaper than it was 20 or even 10 years ago. --- Vista is a hog, but if you can tolerate lots of DSP in your signal path, it does have some room correction and dynamic range compression features that are well worth investigating (I haven't). It certainly has many innovative technologies compared to both XP and Mac OS X.
  3. The truly ironic thing about all of this is that AIFF really does use substantially more computer resources than ALAC, when you think about it: the hard disk is accessed more often, with more data. ALAC uses more CPU time to decode, but such time is extremely cheap nowadays. In reality, for most media situations, problems related to CPU consumption, latency, underflows, etc are the fault of specific drivers which hog the system for too long. There are some surprising culprits here (including display drivers!). But again, when this actually becomes an issue, it will cause audio cutouts and gross distortion - and until then, it is not an issue whatsoever.
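To put the disk-versus-CPU tradeoff above in rough numbers, here is a minimal sketch. The 60% compression ratio assumed for ALAC is a typical figure for lossless codecs, not a measurement of any particular file:

```python
# Rough disk-bandwidth comparison for uncompressed (AIFF/WAV) vs
# lossless-compressed (ALAC) playback of 16-bit/44.1kHz stereo audio.
SAMPLE_RATE = 44100          # frames per second
FRAME_BYTES = 4              # 2 bytes/sample x 2 channels

def disk_bytes_per_second(compression_ratio: float = 1.0) -> int:
    """Bytes read from disk per second of audio played."""
    return int(SAMPLE_RATE * FRAME_BYTES * compression_ratio)

print(disk_bytes_per_second())       # uncompressed: 176400 bytes/s
print(disk_bytes_per_second(0.6))    # assumed ALAC ratio: 105840 bytes/s
```

Either figure is trivial for any hard disk made this century; the point is only that the uncompressed format moves more data, while the compressed one trades that for (equally cheap) CPU time.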
  4. Timing "info" is almost entirely hardware-controlled in nature. Typically, you have a crystal oscillator on the sound card that is divided down to the (oversampled) sample rate, and the actual digital-to-analog conversion in the DAC, along with all upstream sample-by-sample data processing on the sound card, is triggered off of this crystal clock source. The division factor that generates the sample rate is controlled by the driver (and thus by software), but that happens once, when the sound card is first accessed, and never thereafter. There can be a few complications to this process - sometimes the sample clock is generated from a PLL, sometimes it's handled entirely inside the DAC, sometimes the DAC requires a divided-down clock from the crystal so there are two divisions; external clock sync scales an external clock up to the frequency needed to drive the DAC; more expensive clock sources than crystals might be used; etc. The fundamental operation remains the same, and the playback software plays no role in how it's done except to state the sample rate of playback. Software feeding data into the sound card can do whatever it wants with the data, as long as it matches the format the sound card expects and enough data is in the memory buffer when the sound card asks for it. Oversampling, upsampling, non-integral sample rate conversion, etc are all handled through the general theory of sample rate conversion. Study up on SSRCs vs ASRCs. Typically this is done entirely on the sound card, is completely transparent to the sample buffers and the software, and the software has very little reason to do upsampling on its own.
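The crystal-divider arrangement described above amounts to simple integer division. A sketch with illustrative numbers - the master clock frequencies below are common audio crystals, but the divider value is arbitrary and not taken from any specific card:

```python
# A sound card's sample clock is the master crystal divided down by a
# fixed factor that the driver programs once, at device open time.
def sample_rate_hz(crystal_hz: int, divider: int) -> float:
    return crystal_hz / divider

# Audio master crystals are chosen as integer multiples of the target
# sample rate, so the division is exact:
print(sample_rate_hz(24_576_000, 512))   # 48000.0
print(sample_rate_hz(22_579_200, 512))   # 44100.0
```

This is why two crystal families (multiples of 44.1kHz and of 48kHz) show up on so many cards: one divider chain cannot produce both rates exactly from a single crystal.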
  5. I believe I already explained, at great length, why ABX testing of different lossless codecs is, in my opinion, a bad idea. As well as, even if it were not a bad idea, manufacturers and audio engineers and scientists who are otherwise pro-DBT have far more productive things to do with their time than test lossless vs lossless. For some components like speakers, having an average listener conduct a listening test is hard. For virtually everything else - amps, codecs, and computer-related stuff especially - pretty much everybody has whatever tools they need at their disposal, besides their time. For codecs - neglecting the very substantial issues with all lossless vs lossless tests as I described earlier - ABX testing is extraordinarily easy, and very often done. You are free to have whatever preferences you want, but anybody else is just as free to discount it for scientific reasons. Like I said: opinions are to be respected; hypotheses, or assertions of fact, are not. And saying something like, eg, "I like WAV because it sounds better than FLAC", expresses both an opinion/preference ("I like WAV better than FLAC") and an implicit fact ("WAV sounds better than FLAC, at least for me"). Put another way, there are plenty of preferences I hold that I choose not to express to others because I cannot back them up. Of course it's fine to discuss how such proof may be obtained... as long as such discussions are not misleading. And a big part of my discussion here has been with how much harder that proof is to obtain than many people believe.
  6. "It is interesting that several of the claims that are up for grabs—that SSDs and additional RAM improve sound quality, for example—seem also to have something to do with management of bits in time. Perhaps what has been called “jitter” is a much more pervasive phenomena, having to do with the movement of acoustical data, than has been assumed." This is going to be a little harsher than my past posts, but I believe it is appropriate. I think it is far more likely that those who are claiming "There is a difference! I KNOW I heard it!", and are looking hard at jitter, are simply grasping wildly for the first explanation that has even a shred of numerical measurement behind it - without allowing a far more probable explanation, ie that their perceptions were fallacious to begin with and no difference was ever heard. Quite simply, once jitter is eliminated from consideration, there are no other digital audio flaws available to explain any of this - but to say "it has to be jitter if it's not anything else! That would explain everything!" has some, uh, significant logical flaws. Look, guys. I know it's harsh to say "I think your perceptions fooled you". Sometimes that charge is levelled against people who really do hear such things. The important thing is, those people come back and prove to everybody else, quite conclusively, with ABX results, that they really did hear those things. And then other people back them up and the whole business is wrapped up. Or they disappear and everybody forgets about them. That is the way it is supposed to work. Extraordinary claims require extraordinary proof, and bits!=bits is truly extraordinary. In such a situation I think saying "I KNOW I heard a difference between these two formats" is simply not believable. I'm sure these people heard something - but it doesn't necessarily have anything to do with the format!
I've personally been far more doubtful of my personal perception of effects that have blatantly obvious numerical measurements behind them. The smaller or more theoretically dubious an observation is, the more likely that it was, frankly, placebo. And I'm just as commonly affected by placebo as anybody else. Until proof is established otherwise, nobody is under any obligation to respect conclusions that rely so much on blind faith in perception. Opinions are to be respected. Perceptions are to be respected. But hypotheses based on those perceptions (which we have opinions about) have every right to be torn to pieces if the evidence demands it. "I thought the AIFF version of the music sounded better than the ALAC" is a perception. "ALAC sounds better than AIFF" is a hypothesis. The former is incontrovertible; the latter requires strong proof to establish. So, about measurements. Jitter is one of those things, like resistance, that will measurably change if you do anything whatsoever to the system, no matter how unrelated - provided your measurement system is accurate enough. Jitter will change with driver changes. Different DACs. Different driver versions for the same DAC. Different ambient temperatures. Etc. I think it is quite possible that a test could be devised which shows a difference in jitter spectrum between two different lossless formats. At the same time, you would probably need to do incredibly twisted and sick things to the system configuration that will not occur in real-life playback. Just because somebody has found a difference in one particular configuration doesn't mean that applies anywhere else. Even if you had such a result, you would need to show: How applicable is this result to all computer playback? How audible is the difference? Does the existence or nonexistence of the difference accurately predict the difference being audible?
It's exactly like I said before: all of you are so focused on one tiny little aspect of all of this that you are ignoring the big picture, and how incredibly far away you are from making a firm and certain proof of any of this. Nevertheless, I predict there will be many who would take any evidence whatsoever of a numeric difference between AIFF/ALAC - no matter how flimsy - and claim this is the "proof" they have been waiting so long for. And nobody else will be convinced by them, and absolutely nothing will be accomplished by any of it, except perhaps that some people will sleep slightly better at night. This applies just as much to ABX results as to measurements: if 20 people here ran an ABX test with a false positive rate of 5% - the "usual" standard for ABX tests - one guy is going to get it right by chance, and there really have been cases in the past where people claimed that guy really could hear something. That's a blatant misreading of the statistics. What I'm trying to say here is: it is completely correct, and ethical, to second-guess one's own perceptions. And to second-guess others' perceptions. And anybody who actually tries to test this, numerically or otherwise, needs to have their head screwed on very tightly. Just because somebody waves around positive results doesn't mean they're valid!
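The 20-listener arithmetic above is easy to check. This sketch assumes each listener's test is independent and has exactly a 5% per-test false positive rate:

```python
# Chance that at least one of n independent listeners "passes" an ABX
# test purely by guessing, when each test has false positive rate alpha.
def chance_any_false_positive(n: int, alpha: float = 0.05) -> float:
    return 1 - (1 - alpha) ** n

# With 20 listeners at the usual 5% threshold, a spurious "pass"
# somewhere in the group is more likely than not:
print(round(chance_any_false_positive(20), 2))   # 0.64
```

So pointing at the one passer in a group of 20 and declaring golden ears is exactly the multiple-comparisons mistake described above.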
  7. Heh. So, how much more effort are you going to put into the Stereophile ruckus? Let me know anytime if I don't need to try to defend you. I wish I remembered where I heard that rumor about racial perception of loudness. Anecdotally, the only particularly specific thing I can recall is that some Head-Fi people have commented that there are a lot of Asian fans of the Etymotic ER-4B, which is a diffuse-field equalized IEM, and few people had any earthly idea why so many people would consistently like a treble boost that big. And of course, there is the whole debate about different mastering styles for different international markets (which at least in the classical vinyl world has been documented in old issues of Gramophone), to match the debate over different speaker styles. I suppose I should ask on HA if anybody else remembers anything about this? Re crossovers: ah, of course - it does largely boil down to cost in the end. Which, of course, is not a bad thing. --- It's a shame you came onboard here a few days after the ALAC vs AIFF thing, because I opined at great length on the interpretation of blind test results and on larger epistemological issues in audio, and I would have appreciated some professional input on what I was saying. Without referring you to a 200-post thread, which I'm sure is something you're *dying* to read on your vacation, I invite you to pop a few of my balloons: Would you agree that, even though the statistical results of a blind test cannot be used to prove a null hypothesis (in the negative result), nor even perhaps reject it (in the positive), with exactly 100% certainty, a rational observer could still come to either conclusion once type I or type II error has been made arbitrarily low?
Would you also agree that, given a blind test designed to detect engineering/mathematical impossibilities - lossless != lossless, or Nyquist's theorem being wrong, etc - once all numerical differences between devices under test have been confirmed not to exist or not to be pertinent, such a test is internally inconsistent and can never be meaningfully constructed? Would you also agree that the theory (and I daresay, paradigm) of sighted, subjective listening is internally consistent, and for its adherents, has meaningful answers for all observed results - and therefore, efforts to disprove it cannot exclusively rely on scientific or statistical evidence? --- Also, there was a question going on between AV-OCD and me which I think you're the best person to answer. Suppose that, contrary to what is stated in objective measurements, an end user, in their own living room or dealer room etc, gives a sighted preference to a speaker which measures more poorly than some other tested speaker. How should an end user go about resolving this contradiction?
  8. First I see you on HA, then Stereophile, now I see you here. It's like the gods are mingling with the mortals or something. I took the liberty of some mindless link propagation onto HA, but I fully expect all the interesting action (and/or trolling) to be occurring here and on Stereophile. Heh. Thank you very much for this blog post. Of course this is "kinda" old news (from 1994) but the results and implications are obviously not well known to the public. I recall some speculation once about different characteristics of hearing for different ethnic groups - that, eg, some of the brouhaha about differences between measured equal loudness curves might be attributed to different groups from around the world providing varying amounts of data for each curve. Was there any conclusion to this speculation? Does the demonstrated nonexistence of regional speaker preferences imply that such regional/ethnic listening differences are not shown to exist? I don't have the original paper (or AES E-library access anymore) to look into this more closely, so I'll just ask you. Statistically, was this conclusion of the equivalence of German voicing to other voicings stated in terms of type II error or beta < 0.05? Or is there an alternative means for proving this? Statistical equivalence has always been a big sticking point; there has been talk on HA of noninferiority testing being required for that, with a potential requirement for many thousands of trials in some circumstances.
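On the "many thousands of trials" point: here is a back-of-the-envelope sample size calculation, using the standard normal approximation for testing a binomial proportion against chance (alpha = beta = 0.05, one-sided). This is a sketch of the general idea only, not the specific noninferiority procedure discussed on HA:

```python
from math import sqrt

Z = 1.645  # one-sided 5% critical value, used for both alpha and beta

def trials_needed(p_true: float) -> int:
    """Approximate ABX trial count so that a listener who is truly
    correct with probability p_true is detected with 95% power at a
    5% significance level (null hypothesis: guessing, p = 0.5)."""
    numerator = Z * sqrt(0.25) + Z * sqrt(p_true * (1 - p_true))
    return int((numerator / (p_true - 0.5)) ** 2) + 1

# The closer a listener's true ability is to chance, the more trials
# are needed to demonstrate it:
print(trials_needed(0.6))    # a few hundred trials
print(trials_needed(0.55))   # over a thousand trials
```

Bounding ability from above - showing a listener is *no better* than some small margin above chance, which is what equivalence testing demands - requires trial counts of the same order, hence the "many thousands" figure for tight margins.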
  9. Well, reasonable men can agree to disagree insofar as they have good reasons to do so. Many times further communication simply will not accomplish anything, or it has a strong chance of coming to blows and/or just being embarrassing. So far, there are a huge number of questions that can be answered that are not *quite* going around in circles yet (although it's getting pretty close to that). So I'm pretty surprised that this discussion hasn't gone that far yet, actually. I still agree with krab's interpretation of your reply. That you bundle opinions and theories into the same class, refer to them as "little more than popularity contests", and mention "absent universal truths, ..." in your earlier message seems to suggest that there's a very substantial gap in communication between us. IIRC, I never really said that "opinion" had anything to do with this. The process of a scientific revolution - a paradigm shift in the correct sense of the term - is brutally rational. You seem to disagree with that idea, though. All I'm saying is, it's rational, but it cannot be justified in the vocabulary of the theories in question. It can be justified in many other ways, though. That doesn't sound like a popularity contest to me! Even though I can operate under some clearly obsolete theory like phlogiston or epicycles or a steady-state universe, and maintain some sort of internal consistency so that it cannot be evaluated as "false" in the modern theory, I can still come up with many good, objective facts as to why doing that would be a pretty sh*tty idea. It's just that those statements refer to aspects of the theories themselves as evidence, rather than scientific evidence itself (which the theories themselves are concerned with). So you can't refer to those facts as being "correct" in relation to that scientific evidence.
I think the statement "this theory X is a poor description of reality for those reasons" is the sort of thing you're categorizing as an opinion. I think it's a statement of fact, and it can be proven and disproven accordingly. Of course a science field can turn into little more than a popularity contest. (This exact debate is going on in the string theory world, of course, and it also gets brought up a lot when discussing certain fields of experimental physics that are particularly favored or shunned for funding.) That doesn't mean they all are, nor does it mean the charge isn't a strawman here. There are still *facts* that can be observed as to how a field is operating, which can be used to confirm or deny this particular point. Also, there's a significant political undercurrent as to why krab might have been particularly miffed by your post. The notion of scientific consensus originating in opinion, popularity contests, etc sounds extremely close to an outright claim of scientific relativism, and some people have taken this idea to that conclusion, much to the chagrin of the people who first thought of it. In particular, this sort of thinking has been used to attack mainstream science and instead support creationism. It's a hot-button subject.
  10. I disagree with your premises - they represent a misunderstanding of the engineering of music playback on computers - but I'd like to address a couple of other points first. Such a jitter hypothesis, if true, means (at the least) that the sidebands on a 10kHz tone should predictably differ between different lossless formats, and in a statistically meaningful fashion. That's easy enough to test. But you have to be extremely careful in how you interpret this stuff. It's really easy to get carried away and see stuff in a chart that isn't there. (Again: I *urge* you (and everybody else here) to read the Langmuir paper.) I suspect a test with sufficient analysis power has not been conducted yet, and I can probably offer some contributions on that if somebody winds up fronting me the WAVs (but ask me what is needed first). Of course, even if a difference is found, you then need to establish that such a difference is audible. Y'all are forgetting that there are multiple steps here. Your hypothesis is very heavily weighted towards the system changes you (as a user) can make that are *easy* or *obvious*. This sets off a lot of alarm bells with me. This is a common mistake that audiophiles make - our thinking (I'm occasionally guilty of it too) is often limited to the things that are immediately visible to us, while being ignorant of a much wider number of things. But you have to take the wider picture into account, and I do not believe that affects your hypothesis favorably. Doing stuff on a computer is easy. Adding an external DAC is easy. Switching music players is easy. But the truth is that if what you're saying is true - if low-level timing statistics of the OS, such as task switching and interrupts, really do matter - then so many other things you *don't* know about will also affect the results, to a degree that will make any kind of correlation almost impossible.
Music will sound different depending on:
  • whether Windows Update, or an antivirus package or whatever, is operating invisibly in the background
  • where the music file is placed on the hard disk, because of differing access times
  • what patched version of the OS kernel you are running - in other words, every time you reboot for Windows Update, it should sound different
  • whether the music is read from disk or off the network
  • whenever Google Chrome updates, if you've got it installed
  • the state of the CPU cache, which may or may not be in a steady state at any given time
... and many, many, many other factors I'm not even listing, some of which may even be specific to your computer. All of those things should matter just as much as the statements you put forth. And yet... the only claims you make involve the things that either a) directly affect the playback of the music in a patently obvious way, or b) have been mentioned somewhere in this thread. (Not to be rude, but would you even know about RTOSs if we didn't bring them up?) All these things occur rather randomly, every hour of every day - including while the listening was performed to come to the conclusions you reach. They would have caused large amounts of audible deviation even while the listener may have *thought* all the variables were held constant. That such deviations did *not* happen - ie, that changes like AIFF/lossless, different music players, different system loads and the like cause very consistent changes in sound for those who claim such differences exist - thus strikes me as not only highly suspect from an engineering and psychoacoustic point of view, but internally contradictory as well. If OS-level scheduling details like AIFF/lossless decoding or system load really made an impact, the scale and tone of the discussion would be nowhere near the course it has been taking.
And that is yet another reason why I am saying that you and everybody else here should be extremely skeptical about this idea. Anyways. iTunes and all other music players do not contain timing data. Music players do not send timing data to sound cards. It's nothing but samples. While the player certainly *affects* how timing is done, in terms of setting sampling rates and whatnot, the actual timing "data" - the triggering of updates to analog outputs - is handled entirely inside the sound card, with an internal clock source like a crystal (or possibly an external clock source). As I believe I mentioned before in the HA thread, and as was mentioned to Tim again just recently by somebody else, there is a (large!) buffer sitting between the music player and the sound card, controlled by the OS. This is what iTunes writes to, and what the sound card reads from. As long as the music player keeps ahead of where the sound card is reading, the data will get transferred to the sound card without errors. If that doesn't happen, what happens specifically varies with the sound card implementation, but it is definitely a "failure" more than it is "distortion": either you get a short blip of silence or the last fragment of sound gets replayed continuously until the buffer is filled. This OS buffer is IIRC what you're controlling in ASIO - but it's also what is controlled in foobar2000, where it can reach several seconds in length (!). The music player keeps this buffer filled whenever it runs on the CPU. Under normal computer operation, applications typically get a chance to execute extremely often - perhaps 10 to 100 times a second. See what I mean by a "big" buffer? So, if you have a 1-second sound buffer (typical in foobar2000), the ONLY thing the sound card requires, besides access to the OS buffer, is that foobar2000 runs more often than once a second, filling the buffer each time. *No* other activity on the CPU matters as far as how the data gets to the sound card's output.
I think the RTOS discussion is somewhat immaterial to the issue you are describing. If I understand you correctly, you seem to be expecting audio to be output a sample at a time from the music player to the sound card. As I mentioned before, it doesn't actually work that way, but you could actually do this with an RTOS running on modern hardware. It's still a terrible idea. If you really design a music player to output a sample at a time to a sound card, you need the music player to output to the sound card 44,100 times a second - and the sample output would be timed in software rather than to a crystal. And you will *never* *ever* get a software loop to have lower jitter than a crystal. *pant*
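The buffer arithmetic in the posts above is straightforward. A minimal sketch, assuming CD-format audio and the 1-second buffer mentioned as typical for foobar2000:

```python
# How long can a player go without running before the sound card
# drains the OS audio buffer and playback fails?
SAMPLE_RATE = 44100   # frames per second (CD audio)

def worst_case_refill_interval_s(buffer_frames: int) -> float:
    """Longest gap between player scheduling slots that a buffer of
    the given size can absorb without underrunning."""
    return buffer_frames / SAMPLE_RATE

# A 1-second buffer only requires the player to be scheduled once per
# second; a typical OS schedules runnable apps 10-100 times per second.
print(worst_case_refill_interval_s(44100))   # 1.0
```

The margin between "must run once per second" and "actually runs dozens of times per second" is why background activity on the CPU has no path to the analog output short of an outright underrun.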
  11. That's funny. I don't recall anybody "proving" anything of the sort about Meyer/Moran. How do you know that the testers were "overwhelmed"? Or that being attuned to the sonic signature of the systems is a requirement for sensitive critical listening? While my ears certainly acclimate to the particular sounds of speakers/headphones over time, so that response deviations are not as noticeable, I've been able to listen just as critically in the first few minutes of listening to a system I've never heard before as on my 5-year-old system. Similarly, some of my most sensitive listening moments have occurred with music I'd never heard before. If any difference is to be found, I would argue the opposite - that familiarity, in some cases, breeds complacency. On another note... think about what you're implying here. If tests require the subjects to "know the sonic signature in advance", and not doing this can compromise an evaluation of high res... Steve's RMAF invitation should be meaningless, right? I've never heard his system. So I could never hope to be able to hear a difference. I have no reason to believe that I would become any more acclimated to his system than the Meyer/Moran testers were to their tested systems. (Most of those systems, IIRC, were personal rigs and many of the testers were very well acclimated to them.) If what you're saying is correct, dealers can throw away their SACDs and DVD-As and just stick with CDs (of the same masters) for their demos, and as long as the listener does not "know the sonic signature", it will sound just as good. Right? Or are you saying that, if a listener *does* hear a difference, it doesn't matter whether the listener has acclimated to the system or not? Because under this criterion, I should dismiss his evaluation just as much as that of somebody who can't hear a difference.
Finally: Even if the listeners knew the sonic signature of the system, would you still accept the authority of a comprehensive ABX test of such a subtle difference (like high res or ALAC/AIFF), if it turned out negative?
  12. "Absent universal truths, it's really all just opinions, some of which are more popularly held, to be sure." Absolutely not! That is PRECISELY the wrong way to interpret what I am saying. There *are* absolute truths to be found here. There is one right answer to this problem, and a lot of wrong ones. Just because the sociology of science is believed to have a significant subjective component to it does not mean that the conclusions reached by science are subjective or relativist - and the same applies for audio. Even though competing theories can (usually) explain all the facts in an internally consistent way, that doesn't mean that ANY of the theories can be considered "correct". In some situations no presented theory could be "correct". And by and large, a paramount reliance on personal perception - and not challenging the personal opinions of others when you believe they are wrong - *will* steer you to one of those "wrong" theories. My point is, if you're looking for explanations for evidence, everybody's going to give you that. Gordon, Steve and I will all be able to explain all this evidence in our own theories. You need to look past the logical coherence of the theory to find the truth. In lieu of a more detailed explanation of what that actually means, I'd like to just say common sense is 90% of it, but I'm not sure I can get away that cleanly... I'm not pulling this stuff out of my butt - it is a pretty straightforward restatement of basic Kuhnian principles (although I am using "theory" when I really should say "paradigm", but that is not a word that is good to throw around lightly). Thomas Kuhn argued close to the same thing - that scientific principles now considered laughably wrong were *not* thrown out because they were necessarily "wrong", and in fact could have been extended to account for the present day - but he sure as hell didn't see science as "just opinions".
That said, my education on the matter more or less ended with Kuhn (plus a lot of additional comments by the professor who instructed me in the history of science) - there has been a tremendous amount of work done since then that I do need to catch up on. But my understanding is that Kuhn's fundamental premise of how theories/paradigms are selected is pretty sound, and that's what I'm trying to convey here. Read that Langmuir paper I linked, especially the "Symptoms" section. Its relevance to this current discussion should be obvious. (FWIW thanks to krab for catching this.)
  13. I've gotta get back to work and stuff, but as a parting shot, I'd like to offer a link to Nobel laureate Irving Langmuir's talk on "pathological science" in 1953. I'm sure very few of you have read it, but all of you should. It has nothing to do with ABX testing specifically, but has a lot to do with the nature of critical observation. Examples like these are the reason why mainstream science is so hard-assed about statistical significance nowadays - especially in the presence of subtle or hard-to-explain effects! http://www.cs.princeton.edu/~ken/Langmuir/langmuir.htm
  14. "And agree here as well. And therein lies one of the problems I have with ABX. People performing ABX tests - and relying on them to support their opinions of what can and cannot be discerned - seem to tend to accept them as universal truths, by that I mean, they seem to argue that their testing proves their point AND also disproves any dissenting point. This is not the case. You cannot take probability-based mathematics and use it to prove that other outcomes are NOT possible. And therein lies the biggest rub - when ABX testing is used in an attempt to disprove something. AFAIC, it can only truly be used effectively to prove that a difference can exist." Quite true. But - and you may think I'm crazy when I say this, but it's the truth - all science is like that. No scientific theories are ever truly "disproven". It's just that enough people throw their support to the new theory that the old one is cast aside. (Or, more commonly: the old theory is only "disproven" when all the scientists who believe in it die off!) IMHO, it depends a lot on the specifics of the test, but generally, it takes a lot of rose-colored glasses to take a set of failed ABX tests from multiple listeners and claim that a difference may still exist (or definitely exists). In statistical parlance, I think type II error is often overestimated. Like I said earlier, this is somewhat justifiable in the case of a single listener and a single test, for all sorts of reasons. But - especially in the really colossal tests like Meyer/Moran and the many recent power cable blind tests - it's really easy for me to look at those results and just make a judgment call, and given the amount of effort and listening involved compared to my own, I think I'm justified. Again: this is a personal opinion. Coming from a different angle, regarding repeated failures of tests: at some point I have to wonder: is it worth it? I can rattle off a few distortions that I can ABX but I don't give a damn about.
Once an effect is pushed down to that level of relative inaudibility, I'm not sure it even matters to me whether it's truly inaudible or not. I just don't care. Until I see comprehensive proof that informs me that I should care more about it, I'm going to ignore it, blissfully.
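For concreteness on the statistics being discussed: the type I side of an ABX test is an exact binomial p-value under the null hypothesis of pure guessing. The 12-of-16 score used below is a conventional passing score, chosen purely for illustration:

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided p-value: probability of getting at least `correct`
    hits in `trials` fair coin flips (the guessing null hypothesis)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 12 of 16 correct clears the usual 5% threshold:
print(round(abx_p_value(12, 16), 3))   # 0.038
```

Note this number says nothing about type II error - how likely a real but subtle difference is to produce a *failing* score - which is the quantity the post above argues is often overestimated.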
  15. Thanks for the offer. I could do that, and I could also perceive a difference, and then I can happily say that I do not believe my perception on the matter. That said, I have a really good track record of my sighted listening tests coming up with null results, and I have discovered distortions entirely by accident at times. I have good enough hearing (and trust that I have enough of a critical ear) to know that I'm not just deaf or prejudiced.