
sone

  • Posts: 47
  • Joined
  • Last visited
  • Country: United States
  • Member Title: Freshman Member
  1. Closer to 99.99%. I see –35 to almost –40 dBFS on the differentials of the 256 kbps AAC tracks I've examined (relative to 0 dBFS originals), and the RMS levels run 3 to 6 dB below that, as expected from psychoacoustics-based algorithms. Try it yourself with the commands I posted above along with normalize (a small Python sketch for measuring these levels is included after this list):

        normalize --no-adjust track_orig.wav
        normalize --peak track_orig.wav
  2. I'm saying that I could NOT distinguish between AAC 256 and WAV for the tests I conducted while deciding on a format, and if anything this method will underestimate the differences. The Python script simply computes A–B, so there is absolutely nothing subjective about the result, other than whether you find it interesting or not. The actual test is whether A sounds different from B.
  3. I really, really hate MP3 because of just how truly awful it sounds at lower bit rates. Hate it hate it hate it. I'm afraid to double-blind myself for 320 kbps MP3 because I may not be able to distinguish it for some tracks. But it doesn't matter -- if I know it's an MP3, I hate it. I know this is entirely subjective, and if you tell me this, I will simply agree and then go off again about how much I hate MP3. But AAC at 256 kbps sounds really good to me, and I've tried informal tests in which I cannot distinguish between AAC 256 kbps and 44.1/16, both reconverted to PCM and played on a high-quality transport. YMMV with different playback mechanisms. But for convenience across a music server, mobile devices, and managing disk and backup sizes, I mostly use 256 kbps AAC on the computer.

     Here are a few OS X and Python commands to compare AAC to WAV, along with a Python script that produces a WAV file from the difference between the two and lets you hear what the psychoacoustic algorithms are masking out. At 256 kbps you'll have to turn up the volume to hear this difference clearly, but it's interesting to listen to. Edit the script to hear the very large difference between AAC 128 kbps (the iTunes standard) and AAC 256 kbps.

     Use cdparanoia to rip a WAV file, then run these Terminal commands:

        afconvert --file m4af --data aac --bitrate 256000 --quality 127 --src-quality 127 --verbose --strategy 2 track_orig.wav track_256kbps.m4a
        afconvert --file WAVE --data LEI16@44100 --verbose track_256kbps.m4a track_256kbps.wav
        afconvert --file m4af --data aac --bitrate 128000 --quality 127 --src-quality 127 --verbose --strategy 2 track_orig.wav track_128kbps.m4a
        afconvert --file WAVE --data LEI16@44100 --verbose track_128kbps.m4a track_128kbps.wav
        afconvert --file m4af --data aac --bitrate 64000 --quality 127 --src-quality 127 --verbose --strategy 2 track_orig.wav track_64kbps.m4a
        afconvert --file WAVE --data LEI16@44100 --verbose track_64kbps.m4a track_64kbps.wav
        afconvert --file m4af --data alac --quality 127 --src-quality 127 --verbose track_orig.wav track_alac.m4a
        afconvert --file WAVE --data LEI16@44100 --verbose track_alac.m4a track_alac.wav

     This gives you the original track_orig.wav plus track_256kbps.wav, track_128kbps.wav, track_64kbps.wav, and track_alac.wav (note -- 'diff track_orig.wav track_alac.wav' should yield no difference; a quick Python check of this is sketched after this list), which you can burn DAO to a CD and use for your own listening tests with a good transport and DAC. It's also easy to listen to the residuals using a Python script that subtracts the samples in these WAV files from the original:

        wavdiff.py

        #!/usr/bin/env python
        import wave, struct

        # open the original, the decoded codec output, and the output difference file
        orig = wave.open('track_orig.wav', 'r')
        codec = wave.open('track_256kbps.wav', 'r')
        diff = wave.open('track_diff256.wav', 'w')
        diff.setparams(orig.getparams())

        # 16-bit sample limits for clipping the residual
        SHRT_MAX = 2**15 - 1
        SHRT_MIN = -2**15

        # process in blocks of bs frames
        nsamps = orig.getnframes()
        bs = 1024 * 1024
        bs = min(bs, nsamps)
        while True:
            osamps = struct.unpack("<%dh" % (bs * orig.getnchannels()), orig.readframes(bs))
            csamps = struct.unpack("<%dh" % (bs * orig.getnchannels()), codec.readframes(bs))
            # residual = codec output minus original, clipped to the 16-bit range
            samps = tuple(map(lambda x, y: x - y, csamps, osamps))
            samps = tuple(map(lambda x: min(SHRT_MAX, max(SHRT_MIN, x)), samps))
            diff.writeframes(struct.pack("<%dh" % len(samps), *samps))
            nsamps -= bs
            bs = min(bs, nsamps)
            if bs <= 0:
                break

        orig.close()
        codec.close()
        diff.close()
  4. Thanks for the YouTube link. I love my RR CDs. But in spite of my respect and appreciation for Keith Johnson's work, the Gaussian jitter values he shows at 32:18 are an order of magnitude below published thresholds for random jitter detection. There are very good reasons to remain skeptical about this particular claim, and that's setting aside the contradiction that people who bought into this rumor said it improved the sound. I still love RR, though -- buy their stuff.
  5. Okay, how about a friendly discussion about objective versus subjective relevant to the OP's question, perhaps with a little painting outside the lines. Objectively, I'm happy to entertain any physics or information-theory-based arguments, and because this is informal, we don't have to provide journal-quality experiments, but there has to be something more than "I hear a difference." More like, "I tried to conduct a handful of fair tests and consistently hear a difference 5/5 times." Or 9/11 times. Or something that produces a basic significance level to make sure that we're not fooling ourselves, which is very easy for anyone to do. Extraordinary claims demand extraordinary evidence and all that, and even over cocktails we need some kind of compelling evidence before any such claim can be taken seriously.

     Subjectively, I know what extraordinary music reproduction sounds like, am amazed when I hear it, and am happy that the technology exists to allow me to enjoy it in my own home. I have a deep appreciation for the artists -- musicians, sound engineers, and electronics designers -- who make this happen. And making this happen requires a good ear capable of identifying any weaknesses in the audio chain so that these weaknesses can be attacked and corrected. Great music is part subjective and part objective, but a lot more subjective. So all credit to people like Barry Diament who do this. And not to single out Barry, but do his superlative abilities give him the power to perform the impossible? Like it or not, objective trumps subjective. And there is no shortage of examples of remarkably accomplished people who make basic or large mistakes, even in their own areas of expertise. Again, not singling out Barry, but hypothetically, if he made a mistake he wouldn't be the first. That's why it's crucial to go by evidence, not authority. That's all I've been saying -- where's the evidence or logic supporting what we both agree is an absurd effect? A much simpler explanation is that the person -- whoever they are -- is simply mistaken. It's okay to make mistakes.

     Now about the green markers, which counts as painting outside the lines (get it?). Whenever my mom emails some internet rumor to me, the first thing I do is email back the Snopes link debunking it. So here it is:

     Claim: Coating the edges of a compact disc with a green marking pen will noticeably improve its sound quality.
     Status: False.

     I honestly have never conducted a listening test on marked-up CDs, so just count me as deeply skeptical but open to actual evidence countering all the citations on Snopes. But really. Snopes. Think about it. Can you understand why it's even more difficult to accept claims about lossless audio for which I have conducted tests? And for which the physical playback is highly deterministic, very well understood, and easy to test with basic unix commands?
  6. CA: Have you posted rules or guidelines for this forum? Having looked, I don't see anything on your FAQ page or elsewhere. With respect, I don't understand how any meaningful dialog is possible if commenting on the experiences of others is considered impolite, whether those experiences are shared or not. Is it the policy of your forum that all experiences and viewpoints are equally valid, and all are to be shielded from comment? And again with respect, let's agree to disagree about whether referring to a poster or group of posters as gasbags is professional or not. What about repeatedly calling people jackasses, which occurs two posts before yours? Is this considered professional? What about references to anti-depressants as a response? Is this professional behavior? If so, why has my post today responding to the reference to SSRIs been deleted? I maintain that I responded to this vague slur with restrained civility and professionalism. Do you know how this post came to be deleted? Please don't conflate my love of audio with my right to defend myself against personal insults made by a minority of people on this forum. I would honestly like to understand your guidelines for moderating this forum before I decide whether to participate any further.
  7. With respect, I have been consistently polite but direct. If the forum moderator singles me out and criticizes my tone while ignoring multiple instances of actual abusive language and personal insults, including "gasbag", "jackass", "tunnel-vision", and vague statements about anti-depressant medication, then I will take my leave. For now I'll give the benefit of the doubt for not having read through the entire long thread, but I stand by my comments.
  8. Because you told us that audio files with the same checksum sound different to you. If you are using the MD5 hash, as is common and as was used above in this thread, then one of the following must be true: (1) there is a random MD5 collision between the two audio files, a one in 340 billion billion billion billion chance; or (2) your belief in either your hearing abilities or the rigor of the test is mistaken. Because #1 is very, very, very unlikely and two audio files with the same MD5 hash cannot sound anything like each other, that leaves us to conclude that #2 is true. Perhaps you are unaware of how unreasonable it sounds to claim without evidence that the exact same files sound different, and then to respond with name-calling, ad hominem attacks, and tone trolling when people politely point out the facts.
  9. The chance that two audio files share the same hash by accident is astronomically small, strongly suggesting that even though you believe you are able to hear a difference between two files with the same hash, this belief should not be expected to withstand a rigorous test. Assuming you used the 128-bit MD5 hash function, there's a 1-in-2^128 chance of a random MD5 collision -- 1 in 340 billion billion billion billion, or about 3.4 x 10^38. It is much more likely that your belief is mistaken. To compare local files, don't use hash functions (colloquially == checksums in this discussion) -- use "diff" in unix/cygwin; a minimal Python sketch that does both kinds of comparison appears after this list. Hash functions are useful for providing cryptographically secure signatures for nonlocal data comparison, such as making sure that executable downloads really are the same as the files on a remote server. Cryptographically, MD5 is broken and cannot be trusted when secure comparisons are necessary (use SHA-3 for security). Forged certificates built from MD5 collision attacks have already turned up in real malware (the Flame malware, for example), and it is now practical to construct a pair of different audio files that share the same MD5 hash. However, the audio in such a manufactured pair would be stunningly awful. So while it is technically possible to generate two audio files with the same MD5 hash, it is not possible for these files to sound anything like each other. And it is trivial to simply compare all the bytes using a command like diff, making this entire discussion about hash functions moot.
  10. What is your reason for being here and acting this way? In my short time here you've called me a gasbag and a jackass in response to my answers to the OP's question. I'd say you're engaging in name-calling and ad hominems because you know that your ridiculous claims are empty.
  11. In how many of your 5k posts have you supported your testimonials with real evidence? None in the ones I've read, which have been primarily your transparent passive-aggressive schtick.
  12. See xkcd's skein collision competition. Don't bother using hashes in most cases anyway—just use diff.
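To go with post 1: a minimal sketch for measuring those peak and RMS levels yourself, assuming a 16-bit difference file like the track_diff256.wav that wavdiff.py writes. The filename and the levels_dbfs helper are examples only, not anything from the thread.

    #!/usr/bin/env python
    # Minimal sketch: report peak and RMS levels in dBFS for a 16-bit WAV file,
    # e.g. the difference file written by wavdiff.py. Filenames are examples only.
    import wave, struct, math

    def levels_dbfs(path):
        w = wave.open(path, 'r')
        assert w.getsampwidth() == 2, "sketch assumes 16-bit samples"
        n = w.getnframes() * w.getnchannels()
        samps = struct.unpack("<%dh" % n, w.readframes(w.getnframes()))
        w.close()
        full_scale = 2.0 ** 15                    # 0 dBFS reference for 16-bit PCM
        peak = max(abs(s) for s in samps) / full_scale
        rms = math.sqrt(sum(s * s for s in samps) / float(len(samps))) / full_scale
        to_db = lambda x: 20.0 * math.log10(x) if x > 0 else float('-inf')
        return to_db(peak), to_db(rms)

    if __name__ == '__main__':
        peak_db, rms_db = levels_dbfs('track_diff256.wav')
        print("peak %.1f dBFS, RMS %.1f dBFS" % (peak_db, rms_db))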
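To go with post 3: a minimal sketch that checks whether track_orig.wav and track_alac.wav are sample-identical by comparing decoded frames block by block -- a looser equivalent of the diff check mentioned there, since it ignores harmless header differences. The samples_identical helper is illustrative only.

    #!/usr/bin/env python
    # Minimal sketch: confirm that the ALAC round trip from post 3 is sample-identical
    # to the original by comparing decoded frames block by block. Filenames come from
    # the commands above; any pair of WAV files will do.
    import wave

    def samples_identical(path_a, path_b, blocksize=1024 * 1024):
        a = wave.open(path_a, 'r')
        b = wave.open(path_b, 'r')
        try:
            if a.getnframes() != b.getnframes() or a.getnchannels() != b.getnchannels():
                return False
            remaining = a.getnframes()
            while remaining > 0:
                n = min(blocksize, remaining)
                if a.readframes(n) != b.readframes(n):
                    return False
                remaining -= n
            return True
        finally:
            a.close()
            b.close()

    if __name__ == '__main__':
        print(samples_identical('track_orig.wav', 'track_alac.wav'))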
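To go with posts 8 and 9: a minimal sketch of the two comparisons being argued about -- MD5 hashing and a plain diff-style byte comparison of two local files. The filenames copy_a.wav and copy_b.wav and the helper names are placeholders, not anything from the thread.

    #!/usr/bin/env python
    # Minimal sketch: compare two local files byte for byte (what diff does) and also
    # print their MD5 hashes, which is all a checksum comparison is testing.
    import hashlib

    def md5_hex(path, chunk=1024 * 1024):
        h = hashlib.md5()
        with open(path, 'rb') as f:
            for block in iter(lambda: f.read(chunk), b''):
                h.update(block)
        return h.hexdigest()

    def bytes_identical(path_a, path_b, chunk=1024 * 1024):
        with open(path_a, 'rb') as fa, open(path_b, 'rb') as fb:
            while True:
                a, b = fa.read(chunk), fb.read(chunk)
                if a != b:
                    return False
                if not a:        # both files ended at the same point
                    return True

    if __name__ == '__main__':
        for p in ('copy_a.wav', 'copy_b.wav'):
            print("%s  %s" % (md5_hex(p), p))
        print("byte-identical: %s" % bytes_identical('copy_a.wav', 'copy_b.wav'))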