qwfqh

  1. Thanks to you all for your interest! I was in no way aware that pre-emphasis had at some point crossed over from the analog world to take hold in digital audio; thanks, Skeptic, for resolving the issue. As it seems, thumbs up for iTunes, and shame on dBpoweramp for not dealing properly with pre-emphasis on old CDs. A few more comments:
     - The problem is in the ripping, not in the ALAC conversion: I converted the dBpoweramp-ripped track to ALAC via iTunes and found that the signal exactly matches the original; in fact, the resulting .m4a file was 37 MB, against 33 MB for the .m4a obtained by ripping the track directly with iTunes.
     - I am fairly confident, and would like to reassure anyone in doubt, that the Audacity spectrum analyzer does what one expects of it: the signal is subdivided into chunks of the specified size, an FFT is computed for each chunk, and the spectra of all chunks are averaged to yield a reliable estimate of the spectrum of the entire signal.
     - That said, and sticking to the principle that trust must be earned, my very first sanity check before posting (actually, even before computing the spectra) was to look at the two signals at the point where the first note is struck, and I realized that they _do_, in fact, differ (see attached screenshots).
     Thanks again for being patient and kind to a newbie on this forum.
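For anyone who wants to verify the chunk-and-average behaviour described above independently of Audacity, here is a minimal sketch in Python with numpy; the function name, FFT size and Hanning window are my own choices, not taken from Audacity's source:

```python
import numpy as np

def averaged_spectrum(signal, fft_size=4096):
    """Split the signal into consecutive chunks, FFT each chunk,
    and average the magnitude spectra (my reading of how Audacity's
    'Plot Spectrum' estimates the spectrum of a long signal)."""
    n_chunks = len(signal) // fft_size
    window = np.hanning(fft_size)
    acc = np.zeros(fft_size // 2 + 1)
    for i in range(n_chunks):
        chunk = signal[i * fft_size:(i + 1) * fft_size] * window
        acc += np.abs(np.fft.rfft(chunk))
    return acc / n_chunks

# Sanity check: a pure 1 kHz tone at 44.1 kHz should peak near 1 kHz.
fs = 44100
t = np.arange(fs) / fs
spec = averaged_spectrum(np.sin(2 * np.pi * 1000 * t))
peak_hz = np.argmax(spec) * fs / 4096
```

The averaging trades frequency detail for a lower-variance estimate, which is why the spectra of two long rips can be compared reliably.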
  2. Thanks for taking the time to have a look at it. Your post prompted some further testing on my part, with perplexing results. As a precaution, I uninstalled and then reinstalled both iTunes and Audacity to their respective latest versions; then I tested:
     - my original track, yet again, which revealed the same difference in the high-frequency spectra;
     - a piano track and two other harpsichord tracks, which upon ripping with either iTunes or dBpoweramp produced indistinguishable spectra.
     However, none of the tracks I had at hand, even the harpsichord ones, came close to filling the high-frequency spectrum up to the audible limit with a non-zero signal, as the original track does. My best guess is that iTunes, upon detecting a spectrum with a long tail toward high frequencies, silently performs some kind of smooth cutoff, with the well-meaning intention of suppressing noise, thus filtering frequencies above 7000 Hz. So the whole matter seems of less concern than it appeared at first sight; indeed, anything but a very subtle effect would have been spotted long ago. I keep wondering whether others can actually reproduce this finding. And, of course, I can't help feeling a bit uneasy about what other well-meaning tweaks happen under the clear, clean and perfectly burnished surface of iTunes. Thanks again for your reply.
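To illustrate what a smooth high-frequency cutoff of this kind would do to a spectrum (purely an illustration; I am not claiming this is what iTunes actually does), here is a sketch that runs white noise through a one-pole low-pass set around 7 kHz and measures the resulting dB gap in a low band versus a high band; all parameter choices are mine:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 44100
noise = rng.standard_normal(fs)  # one second of white noise

# One-pole low-pass, cutoff near 7 kHz: y[n] = a*x[n] + (1-a)*y[n-1]
fc = 7000.0
a = 1 - np.exp(-2 * np.pi * fc / fs)
filtered = np.empty_like(noise)
y = 0.0
for n, x in enumerate(noise):
    y = a * x + (1 - a) * y
    filtered[n] = y

def band_db(sig, lo, hi):
    """Mean spectral magnitude of sig between lo and hi Hz, in dB."""
    spec = np.abs(np.fft.rfft(sig))
    f = np.fft.rfftfreq(len(sig), 1 / fs)
    return 20 * np.log10(spec[(f >= lo) & (f < hi)].mean())

# The gap between original and filtered noise, low band vs high band
gap_low = band_db(noise, 100, 1000) - band_db(filtered, 100, 1000)
gap_high = band_db(noise, 14000, 20000) - band_db(filtered, 14000, 20000)
```

A gentle first-order slope like this already opens a gap of several dB in the top octaves while leaving the low band essentially untouched, which is the qualitative shape of the difference I observed.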
  3. The checksums, of necessity, do not match. Both wav files tested begin with 4 seconds of silence, followed by the sharp attack of Bach's chromatic fantasy. In any case, even if there were a small time gap, my understanding is that time shifting does not affect the power spectrum of a given signal. In our case the length of the analyzed waveform is fixed (about 4 minutes from the start), so a pre-gap would indeed exclude a chunk of trailing sound from the sample with the longest initial pause, and the signals would therefore not match exactly; but a fraction of a second, or even a few seconds, should be of almost no consequence for the spectrum of a 4-minute sample. Thanks for your reply.
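The time-shift argument can be checked numerically. Below is a sketch (Python/numpy; the two-tone test signal, the 0.1 s pre-gap and the 4-second duration standing in for the 4-minute sample are all my own choices) showing that a pre-gap leaves the averaged chunk spectrum almost unchanged at the spectral peaks:

```python
import numpy as np

fs = 44100
t = np.arange(4 * fs) / fs  # 4 seconds of signal
x = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 7040 * t)

# The same signal preceded by a hypothetical 0.1 s pre-gap of silence,
# truncated to the same fixed length (so a bit of trailing sound is lost)
gap = fs // 10
x_shifted = np.concatenate([np.zeros(gap), x])[:len(x)]

def avg_spec(sig, n=4096):
    """Chunked, Hanning-windowed, averaged magnitude spectrum."""
    w = np.hanning(n)
    chunks = len(sig) // n
    return sum(np.abs(np.fft.rfft(sig[i * n:(i + 1) * n] * w))
               for i in range(chunks)) / chunks

# dB difference at the dominant spectral peak: a small fraction of a dB
gap_db = abs(20 * np.log10(avg_spec(x).max() / avg_spec(x_shifted).max()))
```

The tiny residual difference comes only from the lost trailing chunk, exactly as argued above; the shift itself moves phases around but not magnitudes.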
  4. I would like to share an unexpected finding about iTunes CD ripping: any comments, scrutiny, verification and, ultimately, confirmation or disproof are very welcome.
     My setup is as follows:
     - Windows 7
     - iTunes 11.2.2.3
     - dBpoweramp Reference 15.1
     - Audacity 2.0.2
     I tested track 1 of the CD described here (Release), a solo harpsichord track very rich in high-frequency harmonics.
     Findings: Ripping with iTunes to lossless ALAC resulted in files approximately 10% smaller than ripping to the same lossless format with dBpoweramp. To find out what was going on, I ripped the same track again to FLAC with dBpoweramp and checked that dBpoweramp ripped consistently to the same waveform, as one would expect, whether using the ALAC or the FLAC format. I then converted both the iTunes-ripped file and the dBpoweramp-ripped one to wav (with dBpoweramp) and looked at the waveforms in Audacity: the spectra (of the first 237 seconds) matched closely below 3000 Hz, but were visibly different above that threshold, with the spectrum of the ‘lossless’ waveform ripped via iTunes growing fainter and fainter at higher frequencies (the gap at 7000 Hz was on the order of 10 dB).
     Comment: If confirmed, this would be a huge disappointment as to the quality and reliability of iTunes. While one may hope that the losses incurred by the signal, as processed by iTunes, are based on a perceptual model and might therefore be barely audible in most practical circumstances, it would be beyond any excuse, or acceptance, for a procedure that iTunes clearly, openly and unmistakably marks as ‘lossless’ to turn out, as a matter of fact, to be less than that. Thanks in advance for your comments.
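For reference, and connecting to the pre-emphasis explanation that emerged later in the thread (the numbers below are my own addition, not from the posts): the standard CD de-emphasis curve is a first-order shelf with 50 µs / 15 µs time constants, whose attenuation approaches roughly -10.5 dB at high frequencies, the same order of magnitude as the gap reported above. A quick sketch of its magnitude response:

```python
import numpy as np

# Standard CD de-emphasis: first-order shelf, 50 us / 15 us time constants.
# H(f) = (1 + j*2*pi*f*T2) / (1 + j*2*pi*f*T1), attenuating high frequencies.
T1, T2 = 50e-6, 15e-6

def deemphasis_db(freqs_hz):
    """Magnitude response of the de-emphasis shelf, in dB."""
    w = 2 * np.pi * np.asarray(freqs_hz, dtype=float)
    h = (1 + 1j * w * T2) / (1 + 1j * w * T1)
    return 20 * np.log10(np.abs(h))

# Mild at 1 kHz, several dB at 7 kHz, approaching the
# 20*log10(15/50) ~ -10.5 dB asymptote near the top of the band
vals = deemphasis_db([1000, 7000, 20000])
```

A ripper that applies this de-emphasis while another does not would produce exactly the kind of smooth, high-frequency-only divergence between otherwise identical ‘lossless’ rips described in this post.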