dca58

  • Posts

    19
  • Joined

  • Last visited

  • Country

    Belgium

  • Member Title
    Newbie
  1. I'm puzzled, as I thought all these options were only meaningful when one upsamples. Am I wrong?
  2. That's what I said: among other things, it lacks the GUI, without which you can't run iTunes.
  3. Darwin is (or was) the core of OSX, which is thus built on top of Darwin and adds things like the GUI, which iTunes obviously requires. When installing OSX it is possible to choose a custom installation instead of the default one and avoid installing superfluous things like printer drivers and foreign-language translations. That's quite handy for saving disk space.
  4. I'm sure you know what I meant, i.e. that the data is sent from the input buffer to the DAC using a clock which is not locked to the input signal via a PLL. That's what makes it asynchronous, after all... And I have no doubt that you do hear differences. What I tried to express is that there are many combinations of software settings, and most people do not use A+ in "bit-perfect" mode because they use the (excellent) volume control or oversampling, which makes it difficult to assert that the bit stream produced is exactly the same for all versions, say because of a bug, a different internal parameter or whatever. If the point above does not apply, the other fact is that some DACs are probably more sensitive than others to the quality of the input signal (shape and level). I don't want to start a debate on which is better, but I'd think that from an engineering point of view the ideal DAC should be completely immune to the quality of the input signal and of the power line, although I know that such an ideal DAC does not exist.
  5. I don't agree with you on this one. The way I understand it, the data, once in the DAC's input buffer, is clocked into the DAC itself by its own clock, completely independently of the incoming signal (see the toy model below these posts). It is thus immune to the shape of the incoming signal, as it doesn't need to sync with the computer via a PLL or anything like that. Yes, I know: provided the volume is not touched, provided there is no oversampling (otherwise I presume some interpolation is going on) and provided there are no bugs... Don't worry, I still enjoy the music. My point is precisely that it is quite possible that it's not the same bits, as you put it. All software has bugs, and I wonder how one can assert that it's impossible for A+ to introduce one once in a while. But obviously I may be wrong; I think I heard small differences between releases, but they were small to the point that I cannot be 100% sure there were any. I'll listen to more music this weekend. Then again, I use async USB and my DAC has a fairly good, regulated power supply, which may also explain why things are not that obvious here. And I'm not saying it's a better DAC in any way, but if the real cause of the difference has something to do with small fluctuations of the power supply or something like that rather than with a plain bug, then some DACs are probably less sensitive than others.
  6. Thanks for the article. Presumably the impact of the CPU load on the voltage on the USB wire should be quite small but, more importantly, should have no impact at all if an asynchronous USB mode is used to transfer the data. Am I correct? If I turn off oversampling, the data is sent at 44.1 kHz but still as 24 bits, even though the original recording is only 16-bit. Am I wrong? (The little shift example below the posts shows why that padding by itself is lossless.) Anyway, the fact is that I do not want to turn off oversampling, as it sounds better when switched on... So I should probably rephrase my question: is it really not possible that some versions of A+ do not send the same bit stream, even when all settings remain the same? I would find that explanation more reassuring than attributing all of the sound differences so many people hear to something that should have only a very limited impact at best, something that only golden ears are supposed to detect.
  7. It means that I assume the software cannot change characteristics like the rise time of the S/PDIF signal, for instance. But I guess what I really wanted to ask is: does anybody have an idea why the sound is slightly different between the different versions? The obvious answer would be that they sound different because the data, the sample values, sent to the DAC are not the same, hence not bit-perfect, but is that the case? Anyway, I installed the 1.5.4 version yesterday afternoon and it does sound very good indeed...
  8. In a sense, the situation with A+ is even more surprising, to the point where one can wonder whether the same signal is really sent to the DAC by the different versions. I'll make a broad assumption: that A+ cannot modify things like the rising and falling edges of the bitstream, whether it is sent over optical fiber or USB or whatever, i.e. these low-level things depend only on the electrical circuits of the Mac and the DAC. The two other things that can affect the sound are the timing and, obviously, the data sent. The timing aspect can (mostly?) be taken care of by using async USB instead of the synchronous methods. So the thing that remains is the data itself that is sent to the DAC, and it cannot be "bit perfect" if it changes from one release to the next... Am I forgetting something else? Note that I'm not asserting anything, but I've not yet read the start of a hint of what really changes, in terms of the signal sent to the DAC, between the various versions; one could check it by capturing and hashing the streams (see the sketch below the posts). I'm not in the camp that says all bit-perfect players sound the same, but at least I believe that a given DAC will produce the same sound twice when fed twice with exactly the same signal (well, OK, provided someone did not switch on the fridge in the meantime) ;-)
  9. It's a multi-dimensional thing: what you gain in one dimension you may lose in another, and it's difficult to compare what you lose with what you gain. Like, better imaging, but something else is worse or not as good. Different people would give different weights to those dimensions, e.g. if my ear is not as good as yours in the higher frequencies, a little bit of distortion there would affect me less than you. Or people with different mother tongues hear different things, like Japanese people having difficulty telling apart an R and an L as spoken by French people (and the converse is obviously also true). Besides, even for a single person, you may prefer an amplifier on a given system, but the day you change something else, e.g. the speakers, the amplifier that you thought you liked less may actually sound better to you with your newer speakers than your "preferred" amplifier did with your old speakers. In that sense you cannot select, say, a s/w setting as your preferred one, then a cable, then a DAC, then something else, and hope to have the best system. Doing your selection in a different order, you might have preferred a completely different system (globally better, or not...), which is what I understood when I read: "To me, even the smallest change (be it down to an individual resistor, position of a speaker, choice of cable, room treatment, or s/w setting/version) either brings my system closer to musical reality or further away. It is either better or worse". Good night...
  10. This is not arrogant to me, but I think it is simply not correct... The problem stems from the fact that it is not possible to assert that a change brings the sound closer to the truth; it changes the sound, but there does not exist a "metric" to decide whether the resulting sound is closer to or further from the truth. To give an obvious example for amplifiers: some amplifiers have more distortion than others, but the distribution of harmonics is also different. Which amplifier is closer to the truth, the one with less THD or the one with fewer odd harmonics (see the THD example below the posts)? How can you say everybody would agree? There does not exist a mathematical model of sound as perceived by the ear and brain, and it is quite likely that opinions will differ.
  11. On the other hand, you could also wait a couple of days before updating to the latest iTunes version, just enough time to see if there are any problems with your combination of software. There are probably good reasons to prefer J-River, but I wouldn't bet too much on J-River never having a few bugs of its own.
  12. I switched to the latest beta when I updated to iTunes 11.0.3. The latest beta works like a charm for me, with no drawback that I can see compared to the stable version.
  13. Sorry, I know it was not clear, but my reply was intended for the person who asked: "I am using iTunes primarily as a library manager to play files with Audirvana Plus in integrated mode: should I update iTunes to 11.0.3?" I use 11.0.3 in this very same context and I have no problem.
  14. I'm using A+ (latest beta); I updated and I had zero problems. Sorry to disagree, guys, but to me this version is one of the best...
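
A few sketches to make the technical points above concrete. First, the asynchronous-USB idea from posts 5 and 6: a toy model in Python (every name and number below is invented for illustration; this is not a real driver) in which the DAC drains a buffer on its own local clock and merely tells the host, via the feedback endpoint, how much to send next, so the host's clock never paces the conversion.

      # Toy model of asynchronous USB audio, illustration only.
      from collections import deque

      BUFFER_TARGET = 512        # hypothetical fill level the DAC asks for
      buffer = deque()

      def host_send(n_samples):
          """Host side: push whatever the feedback endpoint requested."""
          buffer.extend([0] * n_samples)   # payload irrelevant to the timing model

      def dac_tick():
          """DAC side: one conversion per tick of the DAC's *local* crystal."""
          return buffer.popleft() if buffer else None   # None = underrun

      def feedback():
          """Feedback endpoint: how many samples the host should send next."""
          return max(BUFFER_TARGET - len(buffer), 0)

      # One simulated millisecond at 44.1 kHz: the DAC clock paces playback,
      # the host only refills the buffer to the requested level.
      host_send(feedback())
      for _ in range(44):
          dac_tick()

The point of the model is that dac_tick() is timed by the DAC's own crystal; jitter on the host side can at worst cause an underrun, not a timing error at the converter.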
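Second, the 16-bit-in-a-24-bit-frame question from post 6. Padding a 16-bit sample into a 24-bit slot is just a left shift by 8 bits, which is exactly reversible, so the padding by itself cannot change the audible data (the sample values below are arbitrary):

      # Padding 16-bit samples to 24 bits is a lossless left shift:
      # the 8 new least-significant bits are zero, and the original
      # values are recovered exactly by shifting back.
      samples_16bit = [0, 1, -32768, 32767, 12345]

      padded_24bit = [s << 8 for s in samples_16bit]   # 16 -> 24 bits
      recovered    = [s >> 8 for s in padded_24bit]    # 24 -> 16 bits

      assert recovered == samples_16bit                # bit-perfect round trip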
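Third, the question from posts 7 and 8 of whether two versions send the same bits. Assuming one can capture the raw PCM each version outputs (e.g. a loopback recording or a USB capture; the file names below are made up), comparing a hash of the two captures settles it for that track and those settings:

      # Compare two captured PCM streams via a hash: identical digests mean
      # the two player versions produced exactly the same bits; different
      # digests mean the output is not bit-identical between versions.
      import hashlib

      def pcm_digest(path, chunk_size=1 << 20):
          h = hashlib.sha256()
          with open(path, "rb") as f:
              while chunk := f.read(chunk_size):
                  h.update(chunk)
          return h.hexdigest()

      # Hypothetical captures of the same track from two A+ versions:
      print(pcm_digest("capture_aplus_1.5.3.pcm"))
      print(pcm_digest("capture_aplus_1.5.4.pcm"))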
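Finally, the THD point from post 10: THD collapses the whole harmonic spectrum into one number, THD = sqrt(V2^2 + V3^2 + ...) / V1, so two amplifiers can share the same figure with very different harmonic distributions (the amplitudes below are invented):

      # Same THD, different spectra: the single number cannot distinguish
      # an all-2nd-harmonic amplifier from an all-3rd-harmonic one.
      import math

      def thd(fundamental, harmonics):
          return math.sqrt(sum(v * v for v in harmonics)) / fundamental

      amp_a = [0.010, 0.000, 0.000]   # V2..V4: purely 2nd (even) harmonic
      amp_b = [0.000, 0.010, 0.000]   # V2..V4: purely 3rd (odd) harmonic

      print(thd(1.0, amp_a))          # 0.01 -> 1 % THD
      print(thd(1.0, amp_b))          # 0.01 -> same 1 % THD, different character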