easternlethal

  1. Mark - I think your point is understood already, and I and others have already written what we think, so let's just leave it there; otherwise we'll just end up going round in another circle for a few more pages. I have no issues with you or anyone taking this view and think it's healthy. Peter - somehow I just knew my comments would provoke a response from you. :-) You are right. I could have been speaking of xxhighend. Or cplay. Notice I did not pass any comments about whether or not it was unique.
  2. TDM telephony tries but gets nowhere close, and in itself will never result in a perfect replica except in theory, when you don't factor in noise. Anyway, you might find it more interesting, and that probably explains why you're happy to assume there is no difference, but the only real way to be sure in the end is not to rule it out until proven otherwise (at least in theory). In addition to this approach I also have some blind test results showing there is a difference, so if I were to put myself in the shoes of a scientist I personally would not be able to objectively discard it for these reasons. This doesn't advance the cause against 'snake oil', but I think it does sort of demonstrate why there is so much of it perceived in this industry.
  3. Of course we wouldn't be able to transfer digital audio if we didn't know how to multiplex. Same for FM synthesis. I take your point that it has been done before, but that doesn't mean we know how to do it well. At the same level, what we are doing is also 'analogue' in nature, because we're trying to recreate an analogue signal using mainly analogue components (most of us anyway). But that is not what we are discussing either. We're discussing the actual process of converting digital samples into an analogue waveform that replicates the original recorded waveform as closely as possible. And knowing how to transfer a telephone call along a copper line, or understanding how to modulate a signal, is not enough. I think of it as a bit like Brownian motion, where we understand what happens in general but cannot predict where each molecule will end up because there are too many variables affecting it (see the random-walk sketch after this list). The problem is that we cannot predict the behaviour of noise yet. What JPlay does is well understood: it mainly just changes the way data is presented to the outboard controller. But what effect this has is not well understood, or at least cannot be predicted with certainty, because we cannot predict noise (except perhaps what is deterministic), and even where we can, we don't know how to isolate and remove it. So the general hypothesis is that presenting data in a certain way results in less noise. For it to be disproved, one would have to show that presenting the data in that way has zero impact on noise, which I don't think has been done scientifically, so we have to fall back on our own subjective test environment. Does this mean there's no merit in it? I don't think so.
  4. Obviously 'digital' systems have been around for a long time. Anything with a binary circuit can be considered digital. But we're talking about digital signal processing here, which is what FM synthesis is all about (it's not possible to de-link them), and this DSP is the bit I'm saying is not well understood - despite the fact that what is or is not digital is probably better understood (a minimal FM synthesis sketch follows this list). There is still a lot of research going on into how systems are affected by the way digital codes are pulsed from one place to another, and to me JPlay is just an application that does that. To me it is not violating any known principle, nor incorrectly applying something that has already been done, so it is not 'disregarding' what has come before it.
  5. I understand you respect the amount of research that has happened until now, but until we can recreate a whole orchestra in our living rooms, I would say it is not enough.
  6. In that context I consider JPlay to be an experimental approach which might lead to greater understanding, or we may discover something that renders it useless. Either way, I don't think anyone is under the impression it's a silver bullet, but anything that advances a new hypothesis should be commended; otherwise we'd still be watching silent black-and-white shows.
  7. Even if digital audio had been invented 100 years ago, I think it still wouldn't change the fact that we actually know very little about it. This is partly because we don't really understand how our brains impose order and sequence onto what is essentially just a collection of sounds, but also because we don't understand the digital process that well either. Computers and synthesizers have been around for ages, but the early ones were not digital. Digital synthesis was developed in the 70s and commercialised in the 80s with the Yamaha DX7 (any musicians here remember that?). It was like the future had arrived when it came out, and people thought we were all well on the way to mimicking real instruments perfectly. How wrong that was.
  8. I've always wondered why DAC manufacturers don't just claim that as long as you have a PC you won't need to buy any other source ever again, because PCs are 'bit-perfect' and with a good DAC your source will be perfect (so save your money on the CD player or turntable and put it into the DAC). Well... I just came across one that does: Playback Designs! http://www.ultrahighendreview.com/interview-with-andreas-koch-of-playback-designs/ Firstly, I have to say I agree with their approach, which is to focus on the clock (instead of isolation, which, although important, is not as important), and secondly their method of differencing the output signal against a jittered signal to remove jitter is just... very, very cool in principle (a toy differencing sketch appears after this list). Because of this Andreas Koch says: ".. the Playback Designs product line can be fed by any digital source including a PC, an inexpensive Discman, a DVD player, or high-end CD transport and none of them seem to make a difference on the sonic performance of the analog output signal". He calls this technique his "Frequency Arrival System". Something like this would need not only to measure jitter but also to generate it. But unfortunately he is not willing to disclose how it works. *sigh*
  9. I just skimmed through the 'white paper' on the Muse. The design seems to be basically a fanless PC linked to a USB-to-S/PDIF converter (a modified M2Tech). I do not see any mechanical or electrical isolation or power treatment. There is some mention of 'proprietary drivers', but I can't figure out what they are driving exactly. Right now I am hard-pressed to find anything in this machine that is better than a normal, properly configured PC. But I could be missing something...
  10. I am skeptical about coaxial designs because of distortion. To me it is always better to have multiple drivers covering different frequency ranges placed strategically on different axes (just like an orchestra). Cabasse makes a big deal out of it but I don't really like them either.
  11. I have heard the Tannoy 'Kingdom' or whatever it's called, driven by FM Acoustics as well as Accuphase. They are actually quite similar in construction to Magicos because they both use an aluminium frame. But I agree they are better than Magicos because of their superior driver. They will always suffer from the problems associated with ported boxes though. There is no way 3 or 4 drivers will be able to produce the sound of a full orchestra. Magico attempts to address this by having no ports and a special baffle (similar to egg-shaped designs), but I don't think that goes far enough (Wilson also tried). The best speakers of this design I have heard are JBL 4350s.
  12. Frequency null tests are used a lot - but not to measure differences in the time domain (a minimal null-test sketch follows this list). Also, by measuring at the DAC output stage, the results are severely compromised by the DAC's own interpolation. Say the experiment went a different way instead and you obtained a large difference in measurements. You still wouldn't know whether it came from the DAC or the PC, and more importantly you wouldn't know whether the source was bit-perfect or not. I think there are two issues being discussed: one is whether it is bit-perfect, and two is whether it produces a better sound, and one does not automatically follow from the other. You can have bit-perfect output from the PC, get high bit error in the transfer process, and still get bad sound; and you can have non-bit-perfect output, low bit error rates in the transfer, and get good sound. I used to be in the pro audio world as well (I still go back occasionally) and your description of their setup is correct. But it is because of that setup that a lot of mastering engineers actually end up destroying sound quality, so they use more DSP to make up for it (EQ, compression and so on) in an endless process, and that explains why so many modern recordings sound so terrible.
  13. And similarly, correlation (or the lack of it) does not imply causation (or the lack of it).
  14. I agreed with that. EM affects jitter. But what I'm saying is that there are side effects if you try to minimise it via spread spectrum (see the spread-spectrum sketch after this list). It is all very interesting (and wonderful) to have this information freely available on Wikipedia.
  15. Sorry, I forgot to mention: in the test, DiffMaker did actually pick up 'sample rate drift' (i.e. jitter) at -90 dB (I knew I had read it somewhere). In the 'amplitude' domain this is not audible. But in the 'time' domain, I think this might be audible if we're talking more than 15 microseconds (see the drift arithmetic sketch after this list). I'm not sure whether DiffMaker measures that (unless I misunderstood DiffMaker's differencing process - it only measures differences in amplitude and frequency). In their white paper they actually classify rate differences as 'uninteresting' and say the tool equalizes 'linear' changes in frequency response. I think that throws the baby out with the bathwater a bit...
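Sketch for post 3 (the Brownian-motion analogy): a minimal random-walk simulation in plain Python, standard library only. The step size, walk length and seed are arbitrary illustration values, not anything measured; the point is only that the ensemble statistics are predictable while any individual path is not, which is the sense in which the post says noise cannot be predicted.

```python
import random
import statistics

# One-dimensional random walk: each step is an independent Gaussian "kick".
# The ensemble statistics are well understood (mean ~ 0, spread grows like
# sqrt(steps)), but no individual path can be predicted in advance.

def random_walk(steps: int, step_sigma: float = 1.0) -> float:
    """Return the final position after `steps` Gaussian steps."""
    position = 0.0
    for _ in range(steps):
        position += random.gauss(0.0, step_sigma)
    return position

if __name__ == "__main__":
    random.seed(42)  # arbitrary seed so the run is repeatable
    finals = [random_walk(1000) for _ in range(2000)]

    # The *statistics* are predictable...
    print("mean of final positions :", round(statistics.mean(finals), 2))
    print("stdev of final positions:", round(statistics.stdev(finals), 2),
          "(theory predicts about sqrt(1000) ~ 31.6)")

    # ...but any *single* walk is not: two runs land in very different places.
    print("one walk  :", round(random_walk(1000), 2))
    print("another   :", round(random_walk(1000), 2))
```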
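Sketch for posts 4 and 7 (FM synthesis): two-operator FM, of the kind the DX7 commercialised, is conceptually just a sine wave whose phase is modulated by another sine wave. A minimal sketch in plain Python, standard library only; the carrier and modulator frequencies and the modulation index are arbitrary illustration values.

```python
import math
import struct
import wave

# Classic two-operator FM: y(t) = sin(2*pi*fc*t + I*sin(2*pi*fm*t))
# fc = carrier frequency, fm = modulator frequency, I = modulation index.
SAMPLE_RATE = 44100
DURATION_S = 2.0
FC = 440.0        # carrier (Hz) - arbitrary choice
FM = 220.0        # modulator (Hz) - arbitrary choice
INDEX = 3.0       # modulation index - controls how rich the spectrum is

frames = bytearray()
for n in range(int(SAMPLE_RATE * DURATION_S)):
    t = n / SAMPLE_RATE
    sample = math.sin(2 * math.pi * FC * t
                      + INDEX * math.sin(2 * math.pi * FM * t))
    frames += struct.pack("<h", int(sample * 32767 * 0.8))  # 16-bit PCM

with wave.open("fm_tone.wav", "wb") as wav:
    wav.setnchannels(1)
    wav.setsampwidth(2)          # 16-bit
    wav.setframerate(SAMPLE_RATE)
    wav.writeframes(bytes(frames))
print("wrote fm_tone.wav")
```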
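Sketch for post 8 (differencing against a jittered signal): the actual Playback Designs technique is not public, so this is emphatically not their method; it is only a toy illustration, in plain Python, of why a difference between a clean and a jittered copy of the same signal exposes timing error. The tone frequency and the 1 ns jitter figure are arbitrary illustration values.

```python
import math
import random

# Toy illustration only: "difference" an ideally-timed sine against the same
# sine sampled with small random timing errors, and look at the residual
# that the timing error leaves behind.

SAMPLE_RATE = 44100
N = 44100          # one second of samples
FREQ = 1000.0      # test tone (Hz) - arbitrary
JITTER_RMS = 1e-9  # 1 ns RMS timing error - arbitrary illustration value

def db(x: float) -> float:
    return 20 * math.log10(max(x, 1e-30))

random.seed(0)
residual_sq = 0.0
for n in range(N):
    t_ideal = n / SAMPLE_RATE
    t_jittered = t_ideal + random.gauss(0.0, JITTER_RMS)
    clean = math.sin(2 * math.pi * FREQ * t_ideal)
    jittered = math.sin(2 * math.pi * FREQ * t_jittered)
    residual_sq += (jittered - clean) ** 2

residual_rms = math.sqrt(residual_sq / N)
print(f"residual from {JITTER_RMS*1e9:.0f} ns RMS jitter on a {FREQ:.0f} Hz "
      f"tone: {db(residual_rms):.1f} dB relative to full scale")
# The residual scales with both the jitter and the tone's slew rate (higher
# frequencies show more), which is why a difference signal can expose timing
# error even when the sample values themselves are "bit-perfect".
```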
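Sketch for post 12 (the null test): in its simplest form a null test is sample-by-sample subtraction of two captures and a look at the residual level. A minimal sketch in plain Python, standard library only; the two "captures" here are synthetic stand-ins, not real measurements. Note that the residual level says nothing about where a difference comes from, which is the post's point.

```python
import math

SAMPLE_RATE = 44100

def rms_db(samples):
    """RMS level of a sequence, in dB relative to full scale (1.0)."""
    mean_sq = sum(s * s for s in samples) / len(samples)
    return 10 * math.log10(max(mean_sq, 1e-30))

def null_test(a, b):
    """Subtract two equal-length captures and return the residual level."""
    residual = [x - y for x, y in zip(a, b)]
    return rms_db(residual)

# Synthetic stand-ins for two captures of the same 1 kHz tone:
# capture B has a tiny gain error, the kind of thing a null test exposes.
N = SAMPLE_RATE
tone_a = [math.sin(2 * math.pi * 1000 * n / SAMPLE_RATE) for n in range(N)]
tone_b = [0.999 * s for s in tone_a]     # 0.1% gain difference

print("identical captures null to:", round(null_test(tone_a, tone_a), 1), "dBFS")
print("0.1% gain error nulls to  :", round(null_test(tone_a, tone_b), 1), "dBFS")
# A real null test also needs careful time alignment first; any residual
# drift between the captures dominates the result, which is where the
# time-domain caveat in the post comes in.
```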
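Sketch for post 14 (spread spectrum): spread-spectrum clocking lowers the EMI peak by slowly sweeping the clock frequency, but the sweep itself is a deterministic timing modulation, which is the kind of side effect the post alludes to. A minimal sketch in plain Python; the clock rate, spread depth and modulation rate are assumptions chosen only for illustration.

```python
import math

# Spread-spectrum clocking sketch: sweep the clock frequency with a slow
# triangle wave.  That flattens the EMI peak, but the clock edges now lead
# and lag an ideal clock by a deterministic amount (periodic jitter).
F0 = 24_576_000.0      # nominal clock, Hz (a common audio master clock rate)
SPREAD = 0.005         # +/-0.25% centre-spread -> 0.5% total (illustrative)
MOD_RATE = 30_000.0    # modulation (sweep) rate, Hz (illustrative)

def triangle(phase: float) -> float:
    """Triangle wave in [-1, 1] for phase in [0, 1)."""
    return 4 * abs(phase - 0.5) - 1

# Integrate the frequency deviation to get the edge displacement versus an
# ideal, unmodulated clock.
dt = 1e-9              # integration step, 1 ns
steps = int(1 / MOD_RATE / dt)        # one full modulation period
time_error = 0.0
worst = 0.0
for n in range(steps):
    mod_phase = (n * dt * MOD_RATE) % 1.0
    delta_f = F0 * (SPREAD / 2) * triangle(mod_phase)   # instantaneous deviation
    time_error += (delta_f / F0) * dt                   # edge displacement drift
    worst = max(worst, abs(time_error))

print(f"peak deterministic edge displacement: {worst * 1e9:.1f} ns")
# The EMI peak drops because the energy is smeared across the sweep range,
# but that periodic displacement of tens of nanoseconds is exactly the kind
# of side effect you then have to clean up downstream.
```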
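Sketch for post 15 (drift arithmetic): the amplitude-versus-time distinction can be made concrete with a little arithmetic. A tiny clock-rate error that looks negligible as an amplitude residual still accumulates into microseconds of timing offset. A minimal sketch in plain Python; the 2 ppm drift value is an arbitrary illustration, not a measured DiffMaker number, and the 15 microsecond threshold is simply the figure mentioned in the post.

```python
# Sample-rate drift: one capture runs at a slightly different rate than the
# other.  Expressed as an amplitude residual it can sit way down around
# -90 dB, but expressed as a timing offset it keeps growing with time.
NOMINAL_RATE = 44_100.0          # Hz
DRIFT_PPM = 2.0                  # 2 ppm rate error - arbitrary illustration

actual_rate = NOMINAL_RATE * (1 + DRIFT_PPM / 1e6)

for seconds in (1, 10, 60, 180):
    samples = NOMINAL_RATE * seconds            # samples the reference expects
    time_taken = samples / actual_rate          # time the drifting clock takes
    offset_us = (seconds - time_taken) * 1e6    # accumulated offset, microseconds
    flag = "  <-- past the ~15 us figure in the post" if abs(offset_us) > 15 else ""
    print(f"after {seconds:3d} s: offset = {offset_us:7.1f} us{flag}")
```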