charlesp210

  • Posts

    5
  • Joined

  • Last visited

  • Country

    United States

  • Member Title
    Newbie
  1. Thanks for these nice pictures! I wouldn't call them pretty, however. I have found the basic answers to the main question I was ...trying... to ask. Thanks for your patience. The answer I was looking for is this: the noise shaping is baked into the bitstream by the delta-sigma analog-to-digital converter. So you could have different kinds of 1-bit bitstreams, just as I was thinking, depending on the order of the noise shaping. You could have a 1-bit bitstream with NO noise shaping, with 1st-order shaping, with 2nd-order shaping, and so on. Further, it appears that the bitstream represents not just a particular bit depth and order of noise shaping; it may also reflect the specific parameter choices made for each of the stages involved. Then once you have created a bitstream with Nth-order noise shaping, wouldn't it be required to decode it back to analog with the same N? And not only the same N, but the same (or at least compatible) parameter choices?
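To make that concrete, here is a toy sketch I put together myself (Python, my own illustration, not anything from Sony): a 1st-order delta-sigma modulator turning an oversampled signal into a +/-1 bitstream, with a crude moving-average low-pass as the "decoder".

```python
# Toy sketch (my own, not Sony's DSD algorithm): 1st-order delta-sigma
# modulation of a 64fs signal, then a crude low-pass "decode".
import numpy as np

def dsm_first_order(x):
    """Encode samples in [-1, 1] as a +/-1 bitstream with 1st-order noise shaping."""
    acc = 0.0                                # single integrator state
    bits = np.empty(len(x), dtype=np.int8)
    for n, sample in enumerate(x):
        out = 1 if acc >= 0 else -1          # 1-bit quantizer
        acc += sample - out                  # feed the quantization error back
        bits[n] = out
    return bits

def lowpass_decode(bits, taps=128):
    """Crude reconstruction: average away the shaped high-frequency noise."""
    return np.convolve(bits, np.ones(taps) / taps, mode="same")

fs = 64 * 44100                              # DSD64 rate, 2.8224 MHz
t = np.arange(fs // 200) / fs                # about 5 ms
x = 0.5 * np.sin(2 * np.pi * 1000 * t)       # 1 kHz tone at half scale
y = lowpass_decode(dsm_first_order(x))
print("worst-case error after crude decode:", np.max(np.abs(y - x)[200:-200]))
```

In this toy case the "decode" side is nothing but a low-pass filter, and it doesn't need to know the modulator's order; whether that carries over to whatever Sony actually specifies for DSD playback, I can't say.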
  2. Bitstreams representing what? They are not bricks of electrons. They are bitstreams representing the output or input of a seventh-order noise shaper. As such, each "bit" is actually applied over an extended period of time, not just 1/(64fs). The Wikipedia article doesn't actually give any details of the required noise shaper whatsoever. Is the seventh-order noise shaper only part of SACD and not DSD64? It seems to me that DSD64 cannot possibly work without a very aggressive noise-shaping technique. Perhaps it is not an explicit feedback loop as described by this article: https://en.wikipedia.org/wiki/Noise_shaping
None of you are addressing this. Plain old 1-bit sigma-delta modulation at 2.8 MHz cannot possibly produce low enough noise, high enough slew rate, or anything else needed to meet high-fidelity requirements. Noise shaping is absolutely required. This is the magic, and also the downfall, of the system IMO. Noise shaping is time-smearing feedback, or something equivalent to it. Noise shaping is fundamentally different from ordinary filtering.
DSD produces an entirely different and coarser kind of artifacts and noise than PCM. The noise in DSD is not Gaussian amplitude noise; it is the phase noise of the modulator/noise shaper, which illustrates what I am calling the warping of time in one way. It basically follows from the fact that the system is always in an overload state, but sometimes it is in more of an overload state than at other times (1st order, 2nd order, each order has its own overload state). As I imagine it. Please show me the details of the 7th-order noise shaping used in DSD, not more stuff about DSM.
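For what it's worth, the closest thing I can sketch myself is plain textbook error-feedback noise shaping with a noise transfer function of (1 - z^-1)^N (my own toy in Python; whatever Sony's actual 7th-order loop is, it must be far more elaborate, since a bare loop like this with a 1-bit quantizer won't even stay stable much past 2nd order):

```python
# Textbook error-feedback noise shaper, NTF = (1 - z^-1)**order.
# My own toy, NOT the DSD/SACD modulator; it is only here to show what
# "Nth-order noise shaping" means in the simplest possible form.
import numpy as np
from math import comb

def noise_shaped_1bit(x, order):
    # Feedback FIR H(z) = 1 - (1 - z^-1)**order, so that y = x + (1 - z^-1)**order * e
    h = [(-1) ** (k + 1) * comb(order, k) for k in range(1, order + 1)]
    e_hist = [0.0] * order                   # past quantization errors, newest first
    bits = np.empty(len(x), dtype=np.int8)
    for n, sample in enumerate(x):
        u = sample - sum(c * e for c, e in zip(h, e_hist))
        y = 1 if u >= 0 else -1              # 1-bit quantizer
        e_hist = [y - u] + e_hist[:-1]
        bits[n] = y
    return bits

fs = 64 * 44100
t = np.arange(1 << 15) / fs
x = 0.3 * np.sin(2 * np.pi * 1000 * t)
for order in (1, 2):                         # orders above 2 tend to blow up in this naive loop
    spec = np.abs(np.fft.rfft(noise_shaped_1bit(x, order) * np.hanning(len(t))))
    lo, hi = int(2e3 * len(t) / fs), int(20e3 * len(t) / fs)
    print(f"order {order}: audio-band (2-20 kHz) noise level {np.sqrt(np.mean(spec[lo:hi] ** 2)):.2f}")
```

The higher the order, the harder the quantization noise is pushed out of the audio band, and the harder it is to keep the loop from overloading, which is exactly why I suspect the real 7th-order design is the non-trivial, closely held part.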
  3. Thank you! However, my question was specifically with regard to DSD, which I understand to be Sony-proprietary. And it was my understanding, in the early days anyway (not so sure now), that it was also not public in any way. In order to get the story on DSD (which specific technique(s) must be used or not), one had to be a Sony licensee, and many early technical commenters, in early AES papers, could not comment accurately because they did not understand the required noise shaping, which itself might be "adaptive" in some way, truly clever perhaps and inimitable without similarly intense effort. My feeling is nobody can really comment well without really knowing what is going on. And for that matter, I would recommend that nobody use DSD, as I am philosophically opposed to closed things that I am not allowed to understand (let alone play with, manipulate, etc., but I'll have more to say about that, I hope). At this time PCM systems are entirely open, very well understood, and easily manipulated by end users.
But I won't let lack of knowledge inhibit me from making some interesting speculations, which you have prompted me to describe. And I'll concede I don't understand even the general principles of SDM itself well enough, let alone the secret mysteries of DSD. But firstly, I would prefer for starters not to think of the final conversion to analog as being what distinguishes what I consider true PCM from not-PCM. PCM, like DSD, is fundamentally a coding format, a way of transmitting information, or storing it, etc. Thinking about the conversions is the next step. The realization to analog could take many different forms, SDM or whatever. I prefer real PCM end-to-end, with successive-approximation converters (such as the PMI units, still among my favorites) and flash converters, my favorite being the PCM1704. So I reeled in horror seeing Sony's first marketing abolishing both my favorite kind of ADC and my favorite kind of DAC in one blow, and establishing a trend that non-DSD-capable DACs could not be sold.
Please bear with me, and I'll explain why I think as I do. First, start with a naive view of DSM, and of DSD in particular, that I've seen on this blog and confess I still harbored until digesting this blog a little. In this naive view, DSD can't respond to steep transients because it can only go up or down one LSB in each time interval. The reality is quite different: we are shown, without explanation, DSD64 producing picture-perfect impulses, no visible slewing at all. Aha, this is the magic of "noise shaping." And also DSD isn't a set of bricks that can be added to or subtracted from, as in the naive view. Instead, it is a series of deltas fed to the algorithmic feedback loop known as the noise shaper. So it isn't like we're dealing with fixed bricks anymore. A straight sequence of 1s isn't going to slew gradually. The noise-shaping feedback loop is going to reinforce the upward movement so we get a straight-up transient. Miraculous! But when you see such miracles it is often a good idea to look under the hood, or in this case think (or speculate, as I have no real information about DSD noise shaping).
Now that we have applied our deltas to create the straight-up transient, we have a little problem. We have used up the deltas that might have covered a period of time just to better describe one instant of time. So now we have fewer deltas to describe the rest of the time interval. Here we have a decision to make. Either we can reduce the information content of the post-transient interval, sort of like lossy compression. Or we can warp the time a little. An adaptive algorithm could account for both cases when they seemed to be less audibly dangerous. Now PCM isn't natively inclined to such time warping or information-density discontinuity. PCM realized end-to-end without DSM is, I still believe, the best way, because it uniquely represents no warping of time or information density. DSM is dishonest in its use of time. But, you could argue, DSM did not take over simply because of Sony marketing and industrial muscle. DSM took over because it was cheap, and because it was and apparently (but I believe dishonestly) remains the best way to get to perfect amplitude linearity, or as close as possible. But the piper to be paid for the perfected linearity is the messed-up time. To my ears, PCM end-to-end has unmatched pace and rhythm. The linearity, done well, is good enough.
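To see roughly what I mean about transients, here is the same kind of toy 1st-order loop fed a step (again my own sketch, not DSD's actual 7th-order modulator, which will differ in detail):

```python
# Toy illustration of the transient argument above: feed a step into a
# 1st-order, 1-bit delta-sigma loop and look at the raw bit pattern.
# My own sketch; not the real DSD modulator.
import numpy as np

def dsm_first_order(x):
    acc, bits = 0.0, []
    for sample in x:
        out = 1 if acc >= 0 else -1          # 1-bit quantizer
        acc += sample - out                  # error feedback (the noise shaping)
        bits.append(out)
    return np.array(bits, dtype=np.int8)

step = np.concatenate([np.full(20, -0.8), np.full(40, 0.8)])   # jump from -0.8 to +0.8
bits = dsm_first_order(step)
print("".join("+" if b > 0 else "-" for b in bits))
# The +/- density flips essentially at the instant of the step (no
# "one LSB per sample" slewing), but the new level is only expressed by
# the balance of +s and -s accumulated over many bit periods afterwards,
# which is roughly what I mean by the deltas being spread over time.
```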
  4. I've learned a lot just by reading a tiny bit of this discussion. Thanks! I have some questions about DSD. Are the parameters of the noise-shaping algorithm freely available to the public, or proprietary and trade secret, or patented? I haven't been able to find any details of the noise-shaping algorithm online. Is there a different noise-shaping algorithm for DSD128 vs. DSD64, etc.? Do you have to be licensed by Sony to get the details? It seems to me that while there is only one canonical PCM, there are an infinite number of possible noise-shaping algorithms. While DSD64 is said to use a "7th-order" noise shaper, this is not a complete description, because each "order" has its own set of parameters. Are these parameters something that can be freely chosen to sound best, or is there some principle that determines how they need to be set?
With regard to the canonical PCM: in practice PCM uses oversampling, and there are an infinite number of possible oversampling filters. But you have to go to the next level, and there are also an infinite number of possible SDM realizations of oversampled PCM, and one possible flash realization. With regard to DSD64, it seems to me the canonical implementation is the 1-bit implementation without oversampling. And then there is oversampled DSD. And then oversampled DSD64 can be realized by an infinite number of SDM realizations. The first implementations of DSD in consumer products (e.g. the SCD-1) used 1-bit converters, but they might have been internally oversampled as well. I also have not been able to find details about those 1-bit converters, but I believe they ran far higher than 64fs. Has anyone gone beyond the definition of DSD64, DSD128, etc., and tried different versions of the noise-shaping algorithms? Is everyone just assuming Sony came up with the best one?
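As an illustration of the "order alone isn't a complete description" point, here are two different 2nd-order noise transfer functions evaluated numerically. The coefficients are arbitrary examples of my own, chosen just to show that the curve changes with the parameters; they are not taken from any real DSD modulator:

```python
# Two different "2nd-order" noise transfer functions evaluated on the unit circle.
# Coefficients are my own arbitrary examples, not anything from Sony/DSD.
import numpy as np

fs = 64 * 44100
f = np.array([1e3, 5e3, 20e3, 100e3])                  # a few spot frequencies in Hz
z = np.exp(2j * np.pi * f / fs)

def ntf_mag(b, a, z):
    """|NTF(z)| for NTF = B(z^-1)/A(z^-1), coefficients in ascending powers of z^-1."""
    num = sum(bk * z ** -k for k, bk in enumerate(b))
    den = sum(ak * z ** -k for k, ak in enumerate(a))
    return np.abs(num / den)

pure_diff = ntf_mag([1, -2, 1], [1], z)                        # (1 - z^-1)**2, the textbook choice
alt_2nd   = ntf_mag([1, -1.98, 0.9801], [1, -0.5, 0.25], z)    # zeros and poles moved

for fi, m1, m2 in zip(f, pure_diff, alt_2nd):
    print(f"{fi/1e3:6.1f} kHz   (1-z^-1)^2: {20*np.log10(m1):7.1f} dB"
          f"   alternative: {20*np.log10(m2):7.1f} dB")
```

Both are "2nd order", yet they put very different amounts of noise in the audio band, which is why I keep asking what parameters DSD's 7th-order shaper actually uses.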