testererr

  • Posts

    18
  • Joined

  • Last visited

  • Country

    Israel


  • Member Title
    Newbie


  1. 2D images are not a series of edges. Still, if you want to reproduce sharp transients/edges, you need to capture those signals, even though they are not bandlimited.
  2. The transient ("dirac") being _non-bandlimited_ is exactly why this is done. Even in theory you cannot capture a "stream of Diracs" with a standard sinc kernel, as the sinc is band-limited by definition (a brickwall LPF). In 2D the equivalent task is capturing edges, which is a similarly hard problem to capturing transients in music.
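To make the band-limiting point concrete, here is a small numpy sketch (my own illustration; the filter length and cutoff are arbitrary) of what a brickwall-style sinc low-pass does to a discrete impulse:

```python
import numpy as np

# A discrete "dirac" run through a (truncated) half-band sinc low-pass:
# the impulse cannot survive band-limiting; it smears into ringing.
n = np.arange(-32, 33)
h = 0.5 * np.sinc(0.5 * n)        # sinc LPF kernel, cutoff fs/4
x = np.zeros(129)
x[64] = 1.0                       # unit impulse ("dirac")
y = np.convolve(x, h, mode="same")
# y peaks at 0.5 at the impulse position and rings on both sides,
# including negative excursions -- the sharp transient is gone.
```

The ringing is not an implementation flaw; it is what "band-limited by definition" means in practice.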
  3. Here's a specific comparison of standard sinc vs. B-splines (MQA uses the "triangle", i.e. an order-1 B-spline?): https://www.researchgate.net/publication/3077924_Polynomial_spline_signal_approximations_Filter_design_and_asymptotic_equivalence_with_Shannon's_sampling_theorem
  4. The whole claim of MQA capturing audible data that standard processes miss rests on the idea of Finite Rate of Innovation, related to the sparsity of real-world signals and the field of "compressed sensing". Here is a primer: http://www.commsp.ee.ic.ac.uk/~pld/talks/EPFL13.pdf The sampling kernels from page 11 onwards are what the MQA patent is based on. Such sampling indeed captures the signal in a different way, and the resulting sample streams are not losslessly convertible into one another.
  5. It has everything to do with "apodizing" filters and window functions. You can see the various types here: https://en.wikipedia.org/wiki/Window_function#B-spline_windows
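As an aside, the B-spline windows on that page are just repeated convolutions of a rectangle with itself; a quick numpy sketch (function name and sizes are my own, not from any reference):

```python
import numpy as np

def bspline_window(n, order):
    # Order-0 B-spline window is a rectangle; each higher order is
    # obtained by convolving with another length-n rectangle.
    w = np.ones(n)
    for _ in range(order):
        w = np.convolve(w, np.ones(n)) / n
    return w / w.max()            # normalize peak to 1

rect = bspline_window(16, 0)      # rectangular window
tri = bspline_window(16, 1)       # triangular (Bartlett-like) window
```

Order 1 gives the triangle, order 2 a parabolic-looking bump, and so on toward a Gaussian shape.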
  6. Or it could be that for their new lossy compression to work, they need the DAC to be able to synthesize the non-sinc splines during reconstruction. This means that without the new DAC, the content will sound wrong. Let's look at the two cases: 1) original 352/24 or DSD content, resampled using the novel sampling (which retains more timing information) into a non-sinc 96/24. This 96/24 on a standard (sinc) DAC is going to play MUCH WORSE than normally resampled 96/24, as the basis is wrong. It technically requires a change in the DAC to support this sampling, similar to how a DAC must specifically support DSD. Alternatively, a software decoder can convert this non-sinc 96/24 to 192/24 or 352/24 for playback on a standard DAC. Even on the changed DAC, the compression from 352/24 is lossy, but they claim it retains most of the timing data, compared to standard resampling to 96/24. 2) original analog content, sampled using the novel sampling into 96/24. Again, on a standard DAC this is not going to play well. But they claim that from an analog source it may retain timing information better than 352 or even 768 kHz normal (sinc) sampling. This can probably be converted to 192/24 or 352/24 for playback on a standard DAC using a software decoder, but that conversion will be lossy unless it targets a timing-focused format like DSD. In neither case, though, is there competition with DSD, which was timing-focused from the start. MQA may actually be proof that the mainstream is coming to terms with the fact that standard PCM does not retain enough timing data and DSD was the right way to go.
  7. People, you should at least view the first couple of slides referenced earlier: http://icms.org.uk/downloads/BtG/Dragotti.pdf They seem to bridge the consumer D/A world with some of the new developments in compressed sensing. If it does what is implied, then, at least for new recordings and as long as you don't compare against DSD or 768/24, it can really be beneficial. So if that 96/24 is derived from a much better digital or analog original, you may really hear the difference, as the newly found E/B-spline basis may retain timing data much better than the standard one. It loses in raw dynamic range but wins in timing precision. Moreover, this novel sampling is not the patented part; it was developed in the open by researchers, so the technique can be used by any software, as long as there's a DAC to play it back. This means that, unless MQA walls off access to the decoding of this sampling technique, you will be able to resample any source using the novel technique and play it back on the new DAC. The pitchforks are correct when it comes to the restrictions of DRM, the shenanigans with legacy playback, data hiding in dither, and the hierarchical fidelity levels. But the novel sampling part is not patented, is based on the solid foundations of compressed sensing (though CS started with random bases), has been developed openly, and can easily be tested today.
  8. These slides show the theory quite clearly; if you read only one of these, read this one: http://icms.org.uk/downloads/BtG/Dragotti.pdf Bob references these papers in explaining the novel sampling process. Main: http://bigwww.epfl.ch/publications/dragotti0701.pdf Secondary: http://research-srv.microsoft.com/pubs/69188/sampling.pdf Also: "Causal Reconstruction Kernels for Consistent Signal Recovery" (Fanny Yang, Academia.edu), http://arxiv.org/pdf/0812.3066v1 and http://webee.technion.ac.il/people/YoninaEldar/files/mainFormat.pdf
  9. Thinking about it some more, this "new" sampling method is in principle the same idea as DSD (valuing timing information over bits), only more "convoluted".
  10. Reading up some more on the tech, it seems there is one more trick, which might explain the differences in quality people perceive. Bob is talking about sampling not with square "windows" but with B-splines of order 2 (triangular over 3 samples) and higher. There is a convolution with a triangular or higher-order function during sampling, and interpolation during playback. So even at 44 kHz, each output sample will be based on several (4+) source samples, with different weights (by default pyramid-shaped, i.e. triangular). This is somewhat reminiscent of DSD, though Miska might be able to clarify the similarities and differences better.
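A hedged sketch of that idea (the weights and decimation factor are illustrative only, not MQA's actual coefficients): convolve with triangular weights before decimating, instead of taking a plain square average:

```python
import numpy as np

def sample_with_kernel(x, kernel, step):
    # Weighted local average (the sampling "window"), then decimate:
    # each output sample is a weighted mix of several input samples.
    k = np.asarray(kernel, dtype=float)
    k = k / k.sum()                      # normalize the weights
    y = np.convolve(x, k, mode="same")
    return y[::step]

x = np.sin(2 * np.pi * 0.01 * np.arange(256))
rect = sample_with_kernel(x, [1, 1, 1, 1], 4)     # square window
tri = sample_with_kernel(x, [1, 2, 3, 2, 1], 4)   # triangular weights
```

Both produce the same output rate; they differ only in how neighboring source samples are weighted, which is exactly the kernel choice being discussed.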
  11. Hi, Miska. In fact they only "keep" 13 bits of the original sound. They compress (perhaps lossily; they can claim "lossless" on the basis that it's a new recording) the rest of the high-frequency bands and put them into separate data streams. Bob and his partner created a few years earlier, and presented at AES (http://www.aes.org/e-lib/browse.cfm?elib=7964), a technique to hide data (watermarks/DRM/etc.) inside audio, using the highly compressed data as a pseudo-random dither source, plus some sideband data to allow extraction of the dither-PRNG stream. So they use 3 bits of the original PCM stream plus the 8 extra bits, plus whatever they can get from the dithering technique above, giving them ~10-16 bits per sample = 400-800 kbit/s per channel, i.e. a 0.8-1.6 Mbit/s stereo data stream to put all the compressed data into. Actually, dropping the misguided goal of playing a compressed stream on unsupported devices, your idea of compressing the higher-frequency bands with Vorbis is basically what they're doing. But to keep the "layered" approach to quality, the extra layers are hierarchically divided: the "legacy" 44/16 stream, then a layer adding 22 kHz up to 96/24, then a layer adding 44 kHz up to 192/24, then a layer adding 88 kHz up to 384/24, up to the "top" quality recovered using a special "touchup" layer: the compressed difference between the decoded output of all layers and the original file.
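The bit-budget arithmetic above is easy to sanity-check (the figures are the post's own assumptions, using the 44.1 kHz base rate):

```python
# Back-of-envelope check of the hidden-data budget described above.
fs = 44100                        # legacy base sample rate, Hz
for bits in (10, 16):             # ~10-16 recoverable bits per sample
    per_channel = bits * fs       # bit/s per channel
    stereo = 2 * per_channel      # bit/s for a stereo stream
    print(bits, per_channel, stereo)
# 10 bits -> 441000 bit/s per channel, ~0.88 Mbit/s stereo
# 16 bits -> 705600 bit/s per channel, ~1.41 Mbit/s stereo
```

So "400-800 kbit/s per channel" and "0.8-1.6 Mbit/s stereo" are consistent round-offs of the same numbers.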
  12. Asking about an embedded HTTP server serving a single-format audio stream, its format configured by the user, as an audio output target from HQPlayer. There's nothing complex about serving a stream over HTTP, so there's no need for fancy names like "UPnP" or "media server". The complex part is the math behind HQPlayer, and that's why I'm asking whether Miska would be able to add such an output option. P.S. Many other applications can serve audio streams via HTTP, among them VLC, foobar2000 and others. Asking specifically about HQPlayer; thanks for trying to assist.
  13. Yes, interested in HQPlayer's sampling algorithms. Not interested in a UPnP-style "media server", as only one URL is needed, and it won't ever change its stream type, format or any other parameters. It's basically an audio sink at 192/24 that can be read over HTTP by the Phantom.
  14. Hi. This question is mostly for Miska. Is it possible to get high-fidelity output from HQPlayer via an embedded HTTP server? The Devialet Phantom can be trivially redirected to play an HTTP stream in any supported format (FLAC/ALAC/WAV included). The DAC inside is a TI PCM1798 and it should accept 192/24 input easily (in fact it plays such FLAC files). So the question is whether HQPlayer could have an internal HTTP server which, when queried, answers with a FLAC or raw WAV PCM stream at 192/24, so that HQPlayer upconversion can be enjoyed with the Devialet Phantom? Thanks! P.S. Link to the Phantom HTTP redirection trick: Ongoing: Phantom/Dialog/Spark protocol deciphering and development - Page 2
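For what it's worth, the plumbing side really is simple. A minimal Python sketch (stdlib only; silence stands in for HQPlayer's output buffer, since no such HQPlayer API exists today) of an HTTP endpoint answering with a 192/24 stereo WAV:

```python
import io
import wave
from http.server import BaseHTTPRequestHandler, HTTPServer

RATE, CHANNELS, WIDTH = 192000, 2, 3   # 192 kHz / 24-bit stereo

def wav_bytes(seconds=1):
    # Build a WAV container; a real player would pipe live PCM here.
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(CHANNELS)
        w.setsampwidth(WIDTH)          # 3 bytes = 24-bit samples
        w.setframerate(RATE)
        w.writeframes(b"\x00" * (RATE * CHANNELS * WIDTH * seconds))
    return buf.getvalue()

class WavHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        data = wav_bytes()
        self.send_response(200)
        self.send_header("Content-Type", "audio/wav")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

# To serve on port 8080 (blocking):
# HTTPServer(("", 8080), WavHandler).serve_forever()
```

The point being: the hard part of the request is HQPlayer's resampling math, not the HTTP sink.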
  15. For example, check out the specs on these: LVCMOS/LVTTL Compatible Oscillators