AMP

  1. The platform is fully custom and proprietary, but it does use some open-source software and utilities like the Linux kernel, glibc, ALSA, etc. There's quite a bit more information in Mosaic Control under SUPPORT > About > Open Source.
  2. From the Mosaic User guide: https://dcs.community/t/mosaic-ug-3-2-now-playing/260

     The track format information will allow you to quickly determine the resolution of the current source track as follows:
       • Lossless PCM is shown as “PCM” along with the bit depth and sample rate, regardless of the file format being played.
       • DSD is presented as DSD or DSDx2.
       • Lossy formats (AAC and MP3) are shown as the format and the bitrate (e.g. MP3: 320 kbps).
       • MQA is identified with the authentication indicator and the original sample rate.

     We only identify bitrate for lossy files as that’s the only time it’s useful. If it’s lossless you’re going to get bit depth and sample rate, as those are far more useful. (An illustrative sketch of these display rules follows after this list.)

     @Julius we don’t provide technical support here. If you’d like some help please post on our support site: https://dcs.community/c/support
  3. Yeah, this isn't going to happen. Bitrate is irrelevant for a compressed stream (e.g. FLAC) as all it's telling you is the effectiveness of the compression on that particular file. For an uncompressed stream you can calculate it yourself: bps = bit_depth * sample_rate * number_of_channels, where sample rate is in Hz (44100, not 44.1). (A worked example follows after this list.)

     @Beolab and @KunterK We don't provide technical support here. Please raise your issue on our support forum: https://dcs.community/c/support

     Please provide information as to what you are searching for, what result you are expecting, and what result you are actually getting (screenshots are very useful). For issues with a service like TIDAL it's extremely useful to know what country you are in.
  4. First off, we never called this v2.0 for the Bridge and indeed it won't be v2.0. Probably 1.02. This is a bugfix release for the Bridge FPGA and the same network firmware that was released at the beginning of the month for the other products. It's about to go into verification so it's coming soon (and that's as specific as I'm going to get). There's not a lot that can be done in terms of SQ in the Bridge software. Sure, we could change how it sounds, but then it wouldn't be bit perfect.
  5. Announcing dCS Network Firmware version 398

     We are pleased to announce the release of dCS network firmware version 398 for all of our network streaming products. This version was released earlier this month for Vivaldi Upsampler and it is now available for Vivaldi One, Rossini, and Network Bridge. All of our streaming products will now be running on a common codebase.

     This is primarily a maintenance release and includes a number of bugfixes and stability enhancements in all areas of the software. This release should significantly improve performance and stability when using Roon. The release notes (linked below) contain details of the changes as well as preliminary documentation of new features.

     The update will be available at 9:00AM GMT on 29th October 2018. There are no updates to any of the dCS apps associated with this release.

     As with all of our network firmware releases the update can be initiated from within the dCS app: Configuration > Information > Versions > Check for Updates. You can also update via the embedded web page by selecting the “Device Settings” tab and then clicking on “Check for Updates” under the Internet update section.

     Network Bridge owners will also receive a main board firmware update to 1.01 which is being released to address the issue of the front panel LED not flashing when the network firmware is updated. There are no other changes in Network Bridge 1.01.

     Release notes for network firmware 398 can be downloaded from this link: https://dcsltd.a.cdnify.io/wp-content/uploads/2018/10/dCS-Network-Firmware-Release-Notes-398.pdf
  6. No, it wasn't designed for low bitrate streams. Nothing to worry about. In simplistic terms it makes it easier for the AES/EBU transmitters and receivers to handle high bitrate streams. Dual AES (by definition) only applies to 88.2kS/s streams and above. It will never support anything lower than that. Period. Single AES is more than capable of handling lower rates like 44.1K and 48K. (A small sketch of this rule follows after this list.)

     The older clocks are really good clocks but they have the downside of forcing the user to manually change rates. Sometimes convenience wins.

     Yes, very much so.

     Nope. The two streaming cards are essentially identical. There are some spec differences, but those are meaningless in the real world.

     Nope. Makes no difference in this application. Bartok has the benefit of the streaming card sitting in close proximity to the DAC itself. No messing with cables. No external noise issues. It just works better.

     I keep hearing this rumour out there and I'm not sure who started it. I can assure you that it wasn't dCS. Bartok is just better. Scarlatti is a stellar DAC and I'm not sure if Bartok bettered it. Based on my experience with it to date I'd say that the two are probably equal. As for the others, there's no contest whatsoever. The Bartok wins.
  7. Yes.

     Yes, it should. You'll need to change the output frequency of the clock whenever the base sample rate of the file changes. For 44.1, 88.2, 176.4, 352.8, and DSD the clock needs to output 44.1K, and for 48, 96, 192, and 384 the clock needs to output 48K (see the sketch of this mapping after this list). The Paganini DAC cannot control the clock, and although we're investigating providing this function via the Network Bridge we don't have any information on when (or if) that function will be implemented.

     We're typically using either Transparent XL or Nordost V2. In both cases one cable is at least as costly as the Bridge itself! We don't make any specific recommendations and use those two as those are typically what our dealers and customers are using with our products.

     None. You'll have the ability to play any of the rates supported by the Debussy.

     There is no configuration option for the audio clock in Roon or the dCS app. Roon has a setting called "Clock Master Priority" but that's related to grouped playback synchronization and has nothing to do with the word clock used for decoding.
  8. No, the ref10 only outputs a 10MHz reference. The Bridge needs a word clock whose frequency corresponds to the base sample rate of the file being played (44.1kHz and 48kHz are the most commonly-used frequencies).

     The Paganini master clock is going to show superior performance. The Paganini DAC has an excellent clock circuit, but it is only capable of outputting a reference at 44.1kHz (not 48kHz), so it can't act as a master for all sample rates (it won't work with 48K, 96K, 192K, or 384K).

     If you're going to use the word clock functionality then all of the devices in the digital chain need to be synced to the SAME clock, and that clock needs to provide a reference that corresponds to the base sample rate of the file being played. In the case of the Paganini clock that means that you will need to manually change the clock's output every time you switch playback between 44.1K-based sample rates (including DSD) and 48K-based sample rates.

     Clock cables need to be of good quality, BNC terminated, and have 75 Ohm impedance. The impedance is typically the difficult part in any digital cable as many that say they are 75 Ohm are quite a bit off that spec. There are also two versions of the BNC connector (50 Ohm and 75 Ohm) with the 50 Ohm version being far more common. Best to purchase from a reputable manufacturer, check the specs, and be sure they're using the right connector.
  9. This is a really good point and one of the things that always comes up when decisions like this are made. We're always trying to balance where and how we apply our engineering resources since time is finite and the to-do list is always long. Part of the value proposition of purchasing a dCS product is knowing that you'll see it continuously improved over the life of the hardware (our cycles are typically 6 - 10 years). Yes, the products are expensive, but our customers have learned that they can always look forward to enhancements and new features (which are typically delivered at no cost).

     In the case of MQA we received considerable feedback from existing customers that they wanted the feature implemented. We also received quite a bit of feedback from prospective customers stating that they considered MQA to be an important deciding factor in their purchase. We collected this feedback for years and only made a decision to proceed after it was clear that this was something that our customers wanted to see in our products.

     Rather than take the approach of jumping on the bandwagon and beating the MQA drum, or trying to critique the format and talk our customers out of it, we took a neutral path, as we had the ability to make everyone happy. Those who don't like MQA (for whatever reason) can rest assured that their PCM and DSD content is processed independently of the MQA components. For those who do like MQA (for whatever reason), we're pleased to offer what we feel is the most complete and correct implementation.

     The costs of the implementation were all carried by dCS and we haven't attempted (and won't attempt) to recoup those expenses from customers. If MQA support is used and enjoyed by the majority of our customers then it was well worth the effort and expense. If nobody uses the functionality (or MQA fails as a format) that's OK too. We learned a tremendous amount from going through this process, and the complexity of the integration not only led us to improve some other aspects of our code but allowed our engineers to hone their skills on a really complicated project.

     Ultimately, we respect the intelligence of our customers and are going to let them decide what is important to them. There are lots of opinions and facts circulating in the MQA debate and the market is going to eventually decide winners and losers. In the meantime we delivered something that our customers asked for and many of them have expressed genuine gratitude for us having put in the effort.

     A lot. Thousands of hours. Engineers like solving problems and this one was a really interesting one. Taking a processing algorithm that operates in a very different way than our own and weaving it into our pipeline on demand presented a number of hurdles. Solving all of the problems that were presented made for an interesting and new challenge. We looked at this as an expense (it cost real money to do this) and an investment (our engineers are better at what they do for having gone through it). The investment part will help us deliver new features and products that are really innovative, so regardless of what happens to MQA it was money well spent.

     There are a number of different ways to implement MQA and it can be done quite easily by lower cost DAC manufacturers. There is a cost associated with implementation, but it shouldn't add significantly to the final cost of the DAC. MQA is purely a software solution. Some manufacturers have had to change hardware due to the fact that the MQA software wouldn't run on their previous hardware implementation. For instance, we can't implement MQA on our legacy products as there simply isn't enough room left on the FPGAs. They're full.
  10. This gets touchy given the legal agreements in place, but at the same time I want to be as transparent as I can about what we implemented. MQA pros / cons aside, it was a very challenging development project given the way in which our products work, and we're very proud of our engineers that made it happen. I think that I'm safe in saying that we were given everything that would be required to implement an MQA full decoder as a fully integrated component of our processing platform. We chose the final architecture due to the resource needs of the various MQA-related algorithms. As we're writing code for FPGAs in VHDL, we're not using a language typically used for general-purpose DSP chips or standalone software applications, so we essentially had to start from scratch.

      I have no knowledge of MQA's agreements with other manufacturers so I can't comment on what they are or are not willing to do. What I can say is that the architecture that we proposed was compelling enough to MQA that they elected to provide us with the necessary resources to implement it. Our solution is absolutely unique in the market and we continue to refine the way that it operates as a value-add for our customers. Before that statement raises another question I'll say that we aren't changing the way that the MQA algorithm operates; rather, we're refining the way in which the MQA processing components are handled within the signal path to make transitions as fast and as seamless as possible.
  11. Yes. We considered this a non-negotiable item.
  12. This question came up on another site and our director of product development provided a very thorough response, which I'll include below. As for the headphone section being an afterthought, I can assure you that couldn't be farther from the truth. This product was designed as a headphone DAC from day one. We're occasionally overly conservative in our specs, but the intention was always to have Bartok capable of supporting whatever headphones the customer preferred. Take your favorite pair to a dealer and have a listen for yourself.

      Here is the technical explanation from our director of product development: The Bartok headphone amplifier is heavily optimised for operation into loads of 30 Ohms or greater, but this doesn’t mean that it won’t operate linearly into lower impedances. As the amplifier operates in Class A for all practical purposes, the maximum power dissipation in the output stage is when the amplifier is idle. Driving the amplifier diverts this power dissipation away from the output stage and into the load. As a result, the amplifier is quite tolerant of lower than rated loads. Into very low impedances (let’s say less than 8 Ohms), and at high output levels, there will be an increase in output stage dissipation, distortion will increase a little, and operation may move away from Class A at very high drive, which we define as levels that would not be encountered in any practical situation, i.e. levels that would be hazardous to both ears and headphones! (A simplified numerical illustration of the dissipation point follows after this list.)
  13. I can shed a little bit of light on what we did. The MQA core decode is performed by our network streaming processor via a separate processing pathway. We essentially sniff the content of the incoming stream and route it accordingly: regular PCM goes straight out to the DAC and MQA is routed through the core decoder. Inside the DSP section of the DAC's FPGA code we again sniff the content of the bitstream and assemble the various DSP components accordingly. Regular PCM goes through one set of DSP components specific to the particular sample rate and MQA goes through another. (A high-level sketch of this routing follows after this list.) This added a ton of complexity, as MQA's rendering components are somewhat different in structure than ours, and that results in a lot of moving parts that need to be aligned when the track format changes. We didn't bolt on MQA's code; rather, we implemented their algorithm within our DSP engine and it's applied when needed, but completely out of the signal path otherwise. The end result is a set of completely separate processing pipelines for MQA, PCM, and DSD. No format is given any sort of priority and all are processed with the highest degree of precision that the format will allow.

      We've always stated that we think that MQA is just one of many possible formats that a customer may choose to play. We don't consider it to be better or worse than anything and want our customers to have the flexibility to play whatever they want, knowing that the bitstream is being handled in the best way that we know how. Whether the market fully embraces MQA or not isn't an issue for us. If the uptake grows then we have support for the format baked in. If not, then there's nothing that has to be undone in our products and customers with MQA files will continue to be able to play them. We were able to do this in our products because they're completely software defined and the original designs were specified with lots of processing headroom.
  14. Yes. Not sure I understand the wording of your question. From a hardware architecture standpoint the Vivaldi is quite a bit different than the Rossini and quite a bit more advanced in terms of functional multiprocessing. The software is quite different as well. The difference between the two isn't subtle, and although the Rossini will be getting closer in the near future, the Vivaldi will always be "better" (both subjectively and in terms of measured performance). The Vivaldi platform still has a lot of life left in it. Someday it will get replaced, and as we did with Scarlatti we'll be sure to take care of those people who made an investment in Vivaldi.
  15. Chris covered it in his write-up. In terms of hardware, Bartok has a less complex power supply and a less complex chassis. The real difference will be in the software. As our DAC architecture is software-defined, we can realize performance gains through a simple firmware update. In the case of Bartok the software is very similar to Rossini as it stands today. Rossini will get an update to apply some of the algorithms from Vivaldi.
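
The display rules quoted in post 2 can be summarised in a few lines. This is a minimal illustrative sketch in Python, not dCS code; the function and field names are hypothetical.

    def track_format_label(codec, bit_depth=None, sample_rate_hz=None,
                           bitrate_kbps=None, dsd_multiple=1, mqa_original_rate_hz=None):
        """Build a Now Playing label following the rules quoted from the Mosaic guide."""
        if codec == "mqa":
            # MQA: authentication indicator plus the original sample rate
            return f"MQA {mqa_original_rate_hz / 1000:g}kHz"
        if codec == "dsd":
            # DSD is shown as DSD, DSDx2, ...
            return "DSD" if dsd_multiple == 1 else f"DSDx{dsd_multiple}"
        if codec in ("mp3", "aac"):
            # Lossy formats show the format and the bitrate
            return f"{codec.upper()}: {bitrate_kbps} kbps"
        # Any lossless PCM file (FLAC, ALAC, WAV, ...) is shown simply as PCM
        return f"PCM {bit_depth}/{sample_rate_hz / 1000:g}"

    print(track_format_label("flac", bit_depth=24, sample_rate_hz=96000))  # PCM 24/96
    print(track_format_label("mp3", bitrate_kbps=320))                     # MP3: 320 kbps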
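
Post 3 gives the bitrate formula for an uncompressed stream. A short worked example (Python, purely illustrative):

    def uncompressed_bitrate_bps(bit_depth, sample_rate_hz, channels=2):
        # bps = bit_depth * sample_rate * number_of_channels, sample rate in Hz
        return bit_depth * sample_rate_hz * channels

    # 16-bit / 44.1kHz stereo (CD): 16 * 44100 * 2 = 1,411,200 bps (about 1,411 kbps)
    print(uncompressed_bitrate_bps(16, 44100))
    # 24-bit / 192kHz stereo: 24 * 192000 * 2 = 9,216,000 bps (about 9,216 kbps)
    print(uncompressed_bitrate_bps(24, 192000))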
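
Post 6 states that Dual AES only applies to 88.2kS/s streams and above, while Single AES handles 44.1K and 48K. A tiny illustration of that threshold (Python, not dCS firmware logic; whether Dual AES is actually used for a given rate also depends on the unit's settings):

    def aes_mode(sample_rate_hz):
        # Dual AES only applies at 88.2kS/s and above; Single AES covers 44.1K and 48K.
        return "Dual AES" if sample_rate_hz >= 88200 else "Single AES"

    for rate in (44100, 48000, 88200, 192000, 352800):
        print(rate, "->", aes_mode(rate))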
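
Post 7 maps each base sample rate family to the required word clock output. A minimal sketch of that mapping (Python, illustrative only):

    def required_clock_hz(source_rate_hz=None, is_dsd=False):
        # 44.1K family (44.1, 88.2, 176.4, 352.8) and DSD need a 44.1K clock;
        # 48K family (48, 96, 192, 384) needs a 48K clock.
        if is_dsd or source_rate_hz % 44100 == 0:
            return 44100
        if source_rate_hz % 48000 == 0:
            return 48000
        raise ValueError(f"unsupported sample rate: {source_rate_hz}")

    print(required_clock_hz(176400))        # 44100
    print(required_clock_hz(96000))         # 48000
    print(required_clock_hz(is_dsd=True))   # 44100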
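
Post 12 explains that a Class A output stage dissipates the most power at idle and that driving a load diverts dissipation into the headphones. A very simplified model makes that visible; all numbers here are hypothetical and are not Bartok specifications:

    # Toy model: assume the Class A stage draws a roughly constant power from the
    # supply; power delivered to the load is no longer dissipated in the output devices.
    SUPPLY_POWER_W = 4.0  # hypothetical per-channel figure, for illustration only

    def output_stage_dissipation_w(v_out_rms, load_ohms):
        p_load = v_out_rms ** 2 / load_ohms   # power delivered to the headphones
        return SUPPLY_POWER_W - p_load        # remainder heats the output stage

    print(output_stage_dissipation_w(0.0, 30))  # idle: all 4.0 W dissipated in the stage
    print(output_stage_dissipation_w(3.0, 30))  # driven: 0.3 W to the load, 3.7 W in the stage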
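
Post 13 describes sniffing the incoming stream and assembling a format-specific processing path. The sketch below shows the general idea in Python; it is not dCS's FPGA/VHDL implementation and all names are hypothetical stand-ins:

    def looks_like_mqa(stream):
        # Hypothetical detector: a real implementation would inspect the bitstream
        # for MQA signalling; this toy version just reads a tag.
        return stream.get("format") == "mqa"

    def looks_like_dsd(stream):
        return stream.get("format") == "dsd"

    def mqa_core_decode(stream):
        # Stand-in for the core decode performed by the network streaming processor
        return {**stream, "core_decoded": True}

    def dac_pipeline(stream, mode):
        # Stand-in for the format-specific DSP chain assembled inside the DAC
        return f"{mode.upper()} pipeline <- {stream['title']}"

    def route_stream(stream):
        """Sniff the content and choose a processing path, per the description in post 13."""
        if looks_like_mqa(stream):
            return dac_pipeline(mqa_core_decode(stream), mode="mqa")
        if looks_like_dsd(stream):
            return dac_pipeline(stream, mode="dsd")
        return dac_pipeline(stream, mode="pcm")  # regular PCM goes straight to the PCM path

    print(route_stream({"title": "track A", "format": "flac"}))  # PCM pipeline <- track A
    print(route_stream({"title": "track B", "format": "mqa"}))   # MQA pipeline <- track B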