gordguide

  • Posts

    25
  • Joined

  • Last visited


  • Member Title
    Newbie
  1. " ... .jpg is standard. ..." .jpg is the DOS/Windows standard (maximum 3 characters, due to limitations in the FAT file system). Virtually no other OS limits the file extension to three characters, and no modern OS from anyone does today. On MacOS (prior to OSX) the system, when asked, would create 4-letter extensions by default and read either 3 or 4-letter extensions from other OS's. MacOS did not require file extensions (System1 through System7 and OS8 and OS9); instead metadata with each file contained both a type and creator code, each four letters. When asked to save a file with an extension, Macs used one of those four letter codes (eg a JPEG would be .jpeg file while a Photoshop file in JPEG might choose the PS creator code instead). Three letter extensions are unnecessary today, and developers on any OS should recognize the common three (DOS, FAT) and four (everyone else) extensions. A modified file system called VFAT was introduced in Windows95/NT3x that allowed longer extensions, without which, for example, you would not be able to program with Java on Windows ( e.g. .java + .class are required in the source code files). Pretty much every other OS and File System can either can either use more than 3 characters (including any Microsoft NT-based OS like XP, or Windows7) or doesn't use file extensions at all to identify files (like pre-OSX Macs). I say "pretty much every other OS" but frankly I don't know of a single one that isn't DOS-based that doesn't accept more than three characters ... Sun, OS/2, going back to Multics OS of 1964, the inspiration for modern UNIX-like OS's. OSX and other UNIX/Linux OS's can understand file extensions of any reasonable length but unlike DOS the "." is just another legal file name character, so it does not necessarily represent an extension. OSX can also still recognize type and creator codes (but does not write them to new files) so a file extension is not strictly required. If three letter extensions are a standard (which I don't agree they are; at best they are a convention) they exist only for back-compatibility to Windows3x and DOS. I think we can stop naming files for those two by now.
  2. Barry: Made at the Americ Disc plant in Drummondville, Quebec.
  3. I'm quite sure about it ... I verified it myself from a CD-R ... the $20 ones ... that was made using mastering software ... on or about 1996. When the discs came back from the duplicator I checked again; the resulting manufactured CDs had correct polarity. I actually first noticed something during listening tests, since the band members and a few others were all sent a master made at the same time ... I think he made 8 and selected one after verification of the data to be sent to the duplicator. The rest were distributed. I opened the disc in an audio editor and noticed the inverted polarity (a small sketch of such a polarity check appears after this list). I was quite surprised that I actually heard it, since I had no idea I was sensitive to it, although I knew some people were. I spoke to the engineer about it and he said that it was a requirement of the duplicator, and that the mastering software would always produce such discs, because the glass master is a mirror image, not a copy, of the data on the duplicator's digital source. Studio master (-) --> glass master (+) "father" --> stamper (-) "mother" --> CD (+). Things have changed (a lot) since then, of course.
  4. " ... Other applications get round this limit by setting the sample rate of Core Audio themselves. Maybe it would be fairer to say it's because of how iTunes interacts with Core Audio. ..." Exactly.
  5. " ... Submitted by portfair on Wed, 03/23/2011 - 15:40. gordguide, In your post "Even gordguide, In your post "Even today, getting a copy of the various books from Sony/Phillips....." you've written nothing that I disagree with or that I hadn't already seen elsewhere. But you don't address your statement that "a Redbook CD stores the data multiple times" and so far I've found nothing to support this. By the way I'm assuming that statement means what it says, that the audio data is stored at least twice. Is that what you meant to say? There just isn't enough space on a disc for this to happen. I also took at look at the link you provided and yes there is something seriously wrong with the authors maths! This seems a bit better http://www.usna.edu/Users/math/wdj/reed-sol.htm ..." How about this: " ... The smallest entity in a CD is called a frame, which consists of 33 bytes and contains six complete 16-bit stereo samples (two bytes × two channels × six samples = 24 bytes). The other nine bytes consist of eight CIRC error-correction bytes and one subcode byte, used for control and display. Each byte is translated into a 14-bit word using eight-to-fourteen modulation, which alternates with three-bit merging words. In total there are 33 × (14 + 3) = 561 bits. A 27-bit unique synchronization word is added, so that the number of bits in a frame totals 588 (of which only 192 bits are music). [italics mine] Since there is 747 MB of music, what are the other 396 bits used for? Obviously they are there on the disk ... so the full capacity of data storage on a CD cannot be simply 747 MB. Surely the housekeeping code can't be twice the data of the audio alone, can it? Did you read Pohlmann's chapter via Google Books?
  6. " ... Read what I said... @gordguide please read what I said in the context of the discussion... "sample rate selection is a function of the OS and playback software NOT of the interface". I perhaps should have said "automatic sample rate selection" but as it was in reply to a previous comment that should have suggested what was being discussed. Idead not suggesting that Core Audio limits the maximum sample rate. My comment was in reference to the "correct" sample rate being sent to the DAC. Core Audio and iTunes do NOT automatically select the (output) sample rate based on the file being played. They will resample the audio to match the sample rate selected in the Core Audio MIDI control panel. Other software (Decibel, Amarea, et al) can change the sample rate used by Core Audio as the sample rate of the files being played. Unless I have totally misunderstood the questions in this thread this is what was being discussed. The rest of your post (Class 1 USB audio vs Class 2.0 USB audio) is well known to me: in fact I've made similar comments to yours correcting posts. Eloise ..." Core Audio doesn't do any sample rate selection whatsoever. It only exists as a framework for audio function. There is nothing in Core Audio that prevents automatic sample rate selection, and the Audio Midi utility is not Core Audio. It is just an application like any other. Core Audio is part of the OS, and in contrast while I understand people see a GUI app like Audio Midi Setup as provided by default by Apple as "the OS" it really isn't. Audio Midi Setup could be supplanted by another app that does interface with the OS itself via Core Audio, for example. So, I don't see any limits in the OS itself that prevents automatic sample rate selection. It's just when people say "Core Audio and iTunes" or "Core Audio and Audio Midi Setup" prevent some function that is a possible point of confusion that I feel needs clarification. It's iTunes and Audio Midi Setup alone, which are simple applications, not Core Audio nor the OS, that limit the function.
  7. " ... Submitted by Audio_ELF on Tue, 03/22/2011 - 16:42. Sample rates... @rockrabbit (cool nick BTW). " ... Yes, unfortunately sample rate selection is a function of the OS and playback software NOT of the interface. Eloise ..." Just the playback software and the interface (for OSX). Core Audio has no sample rate limit. iTunes will play 24/192 if the hardware supports it. Audio Midi Setup only displays the sample rates and frequencies that the hardware supports. So, to see 24/192 in the drop-down list, a 24/192 interface must be connected. A firewire or USB Audio Class 2 interface will allow 24/192. The V-Dac is limited to 24/96 due to choices Musical Fidelity made (USB Audio Class 1 device; USB Audio Class 2 is supported in OSX 10.6.4 and above). The OS does matter however if you use Windows, which does not yet support USB Audio Class 2 so a 3rd party driver is required.
  8. " ... Yes, it's purpose is naturally not error correction, but error detection. And the basis is of course that if any single bit in a computer file gets corrupted, then there are much bigger problems in the system than an error in audio file... ..." I don't agree that drive failure is a symptom of a system-wide problem. Google ... who pioneered the idea of avoiding server-class drives and uses commodity drives like the rest of us ... has a great paper on it. All Hard Drives ... including factory fresh ones ... have bad sectors, and a number of spare sectors reserved for replacement of the bad ones. Factory re-mapping of sectors prior to shipping the drive is referred to as P-LIST. Modern file systems also check data and map out bad sectors on the fly (G-LIST), but usually only on the write process. A SMART utility can give you the list of both types of bad sectors on any drive. Drives in fact are constantly going bad over time. It is usually only when the spare sectors run out that the user notices. If you zero all data prior to formatting (or during a re-format), bad sectors are removed at that time rather than waiting for the file system to encounter them while manipulating data. The other issue that can arrive with computer audio is bad RAM, which also goes bad over time, although it is generally much more reliable than HD's. In that case the drive sector is good but corrupt data can be written to the disk. You will usually only notice when you try to read the data. The more stable your system the more likely you will notice strange or suddenly unusual behaviour; if your computer tends to crash a lot it may be misinterpreted as another problem. If you have memory to spare, it may be difficult to diagnose since the bad area may not often be actually used by the system. Just as with hard drives, you can use a utility (eg memtest) to verify correct RAM operation. Some RAM errors are caused by various forms of electrical interference, and are random in nature. A memory utility may pass RAM that is affected by this form. If your system can use ECC memory, and you intend to use it as an audio workstation, it's worth considering. That will eliminate memory errors not related to chip failure. Also, I think we've gone far enough off-topic on this issue, so if anyone wants to further explore storage issues rather than transmission of data to the DAC I suggest starting another topic.
  9. Submitted by Miska on Wed, 03/23/2011 - 01:37. Data errors " ... I don't remember all the details for ALAC, so I talk only about FLAC. Since it is still purely block-based it is less susceptible than normal file compression and encryption schemes, where the entire file is handled like a stream and errors can cause all of the remaining file to become unreadable. ..."
     Well, I thought we were talking "bit-perfect" here. An unreadable compressed block is nothing more than a smaller version of a whole unreadable encrypted file. Data in that block will be lost forever should an error occur.
     " ... The good side is also that a FLAC file contains an MD5 checksum to make it possible to check for errors. Unlike WAV files... ..."
     The checksum is a "go/no go" system. It cannot fix errors; it can only validate that the original data remains unchanged. It is truly a bit-for-bit check, however. (A sketch that reads the stored MD5 appears after this list.)
  10. Even today, getting a copy of the various books from Sony/Philips requires payment of a fee, and might involve a non-disclosure agreement (NDA). However, the Redbook standard (applicable to .cda or CD-DA, i.e. a music CD) has been embodied in a number of standards, e.g. IEC 60908. There are ... *cough* ... *ways* ... *cough* ... to get that document if you search Google. I personally have never read it, though. I have a copy of The Compact Disc Handbook by Ken Pohlmann, and it distills the entire process into reasonably understandable terms. Do a Google Books search for that title, and start at page 60.
      You could also search the term "Cross-Interleaved Reed-Solomon Coding" (CIRC), which describes the error correction used on CDs. The first result from such a search I did gives the following in the body: " ... CD players uses Cross-Interleaved Reed-Solomon Coding, or CIRC. This begins by taking the 24 8 bit words in [sic] encoding them in a RS(28,24) code. With 4 parity check symbols, it is 2 error correcting. The data is then interleaved. This process distributes the information from this frame over 109 frames. This allows errors on a large portion of the disk to be distributed over many small parts of the disk. ..." Note that the above describes only one of the error-correction routines; there are more.
      If you simply want a reference that shows the error-correction data actually exists somewhere on a commercially manufactured music CD, look at: http://www.herongyang.com/CD-DVD/Audio-CD-Data-Structure.html ... and note that the math is sound and uses format data from publicly available documents, but the author's (perfectly good) calculations confuse him where he determines that a CD apparently contains 2,287 MB of data. Note he also determines that the same disc contains only 747 MB of "music data only", and that the total information is about 3x the actual music information.
      Somewhat off topic, but relevant nonetheless, is the subject of lossless encoding. Certain methods exist specifically for lossless audio compression. A simplified summary of lossless encoding is that it completely removes redundant data, which makes a FLAC or ALAC file more susceptible to (uncorrectable) data errors.
  11. Submitted by PeterSt on Tue, 03/22/2011 - 07:17 " ... Wrong track ? [snip] In the end there is one method only : loop back the track into a digital recording means (no ADC), and check for all to be equal to the original file (after finding the common denominator offset). ..." Leaving aside, for the moment, that there may be other means to test the data stream's integrity, I don't see that method as equivalent to the HDCD flag check, which lets the user check during real-time playback and limits the variables to the systems under test. That is most useful when something is wrong ... either method is fine when everything goes right. Although the HDCD lock uses a relatively brief set of data (pretty much a requirement for real-time validation), when the lock fails mid-stream it's hardly subtle in my experience ... you won't need the "light" going out to notice it.
  12. Submitted by portfair on Tue, 03/22/2011 - 04:17 " ... "you can be (almost) sure" - so in fact you can't be sure! ..." I guess it depends on what you want to be sure of. The error will always be in the negative direction ... in other words, if the HDCD flag can be read, it's bit perfect; if the flag cannot be read, the indication is that it's not bit perfect, even though there is a tiny chance it still is.
      " ... "a Redbook CD stores the data multiple times in multiple areas of the disk to provide robust error correction" - that's not correct is it? ..." You can easily look up multiple papers on the CD format if you want; there must be hundreds, if not thousands, online. The following is just a brief summary. The raw data rate from a CD (i.e. what is read by the laser) is 4.3218 million bits per second. Of those, 1.4 million are the basic audio data. The rest is subcode, EFM transcoding overhead (how the data is actually stored on a CD), and, mostly, redundancy added to the audio data for error correction. (The data-rate arithmetic is worked through in a sketch after this list.)
      Note that with CDs, "error correction" means perfectly corrected, with zero loss of data in real time ... "bit perfect" output if you will. Uncorrectable errors (E32 errors) fall under error concealment. The Redbook standard dictates that the Block Error Rate not exceed 220 of the 7350 blocks per second (CD players can usually correct a somewhat higher error rate than that, and many manufactured discs show higher error rates than Redbook allows; the highest quality CDs have BLERs of about 100 per second). The Sony/Philips Redbook CD specification provides for complete and perfect correction during real-time playback ... absolutely perfect recreation of the missing bits ... of burst read errors up to 4000 successive bits.
  13. Submitted by portfair on Mon, 03/21/2011 - 18:37 " ... the HDCD indicator on the Berkeley Audio Design Alpha DAC is a validation of bit perfection for normal playback of music..." I realise you're referring to data from a computer when you make this statement but I'm having a problem making sense of it. [snip] ..."
      I hadn't actually considered HDCD until Chris mentioned it ... it's not really been on my radar since I no longer use a conventional disc player ... but it made perfect sense once he reminded me. It's in the nature of how HDCD encoding works. It has to be added at the mastering stage in production, prior to stamping out the CD, and it operates at the least significant bit (LSB), where only dither would be on a conventional CD. HDCD encoding is supported by some computer players (e.g. Windows Media Player), but regardless, assuming the rip is good, it will still be among the music data on any computer. If the LSB is bit-perfect ... and the HDCD indicator won't light if it isn't ... you can be (almost) sure there's no alteration of the data. The level of certainty isn't 100%, but it's very near 100%, to the point that if two different HDCD-encoded discs get verification, that's enough to put aside any statistical uncertainty. Perhaps the level of certainty is even greater when it comes to a digital file stored on a HD, since a Redbook CD stores the data multiple times in multiple areas of the disc to provide robust error correction, something that doesn't really exist with a music file on your HD.
  14. Thanks for the info, Chris. My comments were more specifically directed at the OP's description of sample rate and frequency displays on "most HT receivers". That an HDCD flag on a disc player or DAC would be a good validation of a real-time stream makes perfect sense, provided it's used with HDCD-encoded CDs or rips from same. Unfortunately most CDs are not encoded, but many are, and if validation of your process is all you need, playing one such disc should be enough. Since the HDCD intellectual property (IP) is now owned by Microsoft (they bought Pacific Microsonics), I wonder what that means for the future of the process and licensing.
  15. " ... Submitted by noogle1 on Mon, 03/21/2011 - 15:15 Most AV receivers ...isn't that 'frequency perfect'? Surely each sample has to have the right data to be bit perfect? The frequency display can't show that. ..." Exactly (except maybe the 'frequency perfect' part ... not sure I would call it that, but that's unimportant). To determine whether something is "bit-perfect" ... regardless if you're talking about comparing a rip or a data stream ... you need to verify the validity of the data by comparative testing. An LED on a DAC or some other indication isn't any validation at all, it's just telling you it's locked on the data stream and what sample rate and bit depth that lock is at ... it tells you nothing of the accuracy of the transfer of the specific data on a "bit-for-bit" basis. It's not really possible to verify in a real-time stream. Perhaps the best you can say is it "should be" based on predictive analysis.