anwaypasible

  • Posts

    115
  • Joined

  • Last visited

  • Country

    United States

Retained

  • Member Title
    more than 30 ways to decode

  1. yet given the length of this thread & how much balderdash the quantifiable details are, more is done here to demoralize wishful thinking at a place meant to build the industry than there is profit making it to your stocks & bonds.
  2. sensitive to contrast easily sums that up.
  3. if it was a snake, it would bite you in the nose - ever heard of that? anyways, your hateful ostracizing of my character is known as a hate crime & you'll be treated as such. i'm german, irish, canadian, indian - that makes me more of an english speaker than you (and probably why you are confused being spitefully antagonistic)!
  4. a transducer is defined by a coil of copper & a magnet. going by your way of thinking, a laser uses an optical sensor transforming the signal directly to digital - you are thus wrong again. when the frame of mathematical equation A fits into mathematical equation B, there is zero chaos. speakers can be perfect, but you are demanding that because you don't know how they work, they must not exist. you are an unjustified hateful skeptic & you'll be treated as such.
  5. while your point is valid, brains can hold the virtual ideal & listen to the room interactions to build a comparison - with such low bitdepths of convolution it remains more than a chore to do. then factor in how the end-listener doesn't know what it is supposed to sound like - they aren't getting a chance to do the extra workload & thus fall back to the other half of the argument, where the room interactions are going to be the mark of the final verdict. though to mix things up a little bit, even in a room that luckily has a chip with a bitdepth high enough to fully remove the room's echo, the soundwaves continue to traverse the course of the room in the time domain, continuing to utilize virtual speakers & create a soundstage.
  6. you seem to be a non-believer of weightless possibility. makes me think levitating magnets freak you out.
  7. pointing out the fact that headphones are single full-range drivers should quell that thought.
  8. a laser microphone isn't a transducer. any microphone that records audio at a higher distortion percentage than the microphone's limit is capturing the signal perfectly. anyways, i've got the way nuclear speakers work on my facebook page - the reverse holds true to build a nuclear microphone, & the only reason nuclear speakers are built is because they reach the mecca for more than one room size (speakers are limited to one specific room size). seems odd that you put that much weight towards a situation that can backfire - such as a microphone element that is magnetically biased to exaggerate all movement (or not) & is held at a gravitational freedom of static bias. because when inertia can meet neutrality, you claimed it isn't possible; but all that really tells me is the word phenomenon holds true to its typical awe-struck glare people give when they experience one. i might as well go on to say, why did they ever bother to create microphones & speakers if 'no transducer will sound exactly like the source' .. right? because then the audio we hear out of the speaker doesn't resemble anything like what was supposed to be recorded. i guess by your way of thinking we all enjoy listening to gibberish noise from the speaker cones.
  9. a laser microphone works by simply using chorus down the beam to rotate back a difference that ultimately depends on the sensitivity of the input sensor. it's like having a sword & anything that touches it (including the air) is going to cause the chain-linked web to deviate from default & that deviation is audio. ..somebody said something about mics being the weak point.
  10. lol, it isn't the impulse response file's fault either. i like to think about quantization & audio, because i know what must be stripped away from the audio at lower bitdepths is the added sound of air, which sounds like spaciousness that can re-create an atmospheric pressure to re-live the night (minus the smells, though that's just another bitdepth problem). again, we started with midi sounds & those sounds come directly in front of the microphone - there is zero air of distance between instrument & microphone. then increase the bitdepth & you get quantified frame space of allocated air. it translates to a literal distance limit, & thus the distortion leaving that allocated space trails off. but this brings us to dither. the reason why dither gets confused is because of the act of going from a higher bitdepth to a lower bitdepth, & people end their thought or discussion there. it doesn't matter if the distortion trails off in a logarithmic or linear fashion, all that matters is the data sets are known. if i started off at a bitdepth of 1024, i could record a single click & the dithering would traverse that bitdepth from 1024 all the way down to 16bits, & it would be the exact same 16bit sample if i re-ran the process of lowering the bitdepth from 1024 down to 16bits time and time again. that means i can also dither up from 16bits to 1024 with 100% accuracy every single time. the same can be done for video data too, so don't lose those photos simply because the resolution is low. **edit** oh, the reason why i like to think about it is because - well, let me give you an example. imagine i wanted to pull a prank by taking a photo of the entertainment center & then removing it, only to replace it with a full sized printed picture that gives the 3d illusion of the entertainment center still being there - but there's something wrong that is costly.
if that picture had plus or minus the truth of light from the ceiling fan hitting it, the 3d illusion would appear fake. but imagine if they were low on red ink & offset the entire picture by some shade of red - then totally replaced all of the red by using a little red light added to the ceiling fan. it could be done & it isn't entirely impossible, as long as the light absorption of the paper is included in the calculation of the final 3d rendering that is adjusted for the little red light being shined onto it. i think of the same when removing air using quantization distortion & then putting all or some of the air back into the final rendering - simply because it is going to be there in the room anyways. if you know the hue of the saturation, you can process the track offset with the opposite color & then when the two ends meet - the objective final color is met. that goes the way of using higher bitdepths & bands traveling to specific areas, because the atmospheric pressure adds to the reign of the song. (oh, don't forget time of year too)
  11. pre-ringing to me is when the quantified airspace of what the convolver frames has a gap with the frame of normal allocated space. that can only happen with, say, 16bit audio going through a 24bit convolver, because thanks to the size difference between the two there is room for a gap (it doesn't work/happen when staying within the same bitdepth). or, another way to subjectively comprehend what i am attempting to say: imagine you've got an audio track with phase that varies from 1 through 180 degrees. if you run the audio file through a filter that processes all of the phase down 3 degrees, there will be some spots in the audio file that are at -2 degrees, & when those samples attempt to play from the dac, it clips, causing a tiny squelch sound we perceive as pre-ringing. it's because going beyond 0 isn't possible, as the frame only exists as 0-360 (or cut it up more than 360 if you want). it isn't the convolver's fault - it is the fault of the processing scheme done to the impulse response file.
  12. what if a person was told there are details coming from a set of speakers they can't hear. that person could prefer those details exist from the system despite never hearing them - wanting the details to exist for other people, or simply to exist in the air space of their presence together. then it would thus be 'needing to know' to hold a preference rather than 'needing to hear'.
  13. because the pieces are made for some sort of shelf life; they aren't made fresh & ready to be used immediately.
  14. if you want to play around with extreme crossover points, this has 70 something dB slopes: http://www.tokyodawn.net/tdr-nova/
  15. only when you do, for example, a ten band equalizer by ear one band at a time. in the end your phase & timing was good individually, but you'll never hear it because of the room's reflections. use a microphone to hold constant to a specific period in time (cycle the same rate) & the amplitude becomes visible. equalization can happen for any sweet spot - but any uncalibrated spot cannot be imagined. **edit** plus there is no telling what portion of the room the person is listening to, as the sinewave is just rumbling away in the room while adjusting an equalizer knob, & that point proves tried & true why a calibrated microphone is better than adjusting by ear. **edit** though some say make a tail & then move the line down with pink noise until the sound is diminished (the audio turns the room to heat) as much as possible - that is how it is done without a mic on a 31 band.
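the convolution step discussed in post 5 can be sketched in a few lines; the three-tap impulse response below is a made-up toy, not a measured room response:

```python
import numpy as np

# toy impulse response (an assumption, not a measured room) applied to a
# dry click via convolution - the basic operation a convolver performs
dry = np.array([1.0, 0.0, 0.0, 0.0])   # "source" signal: a single click
ir = np.array([1.0, 0.5, 0.25])        # made-up room impulse response
wet = np.convolve(dry, ir)             # the click as this "room" would color it
```

because the dry signal is a unit impulse, the output simply replays the impulse response - which is why impulse responses characterize a room in the first place.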
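the repeatability claim in post 10 can be checked for the deterministic part of the process: quantizing the same signal to 16 bits by plain rounding (assuming no randomized dither is applied - real dither adds noise) gives bit-identical output on every rerun:

```python
import numpy as np

def quantize(signal, bits):
    """reduce bitdepth by deterministic rounding (no randomized dither)."""
    levels = 2 ** (bits - 1) - 1        # largest positive code at this depth
    return np.round(signal * levels) / levels

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 1000)        # stand-in for a recorded click

a = quantize(x, 16)
b = quantize(x, 16)
same = np.array_equal(a, b)             # rerunning the reduction matches exactly
```

this only sketches the downward direction being deterministic; the upward direction is not shown here.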
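the arithmetic in post 11 can be written out directly: shifting every phase value down 3 degrees pushes any phase that started below 3 degrees past the bottom of the 0-360 frame (the phase values here are hypothetical, chosen to span the 1-180 range the post describes):

```python
# hypothetical phase values in degrees, spread across 1-180 per post 11
phases = [1, 2, 45, 90, 180]
shifted = [p - 3 for p in phases]             # filter moves all phase down 3
out_of_frame = [p for p in shifted if p < 0]  # values that left the 0-360 frame
```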
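the measurement idea in post 15 - hold the microphone to one specific cycle rate & the amplitude becomes visible - can be sketched with a single-bin DFT; the sample rate, tone frequency, & synthetic capture buffer are assumptions standing in for a real calibrated mic:

```python
import numpy as np

fs = 48000                                        # sample rate (assumed)
f = 1000                                          # test-tone frequency (assumed)
n = np.arange(fs)                                 # one second of samples
capture = 0.5 * np.sin(2 * np.pi * f * n / fs)    # stand-in for the mic buffer

# correlate the capture against the known cycle rate (a single-bin dft)
bin_ = np.sum(capture * np.exp(-2j * np.pi * f * n / fs))
amplitude = 2 * np.abs(bin_) / len(capture)       # recovers the tone's amplitude
```

locking onto one known frequency like this is why a measurement mic can expose a band's level that ear-only adjustment in a reflective room cannot.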