
ADC/DAC kernel correction aka deconvolution


jabbr


Something we wish to remove? No.

 

"Blurring" in playback is due to insufficient sorting of the consumer's chain; blaming recordings is an easy out, and means we can rest on our laurels, waiting for someone else to "fix things".

 

Ultimately, the Big Answer for people who want complete control of their source is full-blown unmixing of the recording - stripping out and separating every strand in the mastering, and then putting it back together again any way you want - every listener is a sound mastering engineer as well. It's being done now for professional purposes, in a limited way - throw enough computing power at the job, with good algorithms - job done! Of course, it's a long way from being really useful as yet - but give it time ... ^_^.


Mikes have a character, and deliberately so. It's part of the palette and an element of the creative process in the studio. Singers often search for a long time until they find the right microphone for their voice. For guitars, two or more mikes are the norm. 15 or more on a drum set are not uncommon, often very different ones.

 

Trying to ‘deblur’ mikes would ruin that and make little sense.

 

The whole idea rests on a preconceived notion that an accurate representation of a reality is possible. There is no reality in recording; it's all part of the game of creating an audible product. It's all aesthetics - in the double sense of the word.

1 hour ago, mcgillroy said:

Mikes have a character, and deliberately so. It's part of the palette and an element of the creative process in the studio. Singers often search for a long time until they find the right microphone for their voice. For guitars, two or more mikes are the norm. 15 or more on a drum set are not uncommon, often very different ones.

 

Trying to ‘deblur’ mikes would ruin that and make little sense.

 

The whole idea rests on a preconceived notion that an accurate representation of a reality is possible. There is no reality in recording; it's all part of the game of creating an audible product. It's all aesthetics - in the double sense of the word.

 

There is no requirement to deblur. It would be implemented in software, e.g. HQPlayer or A+, and the decision to apply a deconvolution kernel would be up to the implementation. This is discussed as providing the same supposed benefit as MQA, but in an open/non-proprietary fashion.

Custom room treatments for headphone users.

35 minutes ago, bibo01 said:

How is that "aberration" going to be calculated? Every publisher has to stick to the same method.

The list of corrections would be published by microphone manufacturers and recording studios?

 

The basic technique is to compare a known signal against a recording of a known signal.

 

In astronomy and optical microscopy, the analogous concept is the "point spread function": http://web.ipac.caltech.edu/staff/fmasci/home/astro_refs/PSFtheory.pdf

 

... but more important is to compare known vs. recorded ... both are converted to the frequency domain, and then a simple division yields the deconvolution kernel. So, for example, a known low-phase-error sine-wave reference signal is the equivalent of an optical point source. Same idea as an impulse response. Same idea as a room-correction kernel.
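As a minimal sketch of that recipe (the signal names, the FIR "system", and the eps regularizer below are all my own illustration, not anyone's actual implementation):

```python
import numpy as np

def deconvolution_kernel(reference, recorded, eps=1e-9):
    # divide the spectrum of the known reference by the spectrum of
    # its recording; eps avoids division by near-zero bins
    return np.fft.rfft(reference) / (np.fft.rfft(recorded) + eps)

def deblur(signal, kernel_f):
    # apply the kernel: multiply in the frequency domain, inverse FFT
    return np.fft.irfft(np.fft.rfft(signal) * kernel_f, n=len(signal))

n = 1024
# use a unit impulse as the known reference ("same idea as impulse response")
reference = np.zeros(n); reference[0] = 1.0
# simulate the blurring system as circular convolution with a short FIR
h = np.zeros(n); h[:3] = [1.0, 0.5, 0.25]
blur = lambda x: np.fft.irfft(np.fft.rfft(x) * np.fft.rfft(h), n=n)

recorded = blur(reference)          # the recording of the known signal
kernel_f = deconvolution_kernel(reference, recorded)

music = np.random.default_rng(0).standard_normal(n)
restored = deblur(blur(music), kernel_f)   # blurred, then deblurred
# in this noise-free toy, restored matches the original to numerical precision
```

In practice the division has to be regularized much more carefully (e.g. Wiener deconvolution), since real recordings contain noise that a naive inverse filter amplifies.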

 

What is produced is a deconvolution kernel (there are different derivation techniques, including estimation), and folks who have access to the equipment (microphones etc.) can publish deconvolution kernels ... no need for the manufacturer to do it.

 

Is this necessary? It would do everything that MQA claims in terms of "deblurring" because ... that's how we deblur ;) 

 

Now, what is super cool is that HQPlayer can run a deconvolution on a music stream in realtime!*** That is what makes @Miska's software so cool in my view (and in the SDM domain, no less) ... back when I was doing this in the 1980s, I can assure you we were not doing it in realtime.

 

*** to be clear I don't know the limits of this capability but @Miska could comment

9 hours ago, bibo01 said:

Is what you are suggesting similar to professional photo programs (e.g. Lightroom) where you can insert and adjust for the particular camera/lens employed in the picture?

Yes. However, what MQA claim to do is equivalent to correcting a picture that is actually a composite of 100 photos taken with different cameras.

49 minutes ago, mansr said:

Yes. However, what MQA claim to do is equivalent to correcting a picture that is actually a composite of 100 photos taken with different cameras.

MQA claim that in the early days of digital recording there were only a small number of ADC models in use in studios, so it is fairly easy for them to model these and figure out which one was used on a particular recording; essentially, they have an algorithm that "reads" the output and makes an educated guess when the equipment used in the recording isn't known.
For more modern recordings, they claim the ADCs used are generally known, so they can model them and apply the correct "deblurring".

Main listening (small home office):

Main setup: Surge protector +>Isol-8 Mini sub Axis Power Strip/Isolation>QuietPC Low Noise Server>Roon (Audiolense DRC)>Stack Audio Link II>Kii Control>Kii Three (on their own electric circuit) >GIK Room Treatments.

Secondary Path: Server with Audiolense RC>RPi4 or analog>Cayin iDAC6 MKII (tube mode) (XLR)>Kii Three .

Bedroom: SBTouch to Cambridge Soundworks Desktop Setup.
Living Room/Kitchen: Ropieee (RPi3b+ with touchscreen) + Schiit Modi3E to a pair of Morel Hogtalare. 

All absolute statements about audio are false :)

6 hours ago, bibo01 said:

 

Isn't this very similar to what a calibration file for measuring mics is?

No, although I don’t know the limits of what mic calibration files are used for. 

 

Deconvolution is classically a frequency-domain processing operation. @Ralf11's reference to computational photography is very apropos.

 

As an example that I'm very familiar with (and which is widely referenced), imagine a 10 µm microsphere imaged through optics. You will see the sphere at the center, surrounded by ripples. Now imagine a 3D stack of images at different focal offsets. The ripple/diffraction pattern will vary from slice to slice.

 

Now imagine an imaged structure, e.g. a chromosome or another intracellular structure. It will be blurry.

 

Take both image sets, along with a model corresponding to the known 10 µm sphere, and transform them into Fourier space (the frequency domain). Divide the model by the microsphere image to derive the deconvolution kernel. Multiply the cellular image by the kernel and transform the result back into the spatial domain.

 

The result will be sharpened. You might, for example, visualize DNA supercoils and other macro molecular structures.

 

This is deblurring.
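A toy 2D version of those steps (the Gaussian PSF, the image sizes, and the regularizer below are invented for illustration; real microscopy deconvolution must handle noise far more carefully):

```python
import numpy as np

n = 64
yy, xx = np.mgrid[:n, :n]
# invented optics: a small Gaussian point-spread function as the "blur"
psf = np.exp(-((xx - n // 2) ** 2 + (yy - n // 2) ** 2) / 2.0)
psf /= psf.sum()

def blur(img):
    # imaging through the optics, modeled as circular 2D convolution
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))

# step 1: image a point source (playing the role of the 10 µm microsphere)
point = np.zeros((n, n)); point[0, 0] = 1.0    # ideal model of the sphere
observed = blur(point)                         # what the optics record

# step 2: divide the model by the observation in Fourier space -> kernel
kernel_f = np.fft.fft2(point) / (np.fft.fft2(observed) + 1e-12)

# step 3: multiply a blurry image of a structure by the kernel, transform back
structure = np.random.default_rng(1).random((n, n))  # stand-in "chromosome"
sharpened = np.real(np.fft.ifft2(np.fft.fft2(blur(structure)) * kernel_f))
# in this noise-free toy, sharpened recovers the structure almost exactly
```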

 

Similar operations can be used for audio in both the spatial and time domains with suitable recordings, but we are limiting ourselves to the time axis for the purposes of this discussion (MQA claims only temporal deblurring).

1 hour ago, pkane2001 said:

 

PSF in optics is equivalent to an Impulse Response in audio. One can deconvolve using a properly derived IR.

 

An IR can be derived by capturing a Dirac pulse or from a sine frequency sweep. An IR contains more than just the frequency correction that would normally be in a mic calibration file: it also captures timing errors, reflections/reverb, and other frequency, amplitude, and timing errors. I call it a fingerprint of the system.

 

Right!

 

1 hour ago, pkane2001 said:


As an example of the opposite effect of re-blurring (not de-blurring!) I've captured IR from my speaker system and then applied it to headphone playback through convolution. The result was a much more spacious, reverberant sound that makes my headphones sound a lot more like my speaker system in my listening room.

 

Yes!
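For reference, that re-blurring step is just convolution with the captured IR; a minimal mono sketch (the array names and toy IR are mine, not an actual captured response):

```python
import numpy as np

def apply_ir(signal, ir):
    # convolve a dry signal with an impulse response
    # (fast convolution via zero-padded FFTs)
    n = len(signal) + len(ir) - 1
    return np.fft.irfft(np.fft.rfft(signal, n) * np.fft.rfft(ir, n), n)

dry = np.random.default_rng(4).standard_normal(1000)
# invented toy IR: direct sound plus two decaying "reflections"
ir = np.zeros(600); ir[0] = 1.0; ir[180] = 0.4; ir[420] = 0.15
wet = apply_ir(dry, ir)   # `dry` as heard through that system
```

With a real IR captured from a speaker/room system, `wet` is the "spacious, reverberant" version of the headphone signal described above.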

 

What is interesting to me about this possibility is not merely temporal deblurring but spatial deblurring, i.e. "sharpening" of the soundstage, and transforming between different numbers of inputs and outputs, i.e. multiple microphones, multiple speakers.

 

1 hour ago, pkane2001 said:

 

I would be very surprised if a well constructed mic and ADC system would require something like a deconvolution to 'deblur' it, but I've been wrong before ;) 

 

I think this is what MQA is promising with "temporal deblurring", and yes, it's entirely unclear to me that this would be a benefit in well-constructed systems. I'm not really concerned about microphones (which, as you and others say, can give an artistic effect); rather, if an ADC had high jitter, and thus "widening" or "blurring" of peaks, then this would be a way to sharpen impulses, peaks, etc.

 

Another area, probably the most significant in terms of temporal "blurring", is the use of certain filters that, as has been mentioned, cause ringing.

 

1 hour ago, pkane2001 said:

 

Then again, a valid point was made earlier in this thread that a performer/artist often picks microphones based on their sound. By deconvolving it, you'd be destroying some of the original intent of that artist.

 

Yes, and many types of distortion are intended, such as the distortion of an electric guitar's tube amp. Deconvolution is a tool. The point here is to shed light on the techniques, which are hardly proprietary to MQA.

1 hour ago, mansr said:

It still doesn't explain how they can "deblur" a mix made from dozens of tracks.

 

a) is there blur?

b) access to the source tracks for deconvolution prior to remastering would be best -- if it were actually necessary, but this isn't the promise of MQA

c) the deconvolution, if needed, could be written to a 24/192 FLAC, for example, which has enough overhead to resolve it -- the benefit of hi-res here is that there is enough redundancy of information (overhead above 22 kHz) to enable transforms without loss of real information.

d) I view this as essentially a remastering/DSP operation, and there is really no reason for the DAC to know which corrections have been applied.

3 hours ago, mansr said:

Yes. However, what MQA claim to do is equivalent to correcting a picture that is actually a composite of 100 photos taken with different cameras.

 

Yes ... these techniques are decades old and taught to undergrads these days ...

 

This talk has great photos & diagrams!

https://graphics.stanford.edu/talks/compphot-publictalk-may08.pdf

 

 


There are a couple of aspects to this...

 

1) The source may consist of a mixture of material at different sample rates; an example of such a recording is Pink Floyd - The Endless River

2) For older material, the ADCs used are frequently not known

3) In very modern production, it is not unusual to have loops of ADC + DAC + ADC, with an analog mixing desk and effects in the path (essentially the same workflow as with analog tape, but using a digital recorder with ADC/DAC instead)

4) A multi-track mix has tracks coming and going, with changing relative levels, and the number of source tracks doesn't stay consistent over the length of the track

5) A multi-track mix has EQ, compression, noise gates, etc. applied between the ADC and the mixdown

6) At the mastering stage, more EQ, compression, etc. may be applied

7) At the mastering stage, sample rate conversion may take place

 

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers

4 hours ago, Miska said:

There are a couple of aspects to this...

 

Yes, of course -- it's entirely unclear that "deblurring" is necessary, and particularly in heavily processed productions it's also unclear what that would even mean.

 

My question was whether HQPlayer's FIR filtering capability (used in room correction) could be applied in situations where an impulse response can be defined and where "deblurring" would be meaningful. I'd assume this works in a single dimension (time), but is there a way to use both channels for, e.g., the 2D (spatial) case?

Custom room treatments for headphone users.

Link to comment
10 minutes ago, jabbr said:

Yes, of course -- it's entirely unclear that "deblurring" is necessary, and particularly in heavily processed productions it's also unclear what that would even mean.

 

My question was whether HQPlayer's FIR filtering capability (used in room correction) could be applied in situations where an impulse response can be defined and where "deblurring" would be meaningful. I'd assume this works in a single dimension (time), but is there a way to use both channels for, e.g., the 2D (spatial) case?

 

For replacing an ADC or SRC impulse response, I think apodizing upsampling filters are a good choice and work well. That applies especially to "1x rates", meaning 44.1k/48k, where the anti-alias filter's effect is typically strongest.

 

Yes, the convolution engine is very generic, and you can do all kinds of 2D/3D things too when you use the Matrix processing feature. People regularly use it to process different kinds of cross-feed, because you can take any source channel, process it through the convolution engine, and mix it to any output channel. At the moment you can have 32 such virtual pipelines, but that is just an arbitrary limit that can be raised if necessary.

 

19 minutes ago, Miska said:

 

For replacing an ADC or SRC impulse response, I think apodizing upsampling filters are a good choice and work well. That applies especially to "1x rates", meaning 44.1k/48k, where the anti-alias filter's effect is typically strongest.

 

Yes, the convolution engine is very generic, and you can do all kinds of 2D/3D things too when you use the Matrix processing feature. People regularly use it to process different kinds of cross-feed, because you can take any source channel, process it through the convolution engine, and mix it to any output channel. At the moment you can have 32 such virtual pipelines, but that is just an arbitrary limit that can be raised if necessary.

 

 

Thanks. The ability to do generic 2D/3D convolution is a feature that folks here don't seem to appreciate. "Temporal deblurring" isn't groundbreaking, and MQA probably doesn't deliver on that promise for the reasons above; even if it does, it's something that could be replicated with well-known and well-understood techniques.
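The "any input channel, through a kernel, to any output channel" idea can be sketched as a convolution matrix. The kernels and routing below are invented for illustration (a trivial cross-feed), not HQPlayer's actual internals:

```python
import numpy as np

def convolve(x, h):
    # fast linear convolution via zero-padded FFTs
    n = len(x) + len(h) - 1
    return np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(h, n), n)

def matrix_process(inputs, kernels):
    """inputs: {name: 1-D array}; kernels: {(src, dst): 1-D array}.
    Each (src, dst) pair is a virtual pipeline; outputs sum over inputs."""
    outputs = {}
    for (src, dst), h in kernels.items():
        y = convolve(inputs[src], h)
        if dst in outputs:
            acc = np.zeros(max(len(outputs[dst]), len(y)))
            acc[:len(outputs[dst])] += outputs[dst]
            acc[:len(y)] += y
            outputs[dst] = acc
        else:
            outputs[dst] = y
    return outputs

# toy cross-feed: direct path plus an attenuated, delayed opposite channel
direct = np.array([1.0])
cross = np.concatenate([np.zeros(8), [0.3]])  # 8-sample delay, attenuated
L = np.random.default_rng(2).standard_normal(256)
R = np.random.default_rng(3).standard_normal(256)
out = matrix_process({"L": L, "R": R},
                     {("L", "L"): direct, ("R", "L"): cross,
                      ("R", "R"): direct, ("L", "R"): cross})
```

Each entry in `kernels` plays the role of one virtual pipeline: source channel, convolution kernel, destination channel.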

 

This capability enables a significant degree of processing in the native DSD domain. You haven't enabled HQP to save its output to a file (one might want to process and save tracks for later recombination, i.e. mastering/remastering), but that could be done with a file-output ALSA driver ;)

 

More interesting to me, particularly given the focus here on "soundstage" and "imaging", would be spatial sharpening deconvolutions, such that instruments could be precisely focused within an arbitrarily sized soundstage.

 

Of course this would eat up a ton of CPU cycles, but we are yet again entering new performance levels of CPU/CUDA processing.

7 hours ago, jabbr said:

 

Yes ... these techniques are decades old and taught to undergrads these days ...

 

This talk has great photos & diagrams!

https://graphics.stanford.edu/talks/compphot-publictalk-may08.pdf

 

 

 

people should go to that URL just for the pretty pics, if nothing else

 

most will need a Dummies Guide, however

 

I found a good-looking course given by the Melon Heads, but it does not appear to be online

 

also found this:

https://www.class-central.com/course/udacity-computational-photography-1023

 

 

then there is the prospect of not setting up any calculations at all -- just let an AI system eff with it until it sounds good...
