
Thoughts on mic correction, specific to "what we do" and "how we do it"


Gutbucket:
Last weekend I played around with using the white-noise-like sound of applause in the hall as source material for more closely EQ-matching the perceived response balance between the four microphones in a recording array I'm using. I've long been aware that the particular way I have these microphones mounted affects their responses, making them less flat than their native measured response in free space, and in different ways for each pair. Beyond that, I'm not using a closely matched set of four microphones, so there are slight but perceivable frequency-response variations and somewhat larger sensitivity variations between the microphones to begin with.

Listening back and adjusting things later, I typically level-balance and EQ by ear, informed by memory, hunting for what sounds most natural and pleasing. I do so by soloing each channel and adjusting for anything egregious in isolation before checking as stereo pairs, then adjusting further as necessary with them all in use together. The later stages are an iterative process, and it is clearly apparent when the adjustments approach optimization: everything snaps into place in a natural and relaxed way, and I find myself transported back to the time and place of the performance rather than noticing particular attributes of the reproduced sound which aren't quite right. This process improves not only the channel-to-channel level balance and overall timbral balance but also the quality of imaging and the general impression of realism. The entire process ends up correcting for the particular response of each microphone itself, the response effects relating to how they are mounted, and the particulars of the music, musicians, instruments, room, etc.

This time I used the same process, but instead of level balancing and adjusting EQ while listening to the music itself, I did so while listening to the applause before and after the piece being performed, adjusting for naturalness and uniformity of applause timbre. This worked rather well, allowing me to more quickly get a good basic level and EQ balance between all channels, so that when I switched to listening to the music itself the needed adjustments were already 80% there and I could more rapidly home in on what was most natural sounding and correctly balanced.
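A rough scripted sketch of just the level-balance part of this, measuring each channel's broadband RMS over the applause segment and deriving gain offsets relative to one reference channel (the actual adjustment described above is done by ear; the file name, channel count, and applause segment here are hypothetical, and Python with numpy/soundfile is just one convenient way to do it):

--- Code: ---
# Estimate per-channel gain offsets from an applause segment so every channel
# starts at the same broadband level before any EQ work.
import numpy as np
import soundfile as sf

# Hypothetical pre-trimmed applause segment, shape (samples, channels)
applause, rate = sf.read("applause_segment.wav")

# Broadband RMS of each channel over the segment
rms = np.sqrt(np.mean(applause ** 2, axis=0))

# Gain in dB needed to match every channel to channel 0 (the reference)
gain_db = 20 * np.log10(rms[0] / rms)
for ch, g in enumerate(gain_db):
    print(f"channel {ch}: apply {g:+.2f} dB")
--- End code ---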

I surmise this is due to a few attributes peculiar to recorded applause (perhaps stereotypically of classical audiences, whose applause seems to be more uniform, steady, and extended in time than the applause in other musical genres). Those attributes are: a relatively balanced, wide-spectrum source of noise; relatively even source distribution throughout the space, so that it acts as a diffuse source; and a relatively even balance between impulse and steady-state noise components. I sat for a while considering the implications of seeking out highly diffuse noise environments in which to make recordings used specifically for calibration purposes, and what such a process would involve.

Here's the basic flow chart of what I'm doing now:
Raw recorded microphone outputs > channel balance and EQ corrections as necessary > corrected individual channel source material ready to be mixed/mastered

What I'm proposing is breaking down that middle correction step into a couple of separate sequential steps, like this:

Raw microphone output > corrections for individual mic variation and their mounting > additional corrections as necessary > corrected individual channel source material ready to be mixed/mastered

Once determined, that first correction step can be reused for all recordings made through this setup until the microphones or the array in which they are mounted are changed.  This thread relates specifically to that first correction step.
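As a concrete (if simplified) sketch of that first step, assuming the fixed "mic + mounting" correction for each channel has already been captured as an impulse response and saved to disk; the file names and helper function are hypothetical, and the second step (per-recording adjustments) remains a by-ear process on the output:

--- Code: ---
# Step 1 of the proposed chain: apply the reusable per-channel correction
# filters to a raw multichannel recording. Step 2 (recording-specific level
# and EQ tweaks) would follow separately, by ear.
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

def apply_fixed_correction(raw_path, correction_ir_paths, out_path):
    raw, rate = sf.read(raw_path)                 # (samples, channels)
    corrected = np.zeros_like(raw)
    for ch, ir_path in enumerate(correction_ir_paths):
        ir, _ = sf.read(ir_path)                  # mono correction IR for this channel
        corrected[:, ch] = fftconvolve(raw[:, ch], ir)[: len(raw)]
    sf.write(out_path, corrected, rate)

apply_fixed_correction(
    "raw_4ch.wav",
    ["corr_ch1.wav", "corr_ch2.wav", "corr_ch3.wav", "corr_ch4.wav"],
    "corrected_4ch.wav",
)
--- End code ---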

Gutbucket:
I'm continuing this discussion from the Team Classical part 3 (open discussion of all things classical music) thread, where I posted an initial thought about it before deciding it would be best as a separate thread, since it applies generally and isn't specific to classical music recording. In that thread, JimmieC made the following comment, which I'll quote here to get the conversation going:


--- Quote from: JimmieC on May 17, 2017, 09:03:10 AM ---^ Made me think maybe some of us with non-matching capsules or brands of microphones could use a calibration file (EQ, gain, etc.). Through a full-spectrum speaker, what if you played a pulse (whatever desired time length) that sweeps between 20 Hz and 20 kHz? We could record this at home using multiple microphones that are set up the same. You could then pick the microphone with the best response and EQ, amplify, etc. to get the other microphone(s) to match the first microphone's response. One step would be matching the dB level, and then matching the FFT plots. Then apply this post-processing to the microphone's output after every recording. I'm pretty sure you can create such a pulse in Audacity, and I would imagine in other programs too. It has been a while since I have recorded anything, so I have not used Audacity in probably a year or two.
--- End quote ---
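(As an aside, the sweep part of that suggestion is easy to script rather than build by hand in Audacity; a minimal sketch, with the sample rate, length, and output file name as arbitrary choices:)

--- Code: ---
# Generate a 20 Hz-20 kHz logarithmic sweep to play through a speaker and
# record with each microphone, as described above.
import numpy as np
import soundfile as sf
from scipy.signal import chirp

rate = 48000
duration = 10.0                                   # seconds, arbitrary
t = np.arange(int(rate * duration)) / rate
sweep = chirp(t, f0=20, f1=20000, t1=duration, method="logarithmic")
sf.write("test_sweep_20Hz_20kHz.wav", 0.5 * sweep, rate)   # 0.5 leaves headroom
--- End code ---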

The process I'm proposing is similar to what you are suggesting. Indeed, that is basically how the TetraMic is calibrated. The TetraMic is an ambisonic microphone using four coincident capsules, which requires extremely close matching between capsules for the ambisonic matrixing to work correctly. Core Sound provides a calibration file with each microphone, containing corrective filters for each capsule determined via methods similar to what you've outlined. The raw recording needs to be made using the same gain across all four input channels. The result is recorded "A-format" 4-channel microphone output, which includes all the individual capsule variances. Afterwards, the A-format output is sent through the respective corrective filters and saved as corrected "B-format" files in which the responses of all channels are fully matched. The B-format material can then be manipulated to point virtual microphones of whatever pattern in whatever direction one chooses.
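For anyone curious what that matrixing step looks like, here is a bare-bones sketch of the standard A-format to B-format conversion for a tetrahedral array (capsule order FLU, FRD, BLD, BRU assumed; the actual TetraMic workflow also applies the per-capsule calibration filters before this point, which is omitted here, and the input file name is hypothetical):

--- Code: ---
# Convert A-format (raw tetrahedral capsules) to B-format (W, X, Y, Z)
# via the standard sum/difference matrix.
import numpy as np
import soundfile as sf

a_format, rate = sf.read("tetra_A_format.wav")    # (samples, 4)
flu, frd, bld, bru = a_format.T

w = flu + frd + bld + bru                         # omni (pressure)
x = flu + frd - bld - bru                         # front-back figure-8
y = flu - frd + bld - bru                         # left-right figure-8
z = flu - frd - bld + bru                         # up-down figure-8

sf.write("tetra_B_format.wav", np.stack([w, x, y, z], axis=1), rate)
--- End code ---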

But there are a few important differences, which I'll cover in the next post...

Gutbucket:
Instead of adjusting the measured response of each microphone individually, in isolation, with reference to a test signal reproduced through a speaker, this proposed method relies on recording a fully diffuse sound source through the entire recording rig, just as it would be used for music recording. At least one channel needs to be subjectively EQ balanced. The other channels can also be subjectively balanced with respect to the first (both in level and EQ), or they could be differentially matched to the first using an auto-matching EQ or perhaps something like Audio DiffMaker. That may be advantageous in that, if slight subjective changes are deemed necessary, they only need be made to the first channel, and the matching EQ or difference tool is then used to match the other channels to it. Otherwise, just balance and EQ them all to be as close as possible by ear.
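A rough sketch of that differential-matching idea, using long-term spectra over the applause segment rather than any particular match-EQ product (this is not Audio DiffMaker or a commercial match EQ; the file name, FFT size, and smoothing are arbitrary choices):

--- Code: ---
# Derive a per-channel correction curve by comparing each channel's long-term
# spectrum over a diffuse (applause) segment against a reference channel.
import numpy as np
import soundfile as sf
from scipy.signal import welch

applause, rate = sf.read("applause_segment.wav")  # (samples, channels)

freqs, ref_psd = welch(applause[:, 0], fs=rate, nperseg=8192)
for ch in range(1, applause.shape[1]):
    _, psd = welch(applause[:, ch], fs=rate, nperseg=8192)
    # Gain in dB that would match this channel's spectrum to channel 0
    corr_db = 10 * np.log10(ref_psd / psd)
    # Smooth heavily (simple moving average here) before building an EQ from it
    corr_db_smooth = np.convolve(corr_db, np.ones(32) / 32, mode="same")
    print(f"channel {ch}: correction spans "
          f"{corr_db_smooth.min():+.1f} to {corr_db_smooth.max():+.1f} dB")
--- End code ---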

Doing it that way eliminates a few potential problems and is beneficial in other ways. It eliminates many of the measurement hassles: the need to make a test signal; the need for a truly flat speaker source (or for calibrating a speaker using a flat measurement mic); the difficulty of ensuring that every mic is measured at exactly the same point in space relative to the speaker, without other environmental variations; and undesired room or environmental responses in the test setup.

It is beneficial in that it corrects for the mic response "as mounted" (one of my primary goals), along with any variances through the entire recording signal chain, and it corrects in a subjectively preferred way which is likely to be closer to the desired starting point for mixing.

A hassle is finding a sufficiently diffuse environment and a natural noise test signal to record for making the calibration. I suspect it might be helpful to constantly rotate the microphone array during the test recording. That would average the room response for all mics, as each would end up pointing in all directions over the course of the test recording. A central location on the floor of a large gym, warehouse, or other large public space would probably work. A large recording venue may work, using applause as a diffuse source, although it might look funny spinning the mic stand or doing pirouettes during the applause while stealthing.
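One rough way to sanity-check how diffuse a candidate space and noise source actually are before trusting them for calibration is to look at the coherence between two of the spaced channels: in a genuinely diffuse field the inter-channel coherence falls off above a frequency set by the mic spacing, whereas a single dominant source keeps it high across the band. The file name, frequency band, and interpretation here are assumptions, not an established procedure:

--- Code: ---
# Quick diffuseness check: average coherence between two spaced channels
# over a mid/high band (lower generally means more diffuse for spaced mics).
import numpy as np
import soundfile as sf
from scipy.signal import coherence

noise, rate = sf.read("candidate_diffuse_noise.wav")   # (samples, >=2 channels)
freqs, coh = coherence(noise[:, 0], noise[:, 1], fs=rate, nperseg=8192)

band = (freqs > 500) & (freqs < 8000)
print(f"mean coherence 500 Hz-8 kHz: {coh[band].mean():.2f}")
--- End code ---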

noahbickart:
following.

rocksuitcase:
following as well.
kindms and I have been using variations of these oddball techniques, given the gear we own, since summer 2015. We are not recording classical but amplified PA shows, mostly loud rock and roll, with some bluegrass/Americana. Without doing anything else, I can tell you this method has validity, both in your theory and in my practice of mixing these multi-channel efforts. I have noted the "evenness" of the between-song audience applause on several of the recordings while auditioning all 4 or 6 channels "raw" during the process of leveling between channels and setting up the stereo mixdown.

Turning the mic stand during applause at the beginning or end of a show certainly could be done, appearances be damned!     8)
