
Author Topic: Thoughts on mic correction, specific to "what we do" and "how we do it"  (Read 2767 times)


Offline Gutbucket

  • record > listen > revise technique
  • Trade Count: (13)
  • Needs to get out more...
  • *****
  • Posts: 11577
  • Gender: Male
Last weekend I played around with using the white-noise-like sound of applause in the hall as source material for more closely EQ matching the perceived response balance between four microphones in a recording array I'm using.   I've long been aware that the particular way I have these microphones mounted affects their responses, making them less flat than their native measured response in free space, and in different ways for each pair.  Beyond that, I'm not using a closely matched set of four microphones, so there are slight but perceivable frequency-response variations and somewhat larger sensitivity variations between each microphone to begin with. 

Listening back and adjusting things later, I typically level balance and EQ by ear, informed by memory, hunting for what sounds most natural and pleasing.  I do so by soloing each channel and adjusting for anything egregious in isolation before checking as stereo pairs, then adjusting further as necessary with them all in use together. The later stages are an iterative process, and it is clearly apparent when the adjustments approach optimization: everything snaps into place in a natural and relaxed way, and I find myself transported back to the time and place of the performance rather than noticing particular attributes of the reproduced sound which aren't quite right.  This process improves not only the channel-to-channel level balance and overall timbral balance but also the quality of imaging and the general impression of realism.  The entire process ends up correcting for the particular response of each microphone itself, the response effects relating to how they are mounted, as well as the particulars of the music, musicians, instruments, room, etc.

This time I used the same process, but instead of level balancing and adjusting EQ while listening to the music itself, I did so while listening to the applause prior to and after the piece being performed, adjusting for naturalness and uniformity of applause timbre.  This worked rather well, allowing me to more quickly get a good basic level and EQ balance between all channels, so that when I switched to listening to the music itself the needed adjustments were already 80% there and I could more rapidly home in on what was most natural sounding and correctly balanced.

I surmise this is due to a few attributes peculiar to recorded applause (perhaps classical applause stereotypically, in that it seems to be more uniform, steady, and extended in time than the applause in other musical genres).  Those attributes being: a relatively balanced, wide-spectrum source of noise; a relatively even source distribution throughout the space, so that it acts as a diffuse source; and a relatively even balance between impulse and steady-state noise components.   I sat for a while considering the implications of seeking out highly diffuse noise environments in which to make recordings used specifically for calibration purposes, and what such a process would involve.
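To make the idea concrete, here's a minimal sketch (Python with NumPy/SciPy; synthetic white noise stands in for recorded applause) of estimating each channel's long-term average spectrum and its broadband level relative to the first channel. The function names are my own illustrative choices, not from any specific tool.

```python
import numpy as np
from scipy.signal import welch

def channel_spectra(x, fs, nperseg=4096):
    """Welch-averaged power spectrum for each channel of x (n_samples, n_channels)."""
    spectra = []
    for ch in range(x.shape[1]):
        f, pxx = welch(x[:, ch], fs=fs, nperseg=nperseg)
        spectra.append(pxx)
    return f, np.array(spectra)

def level_offsets_db(spectra):
    """Broadband level of each channel relative to channel 0, in dB."""
    totals = spectra.sum(axis=1)
    return 10 * np.log10(totals / totals[0])

# Demo on synthetic "applause": channel 1 is recorded about 6 dB low.
rng = np.random.default_rng(0)
fs = 48000
noise = rng.standard_normal((fs * 10, 2))   # 10 s, 2 channels
noise[:, 1] *= 0.5                          # amplitude x0.5 ~= -6 dB
f, spectra = channel_spectra(noise, fs)
offsets = level_offsets_db(spectra)         # close to [0.0, -6.02]
```

Comparing the per-channel spectra (rather than just the broadband totals) is what reveals the frequency-response differences discussed above.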

Here's the basic flow chart of what I'm doing now-
Raw recorded microphone outputs > channel balance and EQ corrections as necessary > corrected individual channel source material ready to be mixed/mastered

What I'm proposing is breaking down the middle correction part (the "channel balance and EQ corrections as necessary" step) into a couple of separate sequential steps, like this:

Raw microphone output > corrections for individual mic variation and their mounting > additional corrections as necessary > corrected individual channel source material ready to be mixed/mastered

Once determined, that first correction step can be reused for all recordings made through this setup until the microphones or the array in which they are mounted are changed.  This thread relates specifically to that first correction step.
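As a concrete (hypothetical) illustration of that reusable first step, here is a Python sketch of a saved per-channel baseline — a level trim plus an EQ filter — applied identically to every recording made with the rig. The trim values and pass-through filter taps are placeholders; a real baseline would come from the calibration process this thread is about.

```python
import numpy as np
from scipy.signal import lfilter

# Placeholder baseline: one entry per channel, determined once per rig.
BASELINE = [
    {"trim_db": 0.0, "fir": np.array([1.0])},   # ch 0: reference channel
    {"trim_db": 1.5, "fir": np.array([1.0])},   # ch 1: placeholder values
]

def apply_baseline(x, baseline):
    """Apply per-channel trim and EQ to a raw (n_samples, n_channels) recording."""
    out = np.empty(x.shape, dtype=float)
    for ch, cal in enumerate(baseline):
        g = 10 ** (cal["trim_db"] / 20.0)            # dB trim -> linear gain
        out[:, ch] = g * lfilter(cal["fir"], [1.0], x[:, ch])
    return out
```

The point of factoring it this way is exactly what the flow chart shows: this function never changes between recordings, while everything downstream of it remains a per-recording decision.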
volition > vibrations > voltages > numeric values | numeric values > voltages > vibrations > virtual teleportation time-machine experience

Offline Gutbucket

I'm continuing this discussion from the Team Classical part 3 (open discussion of all things classical music) thread, where I posted an initial thought about it before deciding it would be best as a separate thread, since it applies generally and isn't specific to classical music recording.   In that thread, Jimmie C made the following comment, which I'll quote here to get the conversation going-

^ Made me think maybe some of us with non-matching capsules or brands of microphones could use a calibration file (EQ, gain, etc.).  Through a full-spectrum speaker, what if you played a pulse (whatever desired time length) that sweeps between 20 Hz and 20 kHz?  We could record this at home using multiple microphones that are set up the same.  You could then pick the microphone with the best response and EQ, amplify, etc. to get the other microphone(s) to match the first microphone's response.  One would first match the dB levels and then match the FFT plots.  Then apply this post-processing to the microphone output after every recording.  I'm pretty sure in Audacity you can create such a pulse, and I would imagine in other programs too.  It has been a while since I have recorded anything, so I have not used Audacity in probably a year or two.

The process I'm proposing is similar to what you are suggesting.  Indeed, that is basically how the Tetra-Mic is calibrated.  Tetra-Mic is an ambisonic microphone using four coincident capsules, which requires extremely close matching between capsules for the ambisonic matrixing to work correctly. Core-Sound provides a calibration file with each microphone, which contains corrective filters for each capsule determined via methods similar to what you've outlined.  The raw recording needs to be made using the same gain across all four input channels.  The result is recorded "A-format" 4-channel microphone output, which includes all individual microphone capsule variances. Afterwards the A-format output is sent through the respective corrective filters and saved as corrected "B-format" files in which the responses of all channels are fully matched.  The B-format material can then be manipulated to point virtual microphones of whatever pattern in whatever direction one chooses.
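For readers unfamiliar with that workflow, here's a rough Python sketch of the two stages: per-capsule corrective filters applied to the A-format channels, then the standard tetrahedral sum/difference matrix producing B-format (W, X, Y, Z). The filter taps below are pass-through placeholders standing in for the measured calibration data, and the capsule ordering is an assumption for illustration.

```python
import numpy as np
from scipy.signal import lfilter

def a_to_b(a_format, capsule_firs):
    """a_format: (n_samples, 4) array; assumed capsule order FLU, FRD, BLD, BRU."""
    # Stage 1: per-capsule corrective filtering (calibration file contents).
    corrected = np.column_stack(
        [lfilter(fir, [1.0], a_format[:, i]) for i, fir in enumerate(capsule_firs)]
    )
    flu, frd, bld, bru = corrected.T
    # Stage 2: standard first-order tetrahedral A->B matrix.
    w = flu + frd + bld + bru
    x = flu + frd - bld - bru
    y = flu - frd + bld - bru
    z = flu - frd - bld + bru
    return np.column_stack([w, x, y, z])

# Pass-through "calibration" for illustration only.
identity_firs = [np.array([1.0])] * 4
```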

But there are a few important differences, which I'll cover in the next post...

Offline Gutbucket

Instead of adjusting the measured response of each microphone individually, in isolation, against a test signal reproduced through a speaker, this proposed method relies on recording a fully diffuse sound source through the entire recording rig, just as it would be used for music recording.   At least one channel needs to be subjectively EQ balanced.  The other channels can also be subjectively balanced with respect to the first (in both level and EQ), or they could be differentially matched to the first using an auto-matching EQ or perhaps something like Audio DiffMaker.  That may be advantageous in that if slight subjective changes are deemed necessary, they only need be made to the first channel, and the matching EQ or difference tool is then used to match the other channels to it. Otherwise, just balance and EQ them all to be as close as possible by ear.
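A minimal sketch of how such a differential auto-match could work, assuming Python/SciPy rather than any particular product: take the ratio of the Welch-averaged spectra of the reference and target channels, clamp it to a sane range, and design a linear-phase FIR from it. The function name and the clamp limits are my own illustrative choices.

```python
import numpy as np
from scipy.signal import welch, firwin2

def matching_fir(reference, target, fs, numtaps=2047, nperseg=4096):
    """Linear-phase FIR that nudges target's long-term spectrum toward reference's."""
    f, p_ref = welch(reference, fs=fs, nperseg=nperseg)
    _, p_tgt = welch(target, fs=fs, nperseg=nperseg)
    gain = np.sqrt(p_ref / np.maximum(p_tgt, 1e-20))  # amplitude correction
    gain = np.clip(gain, 0.1, 10.0)                   # limit to +/- 20 dB
    return firwin2(numtaps, f, gain, fs=fs)
```

In practice one would smooth the gain curve (e.g. over octave fractions) before designing the filter, so the correction follows the response trend rather than the bin-to-bin measurement noise.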

Doing it that way eliminates a few potential problems and is beneficial in other ways. It eliminates many of the measurement hassles: the need to make a test signal; the need for a truly flat speaker source (or for calibrating a speaker using a flat measurement mic); the difficulty of ensuring all mics are measured at exactly the same point in space relative to the speaker without other environmental variations; and undesired room or environmental responses in the test setup.

It is beneficial in that it corrects for the mic response "as mounted" (one of my primary goals) along with any variances through the entire recording signal chain, and corrects in a subjectively preferred way which is likely to be closer to the desired starting point for mixing.

A hassle is finding a sufficiently diffuse environment and a natural noise test signal to record for making the calibration.   I suspect it might be helpful to constantly rotate the microphone array during the test recording.  That would average the room response for all mics, as each ends up pointing in all directions over the course of the test recording.   A central location on the floor of a large gym, warehouse, or other large public space would probably work.  A large recording venue may work, using applause as a diffuse source, although it might look funny spinning the mic stand or doing pirouettes during the applause while stealthing.

Offline noahbickart

  • Site Supporter
  • Trade Count: (28)
  • Taperssection All-Star
  • *
  • Posts: 1590
  • Gender: Male
  • So now I wander over grounds of light...
following.
Recording:
Capsules: Schoeps mk41v, mk4v, mk22, mk3 & mk8
Cables: 2x nbob KCY, 1 pair nbob actives, Darktrain 2 and 4 channel KCY extensions:
Preamps:    Naiant Littlebox, Naiant IPA, Naiant PFA, Sound Devices Mixpre6
Recorders: Sound Devices Mixpre6, Sony PCM m10
Home Playback: Mytek DSD 192> Adcom SLC 505> Marantz Ma500 (x2)> Eminent Tech LFT-16; Musical Fidelity xCan v2> Hifiman HE-400
Office Playback: Grace m903> AKG k701

Offline rocksuitcase

  • Trade Count: (0)
  • Needs to get out more...
  • *****
  • Posts: 4093
  • Gender: Male
    • RockSuitcase: stage photography
following as well.
kindms and I have been using the oddball techniques, with our own variations given the gear we own, since summer 2015. We are not recording classical but amplified PA shows, mostly loud RnR, with some Bluegrass/Americana. Without doing anything else, I can tell you this method has validity, both in your theory and in my practice of mixing these multi-channel efforts. I have noted the "evenness" of audience applause (the between-song type) on several of the recordings while auditioning all 4 or 6 channels "raw" during the process of leveling between channels and setting up the stereo mixdown.

Turning the mic stand during applause at the beginning or end of a show certainly could be done, appearances be damned!     8)
music IS love

When you get confused, listen to the music play!

Mics:         AKG460|CK61|CK1|CK3|CK8|Beyer M 201E
Recorders:Marantz PMD661 OADE Concert mod; Tascam DR680 MKI

Offline Gutbucket

rocksuitcase- I want to thank you and kindms for trying out some of the oddball mic technique stuff yourselves via your own variations on it, and for your honest critiques on how it's working for you.  And I extend that same thanks to others here at TS experimenting with their own similar adaptations of those approaches.  It's been a fun path for me to follow- breaking the recording and reproduction problem down and thinking about what's going on and what really matters, testing and revising the ideas which spring from that process in the real world, and sharing what I've learned here at TS.  That journey and the conversations which spring from it are enough reward in themselves for me, yet it's been really encouraging and exciting to find others beginning to apply some of those non-mainstream techniques and ideas over the last several years.  It's rewarding personally, but in the bigger picture it helps "close the knowledge loop" to get outside verification of what is working well and what isn't for others besides myself.

Offline Gutbucket

I'll make this clarification before we venture too far down this rabbit hole...

I don't discuss stealth recording much around here, but I suspect that in some ways this will apply more to stealth techniques than open taping.  Primarily due to the ways in which the microphones are mounted and the effects from that.  With open taping the mics are placed in free-space on a stand and used more or less as they've been designed to be used, if typically at considerably greater distances from the source than their primary design intent, or at least with regards to how most folks other than tapers generally use them.  By contrast, with stealth recording the microphones are usually not mounted in free-space but mounted in ways which directly affect their response - in close proximity to other objects and frequently with other things and materials in the direct sound path.

What still applies broadly is the idea of improving things within achievable limits by correcting the response of mics which are not quite matched or are not performing to specification, as well as correcting for muffling windscreens and/or responses which may be fully to spec yet are undesirable - say taming a HF response bump which one might find objectionable even though one otherwise likes the mics, or reducing an upper-bass emphasis, or whatever.  What I'm after is a base-line correction: whatever correction one would always want to make when using a particular gear configuration, regardless of any further specific changes one might want to apply to a particular recording.

Offline Gutbucket

On the rotating the mics idea-

When recording the diffuse noise signal to be used for determining the corrective filters, the recording will need to be of some minimum length.  It needn't be overly long, but long enough to effectively average the response over time.  Imagine a one-second calibration recording.  One channel may pick up a solitary nearby clap while the other(s) are only registering more distant diffuse applause.  Obviously, using that recording for matching the response of the microphones to each other and to the desired base-line response isn't going to work.  If the recording is, say, 30 seconds long, then enough claps end up being recorded by all channels that the individual peaks begin to average out.  Yet one may still end up with Ms. Superclapper on one side and Mr. Delicateclapper on the other (sound familiar?).  By rotating the array for the duration of the calibration recording, those spatially differentiated discrete sources end up being spread evenly across all channels.  In addition, the early reflections and any non-uniform ambient hall sound are also directionally averaged. So we need the recording to be long enough, and the rotation extensive enough, to effectively eliminate all directional information.
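The averaging argument is easy to check numerically. With synthetic white noise standing in for applause, the bin-to-bin scatter of a Welch spectrum estimate shrinks as the recording gets longer:

```python
import numpy as np
from scipy.signal import welch

def spectrum_roughness(seconds, fs=48000, nperseg=4096, seed=1):
    """Relative bin-to-bin scatter of a Welch spectrum of white noise."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(int(seconds * fs))
    _, pxx = welch(x, fs=fs, nperseg=nperseg)
    return np.std(pxx) / np.mean(pxx)

short = spectrum_roughness(1)    # one-second "recording": noticeably rough
long_ = spectrum_roughness(30)   # thirty seconds: a much smoother estimate
```

The same logic applies to the spatial dimension: rotation trades directional variance for time, which the longer recording then averages out.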

If doing the adjusting and balancing by ear, the required length is simply whatever loop length is comfortable to listen to as it repeats over and over while the subjective EQ and level adjustments are made.  Using a matching EQ, there is typically a minimum sample length needed for analysis, similar to noise-reduction routines which use a noise sample.  Longer samples typically make for more accurate matching, for the averaging reasons described above.  I've not tried any auto-matching EQs myself, and I'd like to hear from anyone here who has experience with them.

This idea about rotating the mics is all about achieving sufficient spatial averaging.   It's an extension of using a diffuse sound source as the test signal to begin with, effectively making the real-world sources more fully diffuse than they otherwise would be.  A truly, fully diffuse situation is rare.  Think echo chamber.  Rotation makes the applause direct-sound impulses and early reflections pseudo-diffuse, and I suspect averaged sufficiently for these purposes.

Technically, given an isolated impulse, one can truncate the initial direct sound and early reflections, leaving just the reverb tail.  That reverb tail is diffuse.  That's one way speaker builders and room tuners determine the in-room power response curve without the direct sound and early reflections.  But that requires a clean impulse and a clean recording of it, plus more computer work in an editor.  It also doesn't average all channels in the same way if the reverb portion isn't really fully diffuse.   Best, I think, in this case to average all sounds by turning around in place a bit.
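A sketch of that truncation, under the assumption that a clean impulse response has already been captured; the 80 ms split point is an illustrative choice, since where the early reflections end depends on the room:

```python
import numpy as np

def reverb_tail(ir, fs, split_ms=80.0):
    """Drop the direct sound and early reflections, keeping only the tail."""
    start = int(fs * split_ms / 1000.0)
    return ir[start:]

def tail_power_spectrum(ir, fs, split_ms=80.0):
    """Power spectrum of the (approximately diffuse) reverb tail."""
    tail = reverb_tail(ir, fs, split_ms)
    return np.abs(np.fft.rfft(tail)) ** 2
```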
« Last Edit: May 18, 2017, 04:32:12 PM by Gutbucket »

Offline MIQ

  • Trade Count: (0)
  • Taperssection Regular
  • **
  • Posts: 194
  • Gender: Male
    • Stereo Mic Tools
Interesting topic and ideas Gut

I wonder how the corrections you make to the mic responses in one room may apply to other applications of the same mics in other rooms. The room response at the recording location will still be "trapped" in the recordings, even for diffuse sounds and rotated mics. The spatial averaging will help give you a better idea of the room response at the recording location, but it will still have an effect, and one that is likely to be different in the next room you use these mics in.

The idea of using the applause reminds me a bit of the Smaartlive tuning software that allowed you to tune the room while the show was in progress, using the music being played rather than the usual pink noise or chirps. That helped correct for the venue having been tuned initially without all the bodies (the audience) in the room; a tuning done without the sound-absorbing crowd is no longer optimal once the audience arrives.

I wonder if using an FFT of the applause that has been subjectively eq'd for the first mic can be used as the "target" response for the rest of the mics. This could speed up the determination of the required corrections of the other channels. For a look at some well thought out automated EQ capabilities, take a look at the RoomCapture software from WaveCapture. http://www.wavecapture.com

Miq

Offline Gutbucket

Hi MIQ,

The idea is to exclude room effects as much as possible.  I want to separate the base-line corrections which will not change (those specific to the microphones and recording array) from the corrections which will change (those specific to the performance, PA reinforcement, the room, and the recording location in it).  Two reasons I think applause may be an especially good test signal for doing this are that applause is relatively well distributed throughout the room (occurring both close to and far from the recording position at the same time) and is a constant excitation signal (not a single impulse or sweep), as well as being broadband.  I think those factors in combination should minimize the room influence by effectively burying it.  One could compare a test recording of applause made at an outdoor event versus an indoor one to verify that.

The Smaartlive analogy is apt, partly in that it is sort of emulating how we typically go about this- correcting things while using the music itself as the test signal.  But quite unlike music (or some other test signal) reproduced through the PA, applause is randomly distributed throughout the space. 

There is something of a cool taper parallel here which doesn't escape me- to systems like Smaartlive, the Meyer SIM system which predated it, and the early pioneering work of Don Pearson in tuning the Grateful Dead sound system decades ago, which led to these types of systems.


Yes, applying the FFT "target" response from one microphone to the others is the idea when automating the matching part instead of doing it subjectively by ear.  Basically the auto-matching EQ thing I mention above.  Thanks for the link, I'll check out the RoomCapture site when I get a chance.

Offline Gutbucket

One thing about auto-matching the response of one channel to another using FFT or whatever is that when using more than two microphones, the mics may be of different types and require some tailoring of the matching response.  There still may need to be some manual modification of the curves.  For example, it wouldn't make sense to try to modify the low-frequency response of a bidirectional or supercardioid to perfectly match that of a pressure omni.
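One simple way to encode that constraint, shown here as a hypothetical helper: force the correction to unity gain outside the band where matching makes sense for the mic in question (the band edges below are illustrative, not prescriptive).

```python
import numpy as np

def band_limit_gain(freqs, gain, lo_hz=80.0, hi_hz=16000.0):
    """Force unity correction outside [lo_hz, hi_hz], leaving the band untouched."""
    return np.where((freqs >= lo_hz) & (freqs <= hi_hz), gain, 1.0)
```

For a supercardioid matched against an omni, lo_hz would be set above the region where the patterns' low-frequency responses legitimately diverge.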

I don't see the biggest advantage of FFT or auto-matching EQ (probably the same thing) as "speeding up the determination of the required corrections of the other channels", since it only needs to be done once, until the microphones or their setup changes, at which point it would need to be redone.  The bigger advantage, I suspect, will be a closer match than one can easily achieve subjectively by ear, especially if one is not especially skilled at EQ and disposed to the critical listening required to get that right.

Offline Gutbucket

Here's the basic flow chart of what I'm doing now-
Raw recorded microphone outputs > channel balance and EQ corrections as necessary > corrected individual channel source material ready to be mixed/mastered

What I'm proposing is breaking down the middle correction part (the "channel balance and EQ corrections as necessary" step) into a couple of separate sequential steps, like this:

Raw microphone output > corrections for individual mic variation and their mounting > additional corrections as necessary > corrected individual channel source material ready to be mixed/mastered

In reality, there are additional corrective steps which need to be made beyond these and which should be acknowledged... and perhaps some of them included as an additional step here.   For instance, one can't make appropriate subjective EQ decisions without a trustworthy monitoring system.  That's critical, but beyond the scope of this proposed process and thread, and its importance in relation to this process is somewhat mitigated by the fact that what matters here is minimizing the differences between channels.  Subjective sweetening intended to translate in an objective way to other playback systems is a later, separate step, and that's where a trustworthy monitoring system becomes important.   At this step we can deal in differentials rather than absolutes.

But once the raw microphone responses are corrected, there may be other corrections which also remain constant and could be applied prior to the more subjective performance-, PA-, room-, and recording-location-related corrections.  Here's where I'm coming from: I use arrays of more than two microphones.  I always use a center mic in addition to left and right mics, along with one or several often rear-facing ambience/audience mics.  I EQ and level adjust the center somewhat differently from left/right, and the ambience mics even more so.  Those adjustments are relatively universal across all recordings.  If they are universal enough, I could apply them as a follow-up step to this mic/array matching process, prior to the more subjective decisions made in the mix process.

That's covered by the additional corrections as necessary part here, where the subjective decisions which vary from recording to recording are made in the mix/mastering stage-
Raw microphone output > corrections for individual mic variation and their mounting > additional corrections as necessary > corrected individual channel source material ready to be mixed/mastered

In some ways, the question becomes a practical one of how far to break it down.  How many of these corrections can I pre-determine so they can be easily applied without having to think about them much each time, leaving me free to concentrate on the subjective things which change from show to show?

Offline Gutbucket

Although outside the scope of this thread, the corrections mentioned in my previous post which make a monitoring system more accurate are in many ways the opposite-end-of-the-signal-chain equivalent of what I'm proposing here.  Similarly, once those corrections have been determined and applied, they do not need to be re-addressed until the monitoring system is changed in some way.  And similarly, they are corrections to transducer/acoustic transfer functions.

It's the way they need to be measured which is different, because the first (the microphone corrections) are acoustics>transducer and the second (the monitor corrections) are transducer>acoustics.  This is another way of looking at the parallels MIQ draws with PA correction software, which is essentially monitor/room-correction on a large PA scale rather than a small studio scale.

Offline MIQ

Gut,

You are so right about not trying to match the entire response of one type of mic to a different type of mic that may have a very different frequency response.  One of the nice things about RoomCapture is that the software has this in mind; it's part of the workflow.  Once you define the full-bandwidth "target" response, you can define a smaller bandwidth of the target response that you would like to match a different microphone to. You can easily high-pass, low-pass, or band-pass the target response to match the natural response limits of the next mic.

Of course the software is geared toward the Transducer --> Acoustics side of this equation, but the ideas are the same.  You don't want to try to make the tweeter array you are tuning match the target response down to 40 Hz.  You simply tell the automated EQ algorithm "match this target, but only over the range that makes sense for this transducer".  I see a lot of parallels on the Acoustics --> Transducer side with highly directional, bandwidth-limited mics vs omnis that can reach down to infrasonic frequencies.

The other thing I'm sure you've noticed is that making cuts in the response, even fairly deep high-Q cuts, is MUCH less obvious and intrusive than making boosts.  Knowing this, RoomCapture lets you define different ranges of allowable Qs and dB levels for cuts vs boosts.  In a similar fashion, the number of EQ points you have at your disposal can be changed, and the accuracy of the match to the target can be traded off.  If you have a bunch of EQ to throw at the correction, you can get the EQ'd response to very closely match the target.  If the differences between the target and the raw response are big and you only have 3 EQ points, you won't necessarily be able to create a super close match.
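That asymmetry is easy to encode when clamping an automatically derived correction curve. Here's an illustrative helper; the specific limits are my own placeholders, not RoomCapture's actual defaults:

```python
import numpy as np

def clamp_correction_db(gain_db, max_boost_db=3.0, max_cut_db=12.0):
    """Allow deeper cuts than boosts in a correction curve expressed in dB."""
    return np.clip(gain_db, -max_cut_db, max_boost_db)
```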

And finally, often the automatically corrected response will not be exactly what you like, or will not be the only solution you want to audition.  RoomCapture makes it very easy to manually change the automated EQ points or add more "by hand" after the automated routine is finished.  To be realistic, the software can get you 80-90% of the way there very quickly, but you will always need to listen critically and fine-tune by ear to get it optimized.  Human perception of sound is much more complicated than just frequency response...

Sorry if this is coming off as an ad for RoomCapture; I don't work for them, but I have used the software enough to realize some of the important elements they are including to make it a very useful tool.  As we've been discussing, the issues and possible solutions on the Transducer --> Acoustics side are similar to those in this discussion on the Acoustics --> Transducer side.  A sort of acoustic reciprocity.  I wanted to share thoughts that may translate to either side of the equation, since many here may not have that perspective.

A great source of info on final tuning is Bob Katz's "Mastering Audio" book.  It extensively covers EQ and dynamic processing of whole mixes.  Lots of great info and ideas from someone who has done a few EQ tweaks...

Miq

Offline Gutbucket

Thanks for joining the discussion.  It's the basic process I'm most interested in exploring and discussing before delving into the various process details, especially since one can do this stuff without those tools, simply by saving and applying EQ filters determined carefully by ear. FFT-based frequency-matching algorithms can certainly be applied to help in doing this, and may be the best way of going about it.  I just don't want to bog the discussion down in the particulars of specific programs and how to use them at this point.

Bob Katz's book is a good one, an enjoyable and easily approachable overview of audio mastering.  My only slight frustration with it was that he doesn't go into much detail about the particulars which apply somewhat uniquely to the live recordings being made here at TS; the kind of simple or more complex corrective stuff we primarily need to do.  It's far more targeted at the wider world of mastering multi-tracked in-studio recordings. I corresponded with him a bit years ago after the first edition, asking for details on specific techniques relating to what some might refer to as purist location recordings, where we don't have the same flexibility to remix and are dealing with somewhat different acoustical problems than those of studio recordings.  Not sure if anything along those lines has been added to the later editions or not; I only have the first edition.


Online jnorman34

  • Trade Count: (7)
  • Taperssection Member
  • ***
  • Posts: 598
  • located in the lush willamette river valley
An excellent discussion.  Thanks gut, for all your efforts and thoughts on this subject.  Fun reading!
jnorman
sunridge studios
salem, oregon
Capture: Schoeps CMC64s/SKM183>Sound Devices Mixpre6
Post: Reaper 5.52 on Lenovo Yoga 910, i7-7500U

Offline MIQ

Hi Gut,

Ok, not trying to bog the discussion down with specifics of RoomCapture, but you asked early on in the discussion for experiences using automated EQ, so I thought I'd share a few relevant aspects.  But hey, this is your thread, so you lead and I'll follow.   ;D

To get back to your basic premise, I still think you will have a very hard time determining the differences in the mics' inherent responses, or differences due to mounting/array influences, if you are using recorded signals from a specific room.  I don't fully follow the reasoning that, since the applause is diffuse and originating throughout the venue, it will somehow negate the influence of being in a room.

The fact that the applause is diffuse is not only because it is distributed throughout the venue but also because it is reflecting off all the room boundaries.  How is it that the applause "test signal" becomes room-agnostic when it is being generated in a room?

Also do you worry that the applause test signal is originating mostly from places in the venue that are different from where the music source will be (stage and PA speaker locations)?  Don't you want to determine the influence the different mics show to sound originating from the same basic location as the sound sources you are truly interested in recording?  Even if the mics are "far" from the performance sound sources and are picking up more than just the direct sound, the room effect on those performance sound sources will be different than the effect the same room has on sounds emanating from locations different than the performance sound sources.  This is especially true at low frequencies. 

To me it seems like you could be pretty successful at determining the basic differences between the mics when positioned in their array by doing more controlled testing, even at home.  A calibrated mic (or even a nice quality omni made for field recording) used for comparison to the mics in the array mounting, together with gated measurements, will yield info you can apply to every recording you use this same array for.  Yes, it requires some time and care, but so does rotating mics during applause and making frequency response comparisons.  You are already using some of the same tools and techniques - why not try to control the test setup more?
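The gated-measurement idea can be sketched in a few lines (toy numbers, invented for illustration): window the measured impulse response so it ends before the first strong room reflection arrives, and the derived frequency response then reflects mostly the direct sound.  Note the trade-off - a 5 ms gate resolves nothing much below roughly 200 Hz.

```python
# Toy sketch of a gated impulse-response measurement: truncate the IR
# before the first reflection so the room is excluded from the result.
import numpy as np

fs = 48000
ir = np.zeros(fs // 10)
ir[0] = 1.0                      # direct sound (idealized impulse)
ir[int(0.008 * fs)] = 0.5        # first room reflection ~8 ms later

gate_ms = 5                      # keep only the first 5 ms
n = int(gate_ms / 1000 * fs)
gated = ir[:n] * np.hanning(2 * n)[n:]   # half-Hann fade-out window
resp = np.abs(np.fft.rfft(gated, 4096)) # direct-sound-only response
```

In a real measurement the IR would come from a swept-sine or MLS capture rather than an idealized impulse, and the gate time would be set from the actual distance to the nearest reflecting surface.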

I think your idea is interesting, but I'm not convinced your method will yield the results you are looking for.  Maybe (probably) I'm misunderstanding, but the goal is a set of corrections you can apply to the mics in the array every time it is used, regardless of the room they are in, right?  If you are using signals that are averaged over a long time in a room to determine these differences, it will be difficult to eliminate the room's effect.  Is that slight rise from 200-500Hz due to the mic mounting, or due to the room???

Miq

Offline DATBRAD

  • Trade Count: (1)
  • Needs to get out more...
  • *****
  • Posts: 2126
  • Gender: Male
What I got from the OP was the result would hold true every time, for each venue, not across a number of different ones. Either way, I think there are always too many variables to take away anything substantial from the type of tests described. Too many variables that can impact sound quality. How could barometric pressure affect the sound waves? I don't know, but I've considered before that since low pressure essentially means thinner air, differences in barometric pressure could be audible. We already know that relative humidity and temperature play a role in how transducers perform. But there could be a difference between a show during the summer months and one on a bitter winter night, from the tons of jackets, fleeces, sweaters, etc, piled up in, on, and between seats. Could that add to the absorption of standing waves, since headcount alone already impacts that directly? I probably asked more questions than I've answered in this thread, but the main thing is a bunch of tapers are now going to spend the rest of the weekend wrestling with these ideas in their minds, fully engaged. And isn't that one of the things that makes this hobby so continuously fulfilling? Thanks Gutbucket!
AKG C460B w/CK61/CK63 or Beyerdynamic M201TG>Luminous Monarch XLRs>SD MP-1(x2)>Luminous Monarch XLRs>PMD661(Oade WMOD)

Offline Gutbucket

  • record > listen > revise technique
  • Trade Count: (13)
  • Needs to get out more...
  • *****
  • Posts: 11577
  • Gender: Male
Thanks for joining in, everyone. Not trying to squelch specifics on RoomCapture, MIQ - I just want to get at the basic outline of how auto-matching EQ functions or those kinds of measurement/analysis tools may apply, rather than getting too deep into the specific details of each at this point.  And as that one in particular is listed as a $900 USD tool, it's out of the range of most users here, myself included.

The easy answer to avoid room response specifics is to simply make the applause recording at an outdoor event, eliminating most of them.  But I don't think that's necessary, and here's why - consider close-mic'ing of instruments in comparison to "taper distance mic'ing".  In the close-mic'ed source, the proximity to the source makes the direct sound almost completely dominant over the reflections and reverb tail.  The room sound is in the signal, but at such a lower level that it has little if any influence, being masked by the direct sound.  I see applause as being similar.  I'm suggesting recording from a position within the audience itself, in close proximity to the distributed sound sources - at least those immediately surrounding the recording position.  Also, the applause signal is more or less constant; it does not energize the room like a single impulse transient which then decays away, revealing the masked reverberant modal stuff.

More later on this and DATBRAD's thoughts, gotta run..
volition > vibrations > voltages > numeric values | numeric values > voltages > vibrations > virtual teleportation time-machine experience

Offline Gutbucket

  • record > listen > revise technique
  • Trade Count: (13)
  • Needs to get out more...
  • *****
  • Posts: 11577
  • Gender: Male
Also do you worry that the applause test signal is originating mostly from places in the venue that are different from where the music source will be (stage and PA speaker locations)?  Don't you want to determine the influence the different mics show to sound originating from the same basic location as the sound sources you are truly interested in recording? 

No.  The intent of this corrective step is quite the opposite - eliminating the influence of those particularities as much as possible, precisely because those particularities vary - from venue to venue, by recording position, and by a number of other variables.  This base-line correction is intended to be limited in scope to the particularities of the microphones, microphone array, and recording signal chain only.  Anything beyond that can be addressed by a separate correction step or steps.

Now if one regularly records from the same spot in the same venue with the same PA setup and house EQ, and wants to figure an additional base-line correction specific to that situation, one could figure a following corrective step limited in scope to those venue and recording location specifics.  Doing that might be worthwhile to make homing in on what's needed for correcting recurring recordings a bit quicker, since the corrections specific to that recording position in that room may always be approximately the same.  We already do this to some extent by memory when we think to ourselves "usually when using these mics in this way I need to correct for their excessive 12kHz emphasis with a peak filter", which is a microphone-specific base-line correction which may apply to all recordings we make with those microphones (and in a particular setup), or "when recording in this room, I usually need to notch the lower-mid/upper bass region around 500Hz and reduce the bottom end with a shelf filter around 100Hz", which is a secondary venue-specific correction.
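That kind of by-rote base-line correction can be captured as a small saved filter chain.  A minimal sketch using the standard RBJ ("Audio EQ Cookbook") peaking biquad - the specific frequencies, gains, and Qs here are hypothetical placeholders, not recommendations:

```python
# Sketch: a saved "base-line correction" as a chain of RBJ peaking-EQ
# biquads (Audio EQ Cookbook), applied by rote to every recording made
# with the same rig.  The filter settings below are made up.
import numpy as np
from scipy import signal

def peaking_biquad(fs, f0, gain_db, q):
    """RBJ peaking-EQ biquad coefficients, normalized so a[0] == 1."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = [1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A]
    return np.array(b) / a[0], np.array(a) / a[0]

fs = 48000
# Example base-line chain: tame a 12 kHz emphasis, dip 500 Hz slightly.
chain = [peaking_biquad(fs, 12000, -4.0, 1.5),
         peaking_biquad(fs, 500, -2.0, 2.0)]

def apply_chain(x, chain):
    """Run audio through each saved correction filter in sequence."""
    for b, a in chain:
        x = signal.lfilter(b, a, x)
    return x
```

Once determined, the chain (or its equivalent in a DAW preset) is just recalled and applied to every recording from that rig, exactly as described above.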

Quote
To me it seems like you could be pretty successful at determining the basic differences between the mics when positioned in their array by doing more controlled testing, even at home.  A calibrated mic (or even a nice quality omni made for field recording) used for comparison to the mics in the array mounting, together with gated measurements, will yield info you can apply to every recording you use this same array for.  Yes, it requires some time and care, but so does rotating mics during applause and making frequency response comparisons.  You are already using some of the same tools and techniques - why not try to control the test setup more?

Sure, go for it.  I'm somewhat simplifying this process by just adjusting for "natural sound" by ear, to keep everything simple (even if my discussion of it is not  :P) and more subjective than analytical.  After all, naturalness of sound and a good starting point for creating a convincing illusion is the true goal here, not a technically measurable flat response.  With that goal in mind, comparison against a calibrated measurement omni in free space may be valuable (either at home in the studio or at the venue with applause as the test signal).  It's a more abstract approach of inverted curve-matching though, and I suspect the extra variables of using limited-bandwidth home stereo speakers within a small reverberant room, in combination with the details of mounting the mics to avoid the room influences, will introduce a lot of potential error points.  By contrast, I found it quite quick and easy to dial in a natural applause sound by ear which applied nicely to the music.  But for those who enjoy a more technical approach, the other path may get you there too.

I guess it comes down to how much work you want to put into this, and whether you find figuring out and implementing the process enjoyable.

volition > vibrations > voltages > numeric values | numeric values > voltages > vibrations > virtual teleportation time-machine experience

Offline Gutbucket

  • record > listen > revise technique
  • Trade Count: (13)
  • Needs to get out more...
  • *****
  • Posts: 11577
  • Gender: Male
What I got from the OP was the result would hold true every time, for each venue, not across a number of different ones. Either way, I think there are always too many variables to take away anything substantial from the type of tests described. Too many variables that can impact sound quality. How could barometric pressure affect the sound waves? I don't know, but I've considered before that since low pressure essentially means thinner air, differences in barometric pressure could be audible. We already know that relative humidity and temperature play a role in how transducers perform. But there could be a difference between a show during the summer months and one on a bitter winter night, from the tons of jackets, fleeces, sweaters, etc, piled up in, on, and between seats. Could that add to the absorption of standing waves, since headcount alone already impacts that directly?

Yes, those things influence whatever further corrections we might want to make after the base-line corrections specific to just the microphones and setup have been applied.  There will always be room for further adjustment and subjective sweetening, if we have the inclination and feel it's needed.  This should help make that decision and those choices clearer and easier.

And if one doesn't enjoy the subjective application of EQ or doesn't want to have to commit the time and effort to doing that for each recording, it could serve as a way of getting better sounding recordings with minimal effort each time.  If a good base-line correction can be found which "always applies", those who don't want to be bothered with extra post work could just apply this corrective filter set by rote and get pretty close.  Put in a bit of effort up front to figure out the base-line corrections in order to avoid having to do specific EQ each time.. unless one wants to do so.
volition > vibrations > voltages > numeric values | numeric values > voltages > vibrations > virtual teleportation time-machine experience

Offline DATBRAD

  • Trade Count: (1)
  • Needs to get out more...
  • *****
  • Posts: 2126
  • Gender: Male
Sorry for drifting off topic, but EQ has been something of a touchy subject for me. It's a bias that goes back to my analog taping days when EQ was frowned upon because it wasn't considered "pure" to do anything except hook a pair of high end Nak, Denon, or HK cassette decks together and make as close to an exact copy as possible of the master tape. Dolby was another sensitive subject as some believed in decoding on the playback deck being copied from, and others (myself included) followed the logic that if the master was encoded in Dolby, all subsequent copies should be made with Dolby turned off, since the encoding still passes through to the copy. But I digress.....EQ was something I felt made a recording tailored to the playback system of the person doing it, and that may sound terrible on another person's system.
Today I only use EQ for a recording that is almost unlistenable without it due to extreme bass, or muffled highs. I'm sure many recordings I've made would benefit from some very light EQ that would translate on any playback system. There are tapers that EQ almost every recording they make with great results. I just can't seem to get the old taper in me to make that part of my standard workflow.....oh well...
« Last Edit: May 23, 2017, 07:18:08 AM by DATBRAD »
AKG C460B w/CK61/CK63 or Beyerdynamic M201TG>Luminous Monarch XLRs>SD MP-1(x2)>Luminous Monarch XLRs>PMD661(Oade WMOD)

Offline kuba e

  • Site Supporter
  • Trade Count: (1)
  • Taperssection Regular
  • *
  • Posts: 79
  • Gender: Male
Thank you for your posts. It is very interesting.

And if one doesn't enjoy the subjective application of EQ or doesn't want to have to commit the time and effort to doing that for each recording, it could serve as a way of getting better sounding recordings with minimal effort each time.
Great. This is my case.

Is this procedure useful when I record open with one pair of matched microphones?  When I record stealth, I will never have exactly the same microphone placement - should I make this correction separately for each recording?  What is the exact procedure - divide a stereo track with applause into two channels and try to EQ one channel to sound the same as the second?

Offline Gutbucket

  • record > listen > revise technique
  • Trade Count: (13)
  • Needs to get out more...
  • *****
  • Posts: 11577
  • Gender: Male
Thanks guys.  Although it could apply, this probably won't be as useful for matched microphones used in open setups.  And the setup likely would need to be pretty similar each time.  How similar? I'm not sure, but a few different recordings of applause with the same rig incorporating typical variants in the setup could be compared to see how different they are from each other, and conclusions drawn from that.

I'm out in LA getting ready to help a friend drive back across the country.  I'll be offline for about a week or so.  I'll post more thoughts and comments when I get back, including your EQ concerns Brad.  Feel free to keep up the conversation while I'm away if you like.
volition > vibrations > voltages > numeric values | numeric values > voltages > vibrations > virtual teleportation time-machine experience

Offline kuba e

  • Site Supporter
  • Trade Count: (1)
  • Taperssection Regular
  • *
  • Posts: 79
  • Gender: Male
I'm out in LA getting ready to help a friend drive back across the country.  I'll be offline for about a week or so.  I'll post more thoughts and comments when I get back, including your EQ concerns Brad.  Feel free to keep up the conversation while I'm away if you like.

I wish you a beautiful trip across the States. I did the same trip 15 years ago, it was great.

Thanks guys.  Although it could apply, this probably won't be as useful for matched microphones used in open setups.  And the setup likely would need to be pretty similar each time.  How similar? I'm not sure, but a few different recordings of applause with the same rig incorporating typical variants in the setup could be compared to see how different they are from each other, and conclusions drawn from that.

Any help in easing the EQ setting is good. I usually record open in small clubs, but if I record in larger spaces, I'll try the process.

I also suppose I should be in the middle of the audience. Perhaps we will be limited by the fact that direct applause will only come in the horizontal plane. The sound from below and from above may depend on the room acoustics, as MIQ mentioned.

Offline MIQ

  • Trade Count: (0)
  • Taperssection Regular
  • **
  • Posts: 194
  • Gender: Male
    • Stereo Mic Tools
Safe travels Gut. 

Offline Gutbucket

  • record > listen > revise technique
  • Trade Count: (13)
  • Needs to get out more...
  • *****
  • Posts: 11577
  • Gender: Male
Revisiting this thread..  and taking a step back to view the bigger picture.

Sorry for drifting off topic, but EQ has been something of a touchy subject for me. It's a bias that goes back to my analog taping days when EQ was frowned upon because it wasn't considered "pure" to do anything except hook a pair of high end Nak, Denon, or HK cassette decks together and make as close to an exact copy as possible of the master tape. Dolby was another sensitive subject as some believed in decoding on the playback deck being copied from, and others (myself included) followed the logic that if the master was encoded in Dolby, all subsequent copies should be made with Dolby turned off, since the encoding still passes through to the copy. But I digress.....EQ was something I felt made a recording tailored to the playback system of the person doing it, and that may sound terrible on another person's system.
Today I only use EQ for a recording that is almost unlistenable without it due to extreme bass, or muffled highs. I'm sure many recordings I've made would benefit from some very light EQ that would translate on any playback system. There are tapers that EQ almost every recording they make with great results. I just can't seem to get the old taper in me to make that part of my standard workflow.....oh well..

This is astute and totally reasonable. 

In my way of thinking we should at least conceptually split any corrections made into three separate stages:

The first is whatever corrections we would like to make for the microphones themselves and the way they are mounted.  That shouldn't change from recording to recording as long as the recording setup remains the same, and that's what this thread is about. Once determined, it can be applied by rote to get us to the "base-line-good" response of the recording setup without a lot of corrective effort each time.  I look at these corrections no differently than the choice of what microphone or mic setup to use, or where to set up in the venue.  These are setup questions and setup corrections.

The second stage is the mixing stage, where we correct things specific to each particular recording, making subjective decisions about what sounds best.  There are two sub-parts to this one - the first is fixing obvious glitches and problems.  It's not glamorous or fun; it's just getting to a problem-free starting point. The rest is the more creative part, and mostly about making it sound as good as possible on our own monitoring system.  Maybe we aren't doing anything here but leaving it alone, fading and tracking.  Maybe we are normalizing, maybe mixing two pairs of mics, maybe doing a SBD/AUD matrix, maybe doing some EQ, dynamics manipulation, stereo processing or whatever.  Separating the first step from this one makes this subjective/creative step faster, easier, and more enjoyable.  This is stuff specific to each recording rather than the setup used to record it.

The third stage is the mastering stage.  This is where things relate to the outside world, and where decisions need to be made concerning making the recording sound good for everybody else, not just yourself at home. It concerns how others will listen. This one is quite tricky.  It requires very truthful monitoring so that the recording will translate correctly to other systems, as well as truthful listening.  It's also in some ways destined to fail from the start - do we want full range live concert dynamics which might only be appreciated on a big playback system? Do we want less dynamics, suitable for listening in the car or otherwise on the go?  Will it sound good on a tiny Bluetooth speaker, earbuds, in the Ford Focus as well as the Lexus, on the big home theater with a subwoofer as well as the clock radio? Each of those situations ideally requires different mastering choices.  Format and distribution questions come into play here as well - FLAC, mp3, and for some like me, 2-channel stereo or multichannel audio.

It's that third mastering step I'm posting about here today, mostly because I can't find the other thread I'd started about these opposite-end-of-the-chain corrections, which is where this post really belongs.  In that thread we were talking about releasing the raw legacy file (thus preserving the raw master, warts and all) along with a corrective "difference file" which, when combined with the original, would apply all our corrections.  Various corrective files could be made, used and chosen from, acting like different mixes or remasters.  New versions could be made and applied to the original recording at any time.

Problem is that each corrective difference file ends up taking up as much space as a new mix, so in the end we don't really save any storage space. Instead we can just store the original file along with each edited version like we do now, which is better because it eliminates the need to recombine them at the listening end.  We only need the edited version to hear what we want.  So the idea, although conceptually attractive, becomes less compelling in reality.

But if instead of a full-sized difference file we could just store metadata along with the raw recording describing how we mixed it - what EQ settings, dynamics manipulations, and whatever else we applied to it - we could basically do that without a big storage hit.  The key, I think, is to only apply that to the mastering stage stuff, not the mixing stage stuff.  We aim to make it sound as good as possible in a no-compromise situation, then the end listener can apply whatever mastering option works for their particular listening situation.  Full dynamics for home, squashed for the gym/subway commute, whatever.  It would require some processing overhead at the player - EQ, compression, etc. - as instructed by the metadata.
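A sketch of what such a metadata "sidecar" might look like - the field names and values here are entirely invented for illustration; a real scheme would need a standardized vocabulary of operations that players agree to render:

```python
# Sketch: mastering variants stored as a small JSON sidecar alongside
# the untouched raw master, applied at the player rather than baked
# into full-size difference files.  All names/values are hypothetical.
import json

master_versions = {
    "home_full_dynamics": {"gain_db": 0.0, "compress_ratio": 1.0,
                           "eq": []},
    "commute_squashed":   {"gain_db": 3.0, "compress_ratio": 3.0,
                           "eq": [{"f0": 100, "gain_db": -2.0, "q": 0.7}]},
}

# The sidecar is tiny compared to a second audio file.
sidecar = json.dumps(master_versions, indent=2)

# A player would parse the sidecar and apply the chosen chain on playback:
chosen = json.loads(sidecar)["commute_squashed"]
```

The point of the sketch is the storage math: a few hundred bytes of description per mastering variant, versus a full-size difference file per variant, while the raw master lineage stays intact.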

This video made me think about this again-  https://youtu.be/KHzD-fR2XUw?t=54m5s  The link points about 54 minutes into a Triangulation 221 podcast interview with Mark Waldrep of AIX Records (he operates a label specializing in high-resolution recording), who talks a bit about the potential of this kind of flexible metadata mastering approach.  At first they are talking about re-mixing by the listener (which is solidly within the mixing stage of the three stages I've outlined), but they both acknowledge that not many listeners are interested in doing that, although it's fun to play around with the few releases where that's possible (Todd Rundgren, Trent Reznor).  The bigger potential is one release which is adapted by the player to the listening situation, environment, and system.  Then one release can be adapted to work everywhere, preserving the raw master lineage in our case.

I haven't played further with the mic-setup correction stage this thread is intended to discuss, but plan to do so in the future.
volition > vibrations > voltages > numeric values | numeric values > voltages > vibrations > virtual teleportation time-machine experience

 
