...But that he calls it a Phased Array is a hint that dialing in the appropriate spacing is important due to the complex phase interaction between the four points along the line (the four mics on the bar).
Not exactly. It's called (I seem to recall Mr. Faulkner denied creating the name) a phased array because he lines up the mic capsules to ensure the four caps are in phase. The reason for this is the wavelengths at the higher frequencies. I'm sure this crowd gets the implications.
Not exactly. It's not as simple as bringing the signals into phase. Allow me to explain what is going on phase-wise with this geometry.
The signals from the four microphones will be in phase for any plane wave arriving perpendicular to the axis of the array, regardless of the microphone spacing. In less technical terms, that means sounds arriving from directly in front (or behind, above or below), with their source a long distance away, will be in phase at all frequencies regardless of the spacing between microphones. This is the 'forward-gain' aspect of this array, and it is compounded by the number of elements arranged in a line perpendicular to the gain axis.
However, for any sounds arriving from off center, there will be a complex phase relationship between the signals of the four microphones. Pick any two microphones in the array, and the phase relationship of their signals will change based on three variables: the angle of arrival, the spacing between those two microphones, and the frequency in question. Change any of those variables and the phase relationship changes. Sounds from off axis don't get the same 'directional gain' across all frequencies.
Here are the basic implications:
>The signals will be more in-phase at the lowest frequencies, and will have increasing phase difference at higher frequencies.
>The signals will be closer to in-phase for sounds originating near the median plane, and will have increasing phase difference at wider angles of arrival.
>The signals will be closer to in-phase at closer microphone spacings, and will have increasing phase difference at larger microphone spacings.
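To put rough numbers on those three variables, here's a minimal sketch for a single pair of omnis, assuming a distant (plane-wave) source and a speed of sound of about 343 m/s; the function name is just mine for illustration:

```python
import math

C = 343.0  # approximate speed of sound in air, m/s


def phase_diff_deg(freq_hz, spacing_m, angle_deg):
    """Phase difference between two spaced omni capsules for a
    distant (plane-wave) source, in degrees.  angle_deg is measured
    from the median plane (0 = straight ahead)."""
    path_diff = spacing_m * math.sin(math.radians(angle_deg))
    return 360.0 * freq_hz * path_diff / C


# On-axis sound: in phase at any frequency or spacing.
print(phase_diff_deg(10000, 0.67, 0))   # 0.0

# Off-axis: phase difference grows with frequency...
print(phase_diff_deg(200, 0.47, 30))    # ~49 degrees
print(phase_diff_deg(2000, 0.47, 30))   # ~493 degrees
# ...and shrinks with a closer spacing.
print(phase_diff_deg(200, 0.10, 30))    # ~10 degrees
```

Each of the three bullet points above falls straight out of that one expression: lower the frequency, narrow the angle, or close the spacing and the phase difference drops.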
That's the case for any two microphones spaced apart from each other.
When there are four microphones instead of just two, those complex phase relationships are multiplied by six. That's because there are six pair relationships among four microphones, rather than the single pair relationship between two. So the phase relationships get incredibly complex away from the median plane.
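For anyone who wants to see where the six comes from, it's just the number of ways to choose two capsules out of four (the mic labels here are arbitrary placeholders):

```python
from itertools import combinations

# Label the four capsules along the bar; the names are arbitrary.
mics = ["M1", "M2", "M3", "M4"]

# Every unordered pair of capsules has its own phase relationship.
pairs = list(combinations(mics, 2))
print(len(pairs))  # 6
for a, b in pairs:
    print(a, b)
```

Each of those six pairs has its own spacing, so each contributes its own set of frequency-dependent reinforcements and cancellations off axis.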
That said, what I've read on this indicates that he gets the exact spacings by listening, and that the 67cm / 47cm spacings are a good place to start. He doesn't encourage anyone to take those spacings as inscribed in stone.
^^
This is the practical take away, and what I was primarily attempting to convey in my previous post. He's listening while adjusting the spacing, and that's the only practical way to optimally arrange things.
It's very informative to listen while actively changing the spacing of a single pair of omnis. I encourage everyone to try this themselves, listening to the performance with headphones while varying the spacing of the two omnis (most here won't have an assistant who can slowly vary the spacing while we listen on speakers in some isolated room). I think many recordists think of microphone spacing as just affecting left-right imaging and other SRA aspects, but having done this myself a number of times, I find the tonal and 'textural' aspects are often more significant. Anyone here who tries it is likely to hear this immediately, far more clearly and obviously than the SRA changes, especially when listening while making the spacing change. Comparing short segments recorded with a few different spacings is also helpful, and often the more practical way to do this, but the feedback loop and mental association with this relationship is far less direct.
One can hear these complex phase relationships shift up and down in frequency. There will be frequency-specific reinforcement where the phase angle difference is near zero or a multiple of 360 degrees, and attenuation where it is near 180 degrees or an odd multiple of 180 degrees. At the frequencies where the first cancellation and reinforcement happen, the attenuation and increase are quite audible. At higher phase angle differences, where the phase rotates within ever-narrower frequency bands, it isn't heard so much as a level difference, but the 'texture' and 'diffusivity' change, if you'll allow me those subjective descriptors.
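As an illustration of where that first cancellation lands, here's a sketch for a single pair, again assuming plane-wave arrival (the formula and function name are my own illustration, not anyone's prescribed method):

```python
import math

C = 343.0  # approximate speed of sound in air, m/s


def first_null_hz(spacing_m, angle_deg):
    """Frequency at which a pair of spaced omnis first reaches a
    180-degree phase difference (the first comb-filter notch) for a
    plane wave arriving angle_deg off the median plane."""
    path_diff = spacing_m * math.sin(math.radians(angle_deg))
    return C / (2.0 * path_diff)


# A 47 cm pair with a source 30 degrees off axis:
print(round(first_null_hz(0.47, 30)))  # ~730 Hz
# Further notches sit at odd multiples (3x, 5x, ...) of this
# frequency, with reinforcement peaks at the even multiples.
```

Widen the pair or move further off axis and the whole notch pattern slides down in frequency, which is exactly what you hear sliding around when the spacing is varied while listening.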
This is likely to explain what you are hearing and EQ-adjusting for in the CM3 portion, 2manyocks. If the spacing were being actively changed, you'd likely hear the particular aspect you are compensating for with the EQ notch shift upwards and downwards in frequency along with the change of spacing. When one is listening while setting up, a big part of 'tuning' the spacing is 'tuning' those phase/frequency relationships.