
Author Topic: minimize effect of shifted stereo image  (Read 7046 times)


Offline kuba e

  • Trade Count: (1)
  • Taperssection Member
  • ***
  • Posts: 489
  • Gender: Male
minimize effect of shifted stereo image
« on: June 24, 2016, 06:29:38 AM »
Greetings all, thank you for your posts to the forum, they are very helpful.
I would like to ask about the stereo image. If I place the mic bar out of parallel with the PA, can I correct it by shifting one channel of the stereo? Is the idea in the picture right? It's probably only possible with omnis.

The second question: if I must set up the stand off center, how can I minimize the effect of a shifted stereo image? Do you have any advice on selecting mics (omnis, cards), configuration (wide or narrow stereo angle), or rotating the mic bar?

Kuba
« Last Edit: July 12, 2016, 05:42:40 PM by Kuba E »

Offline Gutbucket

  • record > listen > revise technique
  • Trade Count: (15)
  • Needs to get out more...
  • *****
  • Posts: 15683
  • Gender: Male
  • "Better to love music than respect it" ~Stravinsky
Re: minimize effect of shifted stereo image
« Reply #1 on: June 24, 2016, 12:21:48 PM »
I'll answer your second question first-

Once you've set your microphone rig up, fine-tune its directional orientation by listening alone. Try to forget entirely what your eyes tell you and rotate the entire mic array so that its center-line points at the apparent acoustic center of the primary sources of sound you are focusing on.  That will often lead to a quite different orientation than you'd arrive at by pointing the array by eye.  If set up off to one side of center recording a typical PA-amplified concert, that will usually translate to rotating the mic array to face more towards the closer PA stack and away from the visual center, even though you might think you'd want to point the microphones farther towards the other side to compensate.  Referring to your last drawing above, you'd end up rotating the array towards the left so that it points more towards the left PA than at a point directly between the PAs or at the right PA on the opposite side.

Now prior to that, when initially setting up you can make other compensations and determine your microphone array configuration based on what you are able to assess visually.  That is, consider things like your recording position in the room, the location of the sound sources to be recorded and the distance to them, how wide the arc is in which those sound sources are distributed, the nature of the sources you intend to focus on, and the nature and distribution of the sounds you'd prefer to suppress.  Based on that, figure out how you'll set up your microphone array to best optimize for that particular situation... or forget all that and just use your standard go-to recording setup such as ORTF, DIN or whatever like most tapers would do (maybe just switching between cardioids and supercardioids or whatever), but at least point the array towards the apparent acoustic center by ear instead of by what your eyes are telling you.


In another post I'll get into what's going on with the underlying relationships and how you can manipulate them to your advantage, via modifying the microphone setup and adjusting things afterwards using your DAW. It's mostly about trading intensity offset (signal level) against time offset (time of arrival), and is an extension of the Stereo Zoom concept of Michael Williams.  If you are unfamiliar with the Stereo Zoom concept, which optimizes the microphone setup for 2-channel stereo recording, then I recommend searching out the various threads here discussing it.  It is the next step in expanding upon simple standardized near-spaced microphone setups like ORTF, DIN or whatever, to adapt them to the particulars of the recording situation. Here's a link to Williams' Stereo Zoom paper hosted on the Redding Audio website- http://www.reddingaudio.com/downloads/Rycote%20Technical/The%20Stereophonic%20Zoom.pdf

The basic Stereo Zoom idea is a very powerful one for understanding what is going on with the arrangement of a stereo pair of microphones, and it takes some wrapping your head around to really "get it".  Even if you don't apply it directly and just use standard setups, it will help you develop a better understanding of the underlying relationships in play.  That said, the Stereo Zoom only deals with symmetrical arrangements between a centered recording position and the sound sources.  His more advanced papers extend the concept to multi-channel microphone arrays used for recording more than 2 channels. Most of those require ways of unlinking that symmetry so as to offset the center axis of primary sound pickup from the axis of each microphone pair, as a way of seamlessly linking the pickup from multiple pairs of microphones into one surround array without excessive overlaps or gaps between them.  The gif attached below is extracted from one of his papers explaining that, and it hints at how this works for each pair when using directional mics such as cardioids.  All of Williams' AES papers are available for download free from his website- http://www.mmad.info/ His original Stereo Zoom paper (typewritten in 1984) is included there, but not the cleaner, more approachable layman's version of the Stereo Zoom I've linked above.

More later.

Hope this helps more than obfuscates.  It's more than most here care to get into.  If nothing else, just forget all this technical stuff and simply rotate your entire mic stand to point at the acoustic center by ear.

« Last Edit: June 24, 2016, 12:55:25 PM by Gutbucket »
musical volition > vibrations > voltages > numeric values > voltages > vibrations> virtual teleportation time-machine experience
Better recording made easy - >>Improved PAS table<< | Made excellent- >>click here to download the Oddball Microphone Technique illustrated PDF booklet<< (note: This is a 1st draft, now several years old and in need of revision!  Stay tuned)

Offline Gutbucket

  • record > listen > revise technique
  • Trade Count: (15)
  • Needs to get out more...
  • *****
  • Posts: 15683
  • Gender: Male
  • "Better to love music than respect it" ~Stravinsky
Re: minimize effect of shifted stereo image
« Reply #2 on: June 24, 2016, 12:51:53 PM »
^ I think that was probably too much info, too fast.  Here are the quick answers which are probably more useful to you-

Quote
I would like to ask about the stereo image. If I place the mic bar out of parallel with the PA, can I correct it by shifting one channel of the stereo?

After the recording has been made, the primary parameters which can be adjusted to shift the stereo image are the timing relationship between the two channels and the signal level relationship between the two channels, both of which you have control over in your DAW.
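
If you want to experiment with those two adjustments outside your DAW, here is a minimal sketch assuming Python with the numpy and soundfile packages; the file name and the amounts are placeholders, not recommendations:

Code:
import numpy as np
import soundfile as sf

# Load a 2-channel recording; data has shape (frames, 2).
data, rate = sf.read("recording.wav")
left, right = data[:, 0], data[:, 1]

gain_db = 2.0    # raise the quieter channel by 2 dB (example value only)
delay_ms = 0.4   # delay the other channel by 0.4 ms (example value only)
delay_samples = int(round(delay_ms * rate / 1000.0))

right = right * 10 ** (gain_db / 20.0)                                # level relationship
left = np.concatenate([np.zeros(delay_samples), left])[:len(right)]   # timing relationship

sf.write("recording_balanced.wav", np.column_stack([left, right]), rate)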

Quote
Do you have any advice on selecting mics (omnis, cards), configuration (wide or narrow stereo angle), or rotating the mic bar?

Rotate the mic bar (as I mentioned above).  Choose the microphone pickup pattern and stereo configuration based on other factors such as your distance to the source, the nature of the room, the audience, etc.
musical volition > vibrations > voltages > numeric values > voltages > vibrations> virtual teleportation time-machine experience
Better recording made easy - >>Improved PAS table<< | Made excellent- >>click here to download the Oddball Microphone Technique illustrated PDF booklet<< (note: This is a 1st draft, now several years old and in need of revision!  Stay tuned)

Offline kuba e

  • Trade Count: (1)
  • Taperssection Member
  • ***
  • Posts: 489
  • Gender: Male
Re: minimize effect of shifted stereo image
« Reply #3 on: June 24, 2016, 06:02:46 PM »
Hi Gutbucket,
thank you very much for your explanatory answers; you made it clear to me. I need to train my hearing to capture a good stereo image. I have read Michael's Stereo Zoom; it's a great article. Thank you for recommending the additional articles on multi-microphone pickup too; they give me a basis for better understanding this topic. Can you please recommend which paper on multi-microphone theory is a good one to start with?

Quote
Rotate the mic bar (as I mentioned above).  Choose the microphone pickup pattern and stereo configuration based on other factors such as your distance to the source, the nature of the room, the audience, etc.

I understand the answer. Just a theoretical question: if I focus only on correcting in post-processing for a mic array's center line that is off the acoustic center (neglecting acoustics, distance, SRA, etc.), should I prefer omnis over cards? Your picture (microphone position intensity offset and time offset.gif) seems to explain what I'm interested in. I will read Michael's papers.

Quote
After the recording has been made, the primary parameters which can be adjusted to shift the stereo image are the timing relationship between the two channels and the signal level relationship between the two channels, both of which you have control over in your DAW.

I would like to ask whether there is a risk of degrading the recording, because I don't have trained ears and I don't know what I should listen for.
If the volume is the same between the two channels, I can probably make only small adjustments.
I read this thread http://taperssection.com/index.php?topic=168478.msg2095996#msg2095996, which talks about recordings sounding out of phase. When correcting the stereo image by shifting channels, can I cause the recording to sound out of phase?
« Last Edit: June 25, 2016, 07:44:35 AM by Kuba E »

Offline Gutbucket

  • record > listen > revise technique
  • Trade Count: (15)
  • Needs to get out more...
  • *****
  • Posts: 15683
  • Gender: Male
  • "Better to love music than respect it" ~Stravinsky
Re: minimize effect of shifted stereo image
« Reply #4 on: June 27, 2016, 05:55:48 PM »
Quote
.. if I focus only on correcting in post-processing for a mic array's center line that is off the acoustic center (neglecting acoustics, distance, SRA, etc.), should I prefer omnis over cards? Your picture (microphone position intensity offset and time offset.gif) seems to explain what I'm interested in.

That sort of puts the cart before the horse.  It sounds as if you are planning on having an unbalanced stereo recording to begin with, and choosing which kind of microphone to use based on how you might correct for that afterwards.  There are numerous good reasons for choosing omnis over cards or vice-versa depending on the recording situation, but ease in correcting for out-of-balance imaging afterwards is not usually one of those considerations, and if it were it would still probably not be near the top of the list.  It won't matter nearly as much which direction you point a pair of omnis (with some it matters a little bit, with others not at all), so accidentally angling them improperly is usually going to be less of a concern and will have less audible consequence on the stereo imaging of the resulting recording than pointing a pair of directional microphones improperly.  Either way, you can use the same post-recording manipulation techniques to adjust the stereo balance of recordings made using either type of microphone.

Recording using different types of microphone directivities and array configurations will provide different types of stereo cues, prior to any post-recording manipulations.  The image I linked above shows a way of trading level differences against time differences by varying the setup configuration of a near-spaced pair of directional microphones, such as cardioids or supercardioids.   Doing that is possible because a near-spaced pair of directional microphones in a typical stereo configuration produces both time and level differences between the two channels.  But using that to one’s advantage in practice requires knowing what that balance is beforehand, and what effect those changes of the microphone configuration will have.  If you knew those things before recording you could choose a configuration which was already balanced and would need no further image balance manipulation afterwards.
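
To make that trade concrete, here is a minimal sketch of the level and time differences a near-spaced cardioid pair produces for one source direction, assuming Python/numpy and an idealized plane-wave (far-field) model; the spacing, mic angle, and source direction are arbitrary example values:

Code:
import numpy as np

c = 343.0           # speed of sound, m/s
spacing = 0.17      # capsule spacing in meters (example: ~17 cm)
half_angle = 55.0   # each mic aimed this many degrees off the array center-line (example)
source = 30.0       # source direction, degrees off the center-line (positive = right)

def cardioid(deg_off_axis):
    # Ideal cardioid sensitivity at a given angle off the mic's own axis.
    return 0.5 + 0.5 * np.cos(np.radians(deg_off_axis))

level_right = cardioid(source - half_angle)   # right mic is aimed at +half_angle
level_left = cardioid(source + half_angle)    # left mic is aimed at -half_angle
level_diff_db = 20 * np.log10(level_right / level_left)

# Far-field (plane wave) arrival-time difference from the capsule spacing.
time_diff_ms = 1000.0 * spacing * np.sin(np.radians(source)) / c   # positive = right leads

print(f"level difference: {level_diff_db:+.1f} dB, time difference: {time_diff_ms:+.2f} ms")

Roughly speaking, widening the spacing mainly increases the time difference for a given source direction, while increasing the angle between the directional capsules mainly increases the level difference.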

Unlike a near-spaced pair of cardioids or supercardioids, a pair of spaced omnis used at a significant distance from the source cannot be set up in such a way as to vary the level relationship between the two channels.  And a pair of cardioids (or supercards, or some other directional microphone pattern) set up in a coincident configuration without any spacing between microphones cannot be arranged so as to vary the timing relationship between channels.

Let’s back up a bit..
musical volition > vibrations > voltages > numeric values > voltages > vibrations> virtual teleportation time-machine experience
Better recording made easy - >>Improved PAS table<< | Made excellent- >>click here to download the Oddball Microphone Technique illustrated PDF booklet<< (note: This is a 1st draft, now several years old and in need of revision!  Stay tuned)

Offline Gutbucket

  • record > listen > revise technique
  • Trade Count: (15)
  • Needs to get out more...
  • *****
  • Posts: 15683
  • Gender: Male
  • "Better to love music than respect it" ~Stravinsky
Re: minimize effect of shifted stereo image
« Reply #5 on: June 27, 2016, 05:56:02 PM »
The three primary "stereo" differences found between two stereo channels are level-differences, time-differences, and polarity-differences.  You need either level differences or time differences or both to have stereo.

For any given stereo recording, there may be no significant level differences between channels, or there may be simple level differences that are equal across all frequencies, or more complicated level differences which vary according to how far off-center the sound source was located relative to the center-line of the microphone array, and possibly more complex level differences which vary by frequency. 

Likewise there may be no time-differences between channels, or simple time-differences which consist of all events in one channel occurring ahead of the other channel, or there may be more complicated time differences which vary according to how far off-center the sound source was located relative to the center-line of the microphone array.

In addition to level and time differences (or the lack of them), there may be polarity differences between the two waveforms.  There could be a simple inverse polarity relationship where all the peaks in one channel correspond to troughs in the other (which is most probably a mistake, but easily fixed by inverting the polarity of one channel), or there may be a more complex polarity relationship where some peaks and troughs have the same polarity and others have inverted polarity.

Now, when making a stereo recording using a pair of microphones, choice of microphone pickup pattern in combination with the microphone setup configuration is going to determine what kind of stereo differences will be present in the resulting recording.  As mentioned there may be level differences, and/or time differences, and/or polarity differences.

With complex waveforms it can be difficult to visually isolate and identify a singular sound event between both channels.  There are numerous waves superimposed upon each other which in their complex combination tend to obscure the individual waves. However, when looking at a visual representation of the stereo sound waves on the computer screen, zoomed in far enough to compare the individual peaks and troughs of the waveforms of both channels, a level difference is seen as a height difference between wave crests and troughs for the same event.  A time-difference is seen as a left-right alignment difference in the position of the wave crests and troughs (and the zero-crossing points between them) along the time-line. A polarity difference is seen as the waveform of one channel "going up" and forming a positive going voltage peak as the waveform in the other channel "goes down" forming a negative going trough in a sort of mirror-image fashion.
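
If you'd rather not eyeball zero-crossings, a cross-correlation gives a rough estimate of the dominant time offset between the channels. A minimal sketch, assuming Python with numpy, scipy and soundfile, and a placeholder file name:

Code:
import numpy as np
import soundfile as sf
from scipy.signal import correlate

# Use a short excerpt (a few seconds) containing a clear, supposedly centered source.
data, rate = sf.read("excerpt.wav")
left, right = data[:, 0], data[:, 1]

corr = correlate(left, right, mode="full", method="fft")
lag = np.argmax(corr) - (len(right) - 1)   # positive = left channel arrives later
print(f"dominant inter-channel offset: {lag} samples ({1000.0 * lag / rate:+.2f} ms)")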

In all recordings, peaks and troughs featuring identical levels, identical zero-crossing times, and identical polarities or zero crossing directions correspond to sounds which will be heard as being in the center of the stereo image. 

A recording made with a coincident microphone configuration such as X/Y or Mid/Side will have a very simple timing relationship between both channels (assuming the microphones are arranged correctly so as to be closely coincident) which remains the same across all frequencies - that is, there should be no timing differences between the two channels at all.  All waveform peaks and troughs in both channels should cross the zero-voltage line simultaneously.  Only the level of the peaks and troughs should vary, and in some cases the polarity or vector direction of the wave will vary (that is, whether it's going up and forming a peak or going down and forming a trough).  Sounds which will be heard as coming from either side of center correspond to waveforms which peak higher in one channel than the other.

The waveforms of a recording made with spaced omnidirectional microphones placed a significant distance from the source are going to look quite different under close examination.  The level between the two channels is going to be very similar, meaning the peaks and troughs corresponding to each individual event are going to be very close to each other in height, but the zero-crossings will not all line up at the same positions along the time line.  Sounds imaging off to one side or the other of center will be shifted time-wise between the two channels and their zero crossings will not line up closely.  In addition, the phase relationship between the two channels is going to be complex.  Very low frequencies will have a simple polarity relationship between channels similar to that of a recording made with a coincident microphone array, but high frequencies will have a quite random phase relationship, causing some waveform events to be in or out of polarity with each other and shifted in time with respect to each other, and increasingly so for sound sources which are farther from center.

Near-spaced microphone configurations will produce relationships between the two waveforms which are a combination of those opposite extremes.
« Last Edit: June 28, 2016, 09:48:28 AM by Gutbucket »
musical volition > vibrations > voltages > numeric values > voltages > vibrations> virtual teleportation time-machine experience
Better recording made easy - >>Improved PAS table<< | Made excellent- >>click here to download the Oddball Microphone Technique illustrated PDF booklet<< (note: This is a 1st draft, now several years old and in need of revision!  Stay tuned)

Offline Gutbucket

  • record > listen > revise technique
  • Trade Count: (15)
  • Needs to get out more...
  • *****
  • Posts: 15683
  • Gender: Male
  • "Better to love music than respect it" ~Stravinsky
Re: minimize effect of shifted stereo image
« Reply #6 on: June 27, 2016, 06:19:59 PM »
When investigating a stereo imaging imbalance on the computer afterwards, try listening to each channel in isolation.  Do the two channels sound about the same in terms of frequency balance, excepting of course that some sources, and the frequency ranges they dominate, are supposed to image further right or left?  If not, try EQ'ing each side separately in such a way that they sound similar and equally "good and correct" in terms of a natural frequency balance.  Then try simply adjusting the balance level between left and right and see if that gets you close enough.  Don't worry too much if the levels are a bit different between channels as long as the balance sounds good for all the elements of the recording.
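
If you want some numbers to go along with that listening comparison, a rough per-channel, per-band level check is easy to script. A sketch assuming Python with numpy and soundfile; the band edges and file name are arbitrary examples:

Code:
import numpy as np
import soundfile as sf

data, rate = sf.read("recording.wav")

def band_level_db(x, lo, hi):
    # Relative level (dB) of the spectrum between lo and hi Hz.
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / rate)
    band = spectrum[(freqs >= lo) & (freqs < hi)]
    return 20 * np.log10(np.sqrt(np.mean(np.abs(band) ** 2)) + 1e-12)

for name, ch in (("left ", data[:, 0]), ("right", data[:, 1])):
    bands = [band_level_db(ch, lo, hi) for lo, hi in ((40, 250), (250, 2000), (2000, 10000))]
    print(name, "low/mid/high:", " ".join(f"{b:6.1f}" for b in bands), "dB (relative)")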

If you need to push the average level considerably higher on one side to get things sounding balanced left/right, then try shifting the predominant channel back in time slightly, probably just a few fractions of a millisecond (unless the timing between channels was somehow off, or the axis of your spaced omnis was rotated well off center, in which case a millisecond or two may be appropriate).  You can do that by nudging one waveform forward or backwards with respect to the other along the time-line of the DAW.
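
A whole-sample nudge is usually all you need (one sample is only about 0.02 ms at 48 kHz), but if you ever want a shift that doesn't land on a whole number of samples, a linearly interpolated delay is one crude way to approximate it. A sketch in Python/numpy; a proper resampler or a DAW's sample-accurate nudge is cleaner:

Code:
import numpy as np

def fractional_delay(x, delay_samples):
    # Delay x by a non-integer number of samples using linear interpolation.
    n = np.arange(len(x))
    return np.interp(n - delay_samples, n, x, left=0.0, right=0.0)

rate = 48000
delay = 0.35e-3 * rate                         # 0.35 ms expressed in samples (about 16.8)
shifted = fractional_delay(np.random.randn(rate), delay)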

If those things don't do the trick there are some plugins which shift image balance in some other ways, some of which allow shifting of the midrange separately from the bass and treble ranges, and other stuff like that.  Many plugin providers offer those kinds of plugins for free.
« Last Edit: June 27, 2016, 06:26:39 PM by Gutbucket »
musical volition > vibrations > voltages > numeric values > voltages > vibrations> virtual teleportation time-machine experience
Better recording made easy - >>Improved PAS table<< | Made excellent- >>click here to download the Oddball Microphone Technique illustrated PDF booklet<< (note: This is a 1st draft, now several years old and in need of revision!  Stay tuned)

Offline kuba e

  • Trade Count: (1)
  • Taperssection Member
  • ***
  • Posts: 489
  • Gender: Male
Re: minimize effect of shifted stereo image
« Reply #7 on: June 28, 2016, 05:33:35 AM »
Hi Gutbucket,
thank you very much for the valuable information. I appreciate it very much. It is nice when someone can clearly explain the issues in more depth. I think it will be helpful for a lot of beginners like me.
English is not my native language. I will print your answers, read them in detail after work, and then reply.
« Last Edit: June 28, 2016, 06:00:12 AM by Kuba E »

Offline kuba e

  • Trade Count: (1)
  • Taperssection Member
  • ***
  • Posts: 489
  • Gender: Male
Re: minimize effect of shifted stereo image
« Reply #8 on: June 30, 2016, 07:15:22 PM »
Thank you for the explanation of the theoretical basis. I have read the Stereo Zoom, and your answers are a big help toward a better understanding.
I would like to ask about the theory. I know these are a lot of questions; I'm sorry if some are useless or have already been answered.

Level-difference:
What is the minimum level difference that causes the illusion that the same stereo signal (with no time difference) sounds as if it comes from only one speaker? Is it an absolute value, or is it determined by the sound's wavelength (frequency)? I tried it with headphones; it seems to be between 12 and 16 dB.

Time-difference:
What is the minimum time difference that causes the illusion that the same stereo signal (with no level difference) sounds as if it comes from only one speaker? Is it an absolute value, or is it determined by the sound's wavelength (frequency)?

Polarity-difference:
I listened to a stereo recording with two identical tracks through speakers. The result was a sound midway between the speakers - mono. Then I listened to the same stereo tracks but with the polarity of one flipped - the result was two sources of sound, one at each speaker - a special kind of stereo.
Is this right: the wider the spread of the microphones, the more polarity difference there is in the recording, because it extends down to ever lower frequencies?
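
One way to test these thresholds is to generate small test files and compare them on headphones or speakers. A sketch, assuming Python with numpy and soundfile; the tone frequency and the offsets are arbitrary examples:

Code:
import numpy as np
import soundfile as sf

rate, seconds, freq = 48000, 3, 400
t = np.arange(int(rate * seconds)) / rate
tone = 0.2 * np.sin(2 * np.pi * freq * t)

def write_pair(name, left, right):
    sf.write(name, np.column_stack([left, right]), rate)

write_pair("level_12dB.wav", tone, tone * 10 ** (-12 / 20))   # level difference only
shift = int(round(0.0005 * rate))                             # 0.5 ms time difference only
write_pair("time_0p5ms.wav", tone, np.concatenate([np.zeros(shift), tone])[:len(tone)])
write_pair("polarity_flip.wav", tone, -tone)                  # polarity difference only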

-----------------------------
Thank you for the practical guidance. It works great!

This is for someone who is a beginner like me:
I tried these instructions on a recording made with omni mics (15 inch spread, angled 90° because of the mic spread - I have a short bar; 50 feet from the stage, small club). The mics were positioned off the PA axis and angled away from the acoustic axis too.  Procedure: I listened to the channels separately; one channel had more bass, but I did not EQ. I delayed the dominant channel by about 0.4 ms and amplified the second channel by 2 dB.
I am posting samples. It's not perfect - someone with more sensitive hearing would do better - but maybe it will be useful for someone. It also helped me a lot that I used software (Reaper) which allows toggling these adjustments on and off during playback, so it is possible to hear the change immediately.
sample original: https://drive.google.com/file/d/0B0lpmTYweTrEVGZRSXk1aW1qNHc/view?usp=sharing
sample edited: https://drive.google.com/file/d/0B0lpmTYweTrEeE1haDZ5Y1BZRFE/view?usp=sharing

Offline DSatz

  • Site Supporter
  • Trade Count: (35)
  • Needs to get out more...
  • *
  • Posts: 3347
  • Gender: Male
Re: minimize effect of shifted stereo image
« Reply #9 on: July 01, 2016, 12:03:17 PM »
Kuba, congratulations on what you're doing. I hope it continues to be interesting and productive for you.

-- Your questions about equivalences between time differences (delay) and level differences can be answered clearly only for abstract, simplified cases such as a single pure test tone at some frequency such as 400 Hz. With actual program material, the correspondence is much harder to estimate because it depends so much on the content of the material. For example, a close-miked recording of someone playing cymbals (with very strong upper-midrange content and lots of transient energy) would be affected more obviously by timing and level shifts than a semi-distant recording of a men's chorus singing a series of sustained "oooh" and "aaah" sounds in a reverberant space. So, while there is at least one "rule of thumb"-type formula that lets you convert time shift into a roughly equivalent level shift and vice versa, it is only approximate.
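
For illustration only, since the formula isn't given here and published trading values vary widely with frequency and material, such a rough linear conversion might be sketched like this, with an assumed ballpark figure of roughly 10 dB of level shift per millisecond of time shift:

Code:
# The 10 dB-per-millisecond figure below is only an assumed ballpark for illustration,
# not a value given in this thread or taken from any particular study.
TRADING_DB_PER_MS = 10.0

def time_to_equivalent_level_db(time_diff_ms):
    return TRADING_DB_PER_MS * time_diff_ms

print(time_to_equivalent_level_db(0.4))   # a 0.4 ms offset ~ 4 dB, very roughly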

-- Your observations about polarity inversion are spot on. Actually whenever any stereo playback system is set up, those tests should be done. You'd be amazed at the number of people, including some audiophiles, who have one speaker wired in opposite polarity to the other one and go for years that way, even though it makes such a big difference in the effect of many recordings.

Note that with many spaced-omni recordings, however, a listener can't tell when one speaker is miswired relative to the other one. Flipping the wiring alters the stereo effect, yes--but usually, neither version sounds clearly right or wrong; they're often just two more or less equally acceptable, alternative renderings of the recording. That's really something to think about--what does it tell us about spaced omnis as a recording method in general?

-- One thing is, though (if I can sneak one more concept in at this point): As you spread two microphones farther and farther apart, what happens is more subtle and complex than polarity change. If you imagine a single-point sound source that's in front of two microphones which are equally distant from that sound source, then the sound from that source will reach both microphones simultaneously regardless of how far apart they are. So there's no conflict and no cancellation (for the direct-arriving sound, anyway). However, if you spread the mikes apart, and move the sound source to where it's no longer on the center line between them, then the direct sound from that source will reach one microphone before the other by some number of milliseconds. As you already know, the earlier time arrival in one channel of the recording gives the listener a cue to the position of the sound source relative to the microphones.

But it isn't exactly polarity that we're talking about here; the signal received in the farther microphone isn't necessarily the inverse of the signal received in the closer one. In fact, the odds are very strong that it won't be. Instead, it will be some degree of difference in a continuous parameter known as "phase." Polarity is really a special case of phase.

This is based on the understanding that any sound in the real world could, with enough time and effort, be analyzed as some mixture of pure tones of various frequencies and intensities. You might need to postulate hundreds or even thousands of such pure tones to account for a single second of a car crash or a rocket launch or a baby's cry--but the theory says that the analysis can eventually be made as complete and precise as you wish it to be. You could add all the (correctly prescribed) pure tones together and the result would be as close to the original as you would like it to be (time and effort being the determining variables).

But for each of those pure tones (sinusoids), there is one further parameter beyond its amplitude and frequency, and that is its phase. Where, at your chosen time of reference (the onset of the sound, for example, or at any particular moment you may choose before or after that), is that sinusoid within its own ritual pattern of oscillation? Is it just beginning to rise from 0 amplitude toward its eventual positive peak, or is it already moving downward from that peak? Has it crossed the 0 line going downward again, heading toward its negative peak, or is it coming back up toward the 0 line and the next positive peak, etc.? Those questions merely break the cycle of that one pure tone's oscillation down into "quadrants" (as in basic analytic geometry), while in fact phase is an "analog" or continuous-value parameter. The phase of the sinusoid at that moment can have any value whatsoever between 0 and 360 degrees (or between -pi and pi if you're schooled that way).

So what is really affected by moving the source relative to the microphones, or the microphones relative to the source, is the phase of each of those signal components (pure tones) relative to each other, and relative to their counterpart(s) in the other channel of the recording. So when you want to gauge the effect of combining signals in various ways, in most cases you have to deal with that level of complexity (or a statistical summary of it, at least) rather than the more black-and-white-seeming issue of polarity.

-- All that being said, you are still exactly right when you say that the farther apart the two main microphones are, the farther into the low frequencies the discrepancies between them will extend. It's just that those discrepancies are a matter of degree (phase) rather than being absolute (polarity). And the corollary is that at higher frequencies, the farther apart the microphones are, the more the phase relationships tend to become random / diffuse / uncorrelated.
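
A quick way to see why the higher frequencies decorrelate first: a fixed arrival-time difference corresponds to a phase difference that grows with frequency. A small sketch in Python/numpy, with arbitrary example spacing and source angle:

Code:
import numpy as np

c = 343.0                                     # speed of sound, m/s
path_diff = 0.5 * np.sin(np.radians(30))      # example: 50 cm spacing, source 30 deg off center
time_diff = path_diff / c                     # arrival-time difference in seconds

for freq in (50, 200, 1000, 5000):
    phase = (360.0 * freq * time_diff) % 360.0
    print(f"{freq:5d} Hz: {phase:6.1f} degrees between channels")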

The crux seems to be that those discrepancies can be delicious at low frequencies while being confusing and even nasty sounding at midrange and upper-midrange frequencies. This is why some people (I don't know whether there are any on this board, but some people in Europe that I've heard of) record with coincident or closely-spaced pairs of directional microphones placed at the center, while mixing in the signals from a spaced pair of omnis, with "crossover" filtering so that the overall pickup comes increasingly from the spaced pair the farther down in frequency you go.

--best regards
« Last Edit: July 02, 2016, 03:56:09 PM by DSatz »
music > microphones > a recorder of some sort

Offline datbrad

  • Trade Count: (1)
  • Needs to get out more...
  • *****
  • Posts: 2293
  • Gender: Male
Re: minimize effect of shifted stereo image
« Reply #10 on: July 02, 2016, 07:36:23 PM »
After all the previous academic explanations, so complete and in such depth, my answer to the OP's initial question will seem painfully layman-level, but here goes.....

From purely practical experience, I've developed a good sense of the illusory aspects of stereo recordings made off a two-stack PA with a pair of directional mics when off center.

Say you were LOC (left of center): if you visualize the polar patterns and imagine what each mic is hearing, you can decide whether you want to hear just the left stack or achieve a wider soundstage.

I tend to imagine the space between L&R speaker arrays divided into vertical thirds. Then I "aim" the theoretical center of the pattern at the point one third over from the nearer stack, and turn up the gain on the channel of the opposing mic and usually get a decently wide enough image that it's very difficult to tell the recording was made off center.

Good luck!!!
AKG C460B w/CK61/CK63>Luminous Monarch XLRs>SD MP-1(x2)>Luminous Monarch XLRs>PMD661(Oade WMOD)

Beyer M201>Luminous Monarch XLRs>PMD561 (Oade CMOD)

Offline kuba e

  • Trade Count: (1)
  • Taperssection Member
  • ***
  • Posts: 489
  • Gender: Male
Re: minimize effect of shifted stereo image
« Reply #11 on: July 04, 2016, 10:43:37 AM »
DSatz, thank you very much for your help with the theory. I understand that sound can be decomposed into signals of different frequencies at certain levels, and I now understand that the polarity of a signal is an extreme case of phase. It is good that people somewhere in Europe are experimenting with this phase behavior.

Gutbucket, you've mentioned that there is no phase difference (time shift) in a coincident configuration, but there can be polarity differences. Is that caused by sound arriving at an angle such that it hits the front side of the diaphragm of the first mic and the back side of the diaphragm of the second mic?

datbrad, thank you for the practical guidance. That's good news for me: it is possible to make a recording from off center that cannot easily be distinguished from a recording made in the center. I think your practical guidance is consistent with the advice from Gutbucket: try to find the acoustic center, and then tune it in the DAW via level difference and time difference.

Edit
I have one last theoretical question. Please correct me if the following text is wrong.

Stereo shifted by level difference:
If the volume of the right channel is turned up (or the left channel turned down), the result will be closer to correctly angled mics. But the stereo is corrected only within the sector of the PA. It also amplifies the sector outside the PA in one channel, whereas properly angled mics would record that outside sector at a much lower volume.

Correcting wrongly angled mics using a level difference - positives/negatives:
+ Straightens the stereo image within the sector of the PA
- Amplifies the sector outside the PA in one channel - reflections, room acoustics
- The stereo image of the sector outside the PA is shifted to one side
« Last Edit: August 30, 2016, 07:10:27 AM by Kuba E »

Offline kuba e

  • Trade Count: (1)
  • Taperssection Member
  • ***
  • Posts: 489
  • Gender: Male
Re: minimize effect of shifted stereo image
« Reply #12 on: August 30, 2016, 08:04:22 AM »
Stereo correction in the DAW by time difference has positives/negatives:
+ Straightens the stereo image within the sector of the stage
- One side of the sector outside the PA is shifted more toward the playback center, while the other side of the sector outside the PA is shifted much further toward the playback side.

I tried to derive an equation for the time difference, with examples for angled and shifted mics. Please take it with a grain of salt; it is not verified. I'll be glad if someone corrects it.
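
For reference, a commonly used far-field approximation for the time difference produced by a capsule spacing and an off-center source angle is sketched below (Python/numpy, arbitrary example values; it may or may not match the derivation in the attachment):

Code:
import numpy as np

# Far-field approximation: delta_t = spacing * sin(source_angle) / speed_of_sound
spacing, angle_deg, c = 0.38, 20.0, 343.0   # example: 38 cm spacing, source 20 deg off center
delta_t_ms = 1000.0 * spacing * np.sin(np.radians(angle_deg)) / c
print(f"{delta_t_ms:.2f} ms")               # about 0.38 ms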

Changing the time difference has very similar positives/negatives to changing the level difference.

Conclusion:
Angling the mics toward the acoustic center is very important. A bad stereo image caused by mics pointed away from the acoustic center can be corrected only partly.

Offline Life In Rewind

  • Trade Count: (0)
  • Taperssection Member
  • ***
  • Posts: 883
    • www.rovingsign.com
Re: minimize effect of shifted stereo image
« Reply #13 on: August 30, 2016, 09:01:48 AM »
I'm fond of using an old Sony ECM-999PR M/S mic when I know I'm going to be off center. (This is a big, LSD2-sized/style mic.) I'm not sure if the control is continuous - but it has a few preset angles. I use one of the tighter angles (90 deg) and just point the arrow at the loudest thing... rotate as necessary to get some evenness - and fine-tune with levels. Sounds similar to what others are suggesting here.

For some reason - I had the idea that M/S would be more forgiving.

Offline Gutbucket

  • record > listen > revise technique
  • Trade Count: (15)
  • Needs to get out more...
  • *****
  • Posts: 15683
  • Gender: Male
  • "Better to love music than respect it" ~Stravinsky
Re: minimize effect of shifted stereo image
« Reply #14 on: August 30, 2016, 12:35:23 PM »
Quote
Gutbucket, you've mentioned that there is no phase difference (time shift) in a coincident configuration, but there can be polarity differences. Is that caused by sound arriving at an angle such that it hits the front side of the diaphragm of the first mic and the back side of the diaphragm of the second mic?

Yes, sound arriving at the back side of a microphone which has a pattern directivity greater than cardioid (supercardioid, hypercardioid, or bidirectional) will produce a signal that has inverted polarity compared to the signal produced when sound arrives from the front side.  If a pair of microphones are arranged so as to be truly coincident, there can be no time of arrival differences between the two channels, and therefore no phase differences.  However, there can and will be polarity differences between channels, depending on the angle of arrival of the sound and on which side of each microphone the sound impinges.

Example: Let's use bidirectional microphones, which have a symmetrical figure-of-8 shaped pickup pattern that is equally sensitive to sound arriving from directly in front as it is to sound arriving from directly behind.  The only difference is that any sound arriving from the back hemisphere produces a signal with inverted polarity compared to a sound arriving from the front hemisphere.

If you take two bidirectional microphones and arrange them in a coincident X/Y configuration with a 90 degree angle between them (placed so that they are at right angles to each other and as close together as possible - a setup commonly referred to as "Blumlein", or crossed bidirectionals), the microphone array will be equally sensitive to sound arriving from any direction in the horizontal plane - from the front, back, and sides. 

Sounds arriving at the microphones from within the front quadrant will produce signals which may differ in level between the channels, yet will have the same polarity relationship between them.  Likewise, sounds arriving from anywhere within the rear quadrant will also produce signals which may differ in level between channels, and will also have the same polarity relationship between channels.  The polarity relationship of one channel with respect to the other is the same in both cases, even though the polarity is inverted across both channels together for sounds arriving from the rear compared to sounds arriving from the front.  Since the polarity relationship between channels is the same in each case, the resulting left/right stereo imaging works identically for sound sources coming from the front and those coming from the rear.  But because the polarity is inverted in both channels for sounds arriving from the rear compared to sounds arriving from the front, the resulting stereo playback image for sounds from the back quadrant is heard as a reverse Right/Left mirror image between speakers, superimposed upon the Left/Right image produced by sounds arriving from the front.

Sounds arriving at the microphones from the side quadrants will produce signals which have inverted polarity between the Left and Right channels.  So sounds arriving from one side are reproduced somewhat as if you had pointed an X/Y pair of cardioids in that direction and miswired the speakers so that the polarity was inverted to one of them.  Likewise for sounds arriving from the other side, which are reproduced as if the other speaker channel were wired with inverted polarity.

The phase relationship between the Left and Right channels is always the same, regardless of the direction from which the sound arrives.  The polarity relationship between channels differs according to the quadrant from which the sound arrives, and the signal level relationship between channels differs according to the location of the sound within each quadrant.
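
If it helps to see those quadrant relationships as numbers, here is a minimal sketch assuming ideal figure-8 patterns, with angles measured clockwise from front center (Python/numpy):

Code:
import numpy as np

def blumlein_gains(source_deg):
    # Ideal figure-8 gain = cosine of the angle between the source and the mic's axis.
    left = np.cos(np.radians(source_deg + 45.0))    # left mic aimed 45 deg left of center
    right = np.cos(np.radians(source_deg - 45.0))   # right mic aimed 45 deg right of center
    return left, right

for angle in (0, 30, 90, 150, 180, 210, 270, 330):   # degrees clockwise from front center
    l, r = blumlein_gains(angle)
    print(f"{angle:3d} deg: L={l:+.2f} R={r:+.2f} same inter-channel polarity: {l * r >= 0}")

Front- and rear-quadrant sources give both channels the same sign (the same inter-channel polarity), while side-quadrant sources give opposite signs.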
« Last Edit: February 04, 2020, 12:32:51 PM by Gutbucket »
musical volition > vibrations > voltages > numeric values > voltages > vibrations> virtual teleportation time-machine experience
Better recording made easy - >>Improved PAS table<< | Made excellent- >>click here to download the Oddball Microphone Technique illustrated PDF booklet<< (note: This is a 1st draft, now several years old and in need of revision!  Stay tuned)

 
