
Author Topic: SRA discrepancies between different reference tools  (Read 1589 times)


Offline voltronic

  • Trade Count: (28)
  • Needs to get out more...
  • *****
  • Posts: 2555
  • Gender: Male
SRA discrepancies between different reference tools
« on: August 21, 2016, 11:31:08 PM »
I've just found major discrepancies for SRA data between Michael Williams' SRA charts and what is shown on the Sengpiel visualization tool, as well as the Neumann Recording Tools app I sometimes use.  The discrepancies seem to be mostly with spaced omnis, though they do show up other places.

For example: According to Williams, omnis at 41 cm give an SRA of +/- 70°.  In both the Sengpiel and Neumann tools, the SRA is +/- 90° for anything 52 cm and narrower.
Back to Williams: Just shy of 50 cm spacing gives an SRA of +/- 50°, so 52 cm should be a bit narrower than that, and certainly nowhere near +/- 90°.
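For what it's worth, the pure time-of-arrival arithmetic these tools presumably apply to spaced omnis is easy to sketch. The plane-wave formula and speed-of-sound value below are my assumptions about what the tools compute, not anything pulled from them:

```python
import math

C = 343.0  # speed of sound in air, m/s (room temperature)

def interchannel_time_diff_ms(spacing_m, source_angle_deg):
    """Far-field interchannel time difference for a spaced omni pair.

    For a plane wave arriving at angle theta from straight ahead, the
    extra path length to the far microphone is d * sin(theta).
    """
    return spacing_m * math.sin(math.radians(source_angle_deg)) / C * 1000.0

# A source fully off to one side (90 degrees) of 41 cm omnis:
print(round(interchannel_time_diff_ms(0.41, 90), 2))  # prints 1.2

# Spacing needed to reach a 1.5 ms interchannel difference (the 100%
# value in the Sengpiel table quoted later in this thread) at 90 degrees:
print(round(C * 0.0015, 2))  # prints 0.51
```

Interestingly, 343 m/s times Sengpiel's 1.5 ms full-shift time difference comes out to roughly 52 cm, which may be exactly where that tool's +/- 90° cutoff for narrower spacings comes from.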

The Schoeps Image Assistant shows yet another set of SRA results, different from any of the others with AB omnis.  Entering 41 cm omnis on Image Assistant gives us 94° (total angle), nowhere near the 140° total for that distance in Williams.

Finally, Image Assistant agrees with Williams for directional mic arrays (ORTF, NOS, etc.), but the numbers on the Sengpiel and Neumann tools for those are significantly different (by the same amount). 
Example: Both Image Assistant and Williams' chart have ORTF at an SRA of 110° total, whereas Sengpiel and Neumann show 96°.

What's going on here?  If we can trust Michael Williams and his research, then either these other tools are not using the same data in their calculations, or their accuracy may be in question.

I realize that this all may be purely academic if we're letting our ears be our guide, but I'd like to have accurate data to set my ears on the right path.
DPA 4061 | Line Audio CM3 | Naiant X-Q
Naiant PFAs | Shure FP24
Tascam DR-70D JWMod | Sony PCM-M10

Tascam DR-70D FAQ
Team Line Audio
Quote
I am hitting my head against the walls, but the walls are giving way.    ///    If a composer could say what he had to say in words he would not bother trying to say it in music.
- Gustav Mahler

Offline rocksuitcase

  • Trade Count: (1)
  • Needs to get out more...
  • *****
  • Posts: 4931
  • Gender: Male
    • RockSuitcase: stage photography
Re: SRA discrepancies between different reference tools
« Reply #1 on: August 22, 2016, 10:47:31 AM »
Volt, I have no input here as I don't use the tools; I follow gutbucket's explanations of the Michael Williams charts. But I can understand why other charts would be different if they are based upon differing inputs.
So I'm following this thread waiting to hear what others tell us.
music IS love

When you get confused, listen to the music play!

Mics:         AKG460|CK61|CK1|CK3|CK8|Beyer M 201E
Recorders:Marantz PMD661 OADE Concert mod; Tascam DR680 MKI

Offline Gutbucket

  • record > listen > revise technique
  • Trade Count: (13)
  • Needs to get out more...
  • *****
  • Posts: 12409
  • Gender: Male
Re: SRA discrepancies between different reference tools
« Reply #2 on: August 22, 2016, 11:44:37 AM »
These visualization applications mathematically calculate and graph their output information, but the data sets used for the calculations were empirically derived through various listening tests. Basically, to derive the listening-curve data, test subjects are placed in front of a stereo playback system and instructed to "close your eyes and point to the apparent location of the sound source" for a number of sound stimuli whose attributes correspond to various microphone setups.  These data sets concerning apparent stereophonic image localization are somewhat "soft" in that they represent averages of human psychoacoustic listening traits. Different tests produce somewhat different results, depending on the test setup, the test stimulus, and various other experimental variables.  In particular, different types of test stimulus, such as clicks versus tones versus speech or music, produce different results even if all the other experimental variables are held constant.  So different visualizations produce somewhat different results depending upon which data set(s) they base their calculations on.

Add to that the more diffuse nature of the stereo imaging produced by time-difference stereo versus level-difference stereo, which will tend to increase variance in localization predictions when specifying omnis.  As far as I'm aware, the localization curves for omnidirectional microphones used by all of these visualizers are 100% time-of-arrival based.  They take into account neither the level difference between channels for a source close enough to one of the two microphones to generate one, nor the increasing directionality of omnis at high frequencies for microphones of significant size.
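As an illustration of that first omission, here's a rough sketch of the interchannel level difference that inverse-distance (1/r) spreading alone produces for a close source near a spaced omni pair. The geometry and the numbers here are purely hypothetical examples of mine, not anything from the visualizers:

```python
import math

def close_source_level_diff_db(spacing_m, source_dist_m, source_angle_deg):
    """Interchannel level difference from 1/r spreading alone, for a point
    source near a spaced omni pair.  Plane-wave (far-field) SRA curves
    assume this difference is zero."""
    a = math.radians(source_angle_deg)
    # Source position relative to the center of the pair; mics sit at
    # (+spacing/2, 0) and (-spacing/2, 0).
    x, y = source_dist_m * math.sin(a), source_dist_m * math.cos(a)
    half = spacing_m / 2.0
    r_near = math.hypot(x - half, y)
    r_far = math.hypot(x + half, y)
    return 20.0 * math.log10(r_far / r_near)

# A source 1 m away at 45 degrees off-center of 41 cm omnis picks up a
# few dB of level difference on top of the time difference:
print(round(close_source_level_diff_db(0.41, 1.0, 45), 1))

# The same source from 10 m away produces almost none:
print(round(close_source_level_diff_db(0.41, 10.0, 45), 2))
```

So for close placement, the real image shift should be somewhat larger than a purely time-based prediction suggests.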

Further blurring 'precision accuracy' when various curves are compared and/or combined in aggregate is the general tendency for them to agree towards the middle of their ranges while showing increased variability and dissimilarity at the outer edges.  Hence the 75% SRA figures seen in Williams and often quoted in visualization applications: within that central range the different data sets are more in agreement and generally provide more reliable predictions of the auditory localisation experienced by an average listener.

Some of the various localization curves derived by different researchers are compared in these documents hosted at Herr Sengpiel's site-
http://www.sengpielaudio.com/InterchannelLevelDifferenceTimeDifference1.pdf
http://www.sengpielaudio.com/InterchannelLevelDifferenceTimeDifference2.pdf
musical volition > vibrations > voltages > numeric values > voltages > vibrations> virtual teleportation time-machine experience
Better recording made easy - >>Improved PAS table<< | Made excellent- >>click here to download the Oddball Microphone Technique illustrated PDF booklet<<

Offline Gutbucket

Re: SRA discrepancies between different reference tools
« Reply #3 on: August 22, 2016, 12:44:16 PM »
Try it yourselves!  We can do our own listening tests and reach our own conclusions, based on our specific gear, playback systems, ears and brains. No data beats your own, generated with your own mics, setup, playback and ears. It doesn't have to be anything fancy, and I've learned a great deal from doing this.

If I have the time to do so, and as long as it's not going to freak out onlookers too much, I'll frequently do a quick "walk-around test" recording of my recording setup, before or after making the intended recording, but as set up in that location.  It's the same thing Alan Blumlein did in test recordings made with his stereo recording and playback systems in the 1930s: walk all the way around the microphone setup in a circle while calling out the numbers of the clock, starting from directly in front and going clockwise all the way around until reaching the front again.  For good consistency, walk a big enough circle so that you aren't overly close to the mics (keep something like at least 10 feet away or more if possible), try to maintain an equal distance from the mics all the way around, speak directly towards the mics at each position, keep the clock-geometry announcements reasonably accurate (9:00 at 90 degrees left, not closer to 8:00 or 10:00), and try to keep the timing between positions and announcements even as you work your way around.  To provide a secondary stimulus signal significantly different from my voice, loud and clear enough to be easy to hear on the recording even in a noisy post-concert environment, I keep a dog clicker in the recording bag and click it just before calling out the number of each position.

As a side bonus, this quick odd-looking 30 second or so walk around announcement test sometimes provides great entertainment to bewildered onlookers, as a number of members here I've taped with can relate.

Once home, listening to that short test recording provides loads of useful feedback about the behavior of the microphone array, as well as about the particularities of the recording environment.  Listen for the apparent playback location as the numbers are called out from various positions around the array. Listen to how the imaging works as one moves around the circle from point to point. Listen for differences in sharp versus diffuse image positioning for the various locations around the mic array.  Listen for how even or how stretched/squashed the apparent distance between each position is across the soundstage.  In addition to the source localization and imaging aspects, listen for differences in pickup sensitivity as the sound source moves around the mic array, as well as differences in clarity, timbre, apparent source distance, reverberance, etc.  Listen on speakers and on headphones and compare the two.  Compare various mic setups and various combinations of mics in multi-microphone arrays.  It's great taper geek fun, both entertaining and highly enlightening.

I've never plotted out my own listening data on a graph or anything like that, but doing this has provided a fantastic real-world sense of how all this stuff works in terms of what I actually hear. I started doing this for my own edification when I was recording with typical two-channel setups years ago, and it became even more valuable to me when I moved to multi-microphone 2-channel stereo and multichannel surround recording arrays.
« Last Edit: August 23, 2016, 09:41:55 AM by Gutbucket »

Offline Gutbucket

Re: SRA discrepancies between different reference tools
« Reply #4 on: August 22, 2016, 01:26:17 PM »
^
This has also been very useful for me in determining what actually matters and what doesn't so much.  For instance, in my surround recording setups it's easy to critically assess the image-location accuracy, or the lack of it, for each position around the clock. But for music recording and reproduction, does pin-point location accuracy really matter for locations off to the sides and behind the listener? For some quadrants it's far more important to achieve an open and enveloping sense of space than location accuracy (like for the audience and room-verb stuff coming from the sides and the rear), whereas for the front quadrant a more positionally accurate 'imaging window', within which source locations are reproduced in a relatively undistorted way, is more appropriate.

This kind of real-world listening feedback has informed a lot of my thinking over the years about what really matters, and in turn has informed each stage of development of my oddball microphone setups.  It's why I think more in terms of direct/reverberant sound components and typically rank the front/back relationship as more important than left/right in terms of stereophonics.

Offline voltronic

Re: SRA discrepancies between different reference tools
« Reply #5 on: August 22, 2016, 01:28:37 PM »
Quote from: Gutbucket on August 22, 2016, 11:44:37 AM
These visualization applications mathematically calculate and graph their output information, but the data sets used for the calculations were empirically derived through various listening tests. [...]

Thanks for weighing in.  Didn't Williams do similar listening tests and measurements to arrive at his numbers?  Are you saying that both of them did similar tests but with different results, and that my Neumann app is likely using Sengpiel's data?  And then did Helmut Wittek conduct his own unique tests as well to develop the data for Image Assistant?  It's curious that his app matches Williams so closely for directional arrays (across the full range of measurements, not just in the middle) but is very different for omnis.

Thinking about such a listening test with the "fuzzier" spaced-omni localization, it does seem to make sense that there would be more variation between studies for that kind of array.

I understand what you mean by the "75% SRA figures seen in Williams", but I can't find anything in Williams' papers stating that his SRA graphs are for 75% SRA rather than 100%.  In fact, the new version of the Image Assistant shows both 75% and 100% figures, and it's the 100% figures that match Williams (for directional arrays, not for omnis, as I said before).  I'm not sure whether that's conclusive of anything or not.

Personally I don't feel comfortable using the Sengpiel or Neumann tools for omnis anymore, being that they are stuck at +/- 90° SRA below 52 cm and I'm using narrower spacings than that lately for up-close / large-ensemble recording. 

As for the rest of it, I suppose it comes down to different tests generating different data, and that we should be using the data from these as a guide only.

Offline Gutbucket

Re: SRA discrepancies between different reference tools
« Reply #6 on: August 23, 2016, 01:35:25 PM »
Yeah, it's a guide rather than a rule, based on fuzzy, perception-based empirical data.  None of this is absolute, yet it's quite useful as a guide within its appropriate domain.  Always keep in mind that all this SRA stuff only concerns apparent source-imaging location.  It says nothing about the other important stereophonic attributes, which often matter more than imaging accuracy, especially with spaced omnis, where imaging accuracy takes a backseat to them.  In general it applies more clearly and straightforwardly to directional mics than to omnis.

The 100% SRA data is what all these calculations are based upon.  The 75% SRA thing isn't Williams' but Theile's idea, I believe.  It's simply the range in which agreement between the different curves is somewhat less fuzzy, where comparisons between alternate configurations that nominally achieve similar imaging qualities are somewhat more valid.  Most of the angular distortion manifests in the outer 25% of the SRA for many of the curves.

Yet similarly, Williams says the following in one of his papers about combining two frequency-independent mic arrays for 2-channel stereo-
Quote
In this kind of configuration, it is not possible to obtain angular coincidence between the two sound images throughout the stereophonic recording angle. It is therefore necessary to compromise and consider the similarity of Angular Distortion only in the centre segment occupying 66% of the SRA. The outer 33% will be impossible to match between the equivalent pairs.

I'm not sure what data the Neumann Stereo Tools app uses.  The PDFs I linked above show the difference in the curves that Williams and Sengpiel used at least-

Quote
Red curve: Master's thesis by Gert Simonsen at the Technical University of Denmark, Lyngby (1984), also
used by Michael Williams. Test signals: maracas and claves clicks.

Blue curve: Eberhard Sengpiel and tonmeister students (1992) used as test signals monaural human speech
and a Beethoven string quartet in a usual control room with Bowers & Wilkins 801 Matrix 3 loudspeakers.

Williams' curves are based upon data from G. Simonsen's 1984 master's thesis, using percussive clicks reproduced over speakers separated by a 60 degree angle.  Time and level differences were introduced between channels and test subjects were asked to identify the apparent source location of the reproduced clicks.

Eberhard Sengpiel's visualizer is based on his data collected in 1992, using human speech and Beethoven string quartet music.
Quote
In 1992, these values of 25 test subjects (students) in the Teldec control room (B & W Matrix™ 801 Series 3) were determined with test signals of speech, and a Beethoven string quartet, so without the usual higher-frequency unmusical clicks.

Sengpiel's localisation curves are defined by the following empirical values in combination with the idealized polar sensitivities for each microphone directivity pattern:

Direction of phantom image    10 %       25 %       50 %       75 %      100 %
Level difference ΔL           0 dB       3 dB       6.5 dB     11 dB     18 dB

Direction of phantom image    20 %       25 %       50 %       75 %      100 %
Time difference Δt            0 ms       0.23 ms    0.48 ms    0.81 ms   1.5 ms
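For anyone who wants to play with those numbers, here's a minimal interpolation sketch over the time-difference values above. Treating 0 ms as 0% shift is my assumption (the table's first column reads 20%, possibly a transcription quirk), and straight-line interpolation between the tabulated points is a simplification of Sengpiel's actual curves:

```python
# Sengpiel's empirical time-difference points as quoted above
# (interchannel delay in ms -> phantom-image direction in % of full shift).
# NOTE: 0.0 ms -> 0 % is my assumption, not the table as printed.
DT_POINTS = [(0.0, 0.0), (0.23, 25.0), (0.48, 50.0), (0.81, 75.0), (1.5, 100.0)]

def phantom_direction_pct(dt_ms):
    """Linearly interpolate phantom-image direction from interchannel delay."""
    dt_ms = abs(dt_ms)
    if dt_ms >= DT_POINTS[-1][0]:
        return 100.0  # image fully in one speaker; the curve saturates
    for (t0, p0), (t1, p1) in zip(DT_POINTS, DT_POINTS[1:]):
        if dt_ms <= t1:
            return p0 + (p1 - p0) * (dt_ms - t0) / (t1 - t0)

print(phantom_direction_pct(0.48))  # prints 50.0
```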
« Last Edit: August 23, 2016, 01:36:58 PM by Gutbucket »

Offline Gutbucket

Re: SRA discrepancies between different reference tools
« Reply #7 on: August 23, 2016, 01:45:03 PM »
Quote from: voltronic on August 22, 2016, 01:28:37 PM
Personally I don't feel comfortable using the Sengpiel or Neumann tools for omnis anymore, being that they are stuck at +/- 90° SRA below 52 cm and I'm using narrower spacings than that lately for up-close / large-ensemble recording.

Omni spacing is sort of its own world.  It's a lot more subjective, and it's where these tools fall short: so much else is going on that is more audibly apparent than the SRA imaging qualities.  I don't find these tools especially useful as a guide when it comes to spacing omnis.  They can be informative for some things involving omnis, but are in no way the last word.

Offline voltronic

Re: SRA discrepancies between different reference tools
« Reply #8 on: August 23, 2016, 03:31:09 PM »
Thanks for that additional information.  I missed the caption on the red curves in the first PDF saying they were used by Williams.  The fact that those curves are so different from those of the other tests explains quite a bit.

Quote from: Gutbucket on August 23, 2016, 01:35:25 PM
Yet similarly, Williams says the following in one of his papers about combining two frequency-independent mic arrays for 2-channel stereo-
Quote
In this kind of configuration, it is not possible to obtain angular coincidence between the two sound images throughout the stereophonic recording angle. It is therefore necessary to compromise and consider the similarity of Angular Distortion only in the centre segment occupying 66% of the SRA. The outer 33% will be impossible to match between the equivalent pairs.

Quotes like that reinforce the need for lots of trial and error and experience when you've got 4 or more mics in play, such as your "oddball" techniques, or things like the "Faulkner II" omni-subcard array.

Spaced omnis are still something I'm trying to fully wrap my head around, especially as I get better results with narrow spacings.  A year ago I would have thought something like 40 cm would have very poor imaging, but that's not the case.

Offline Gutbucket

Re: SRA discrepancies between different reference tools
« Reply #9 on: August 23, 2016, 07:02:15 PM »
Quote from: voltronic on August 23, 2016, 03:31:09 PM
Spaced omnis are still something I'm trying to fully wrap my head around, especially as I get better results with narrow spacings.  A year ago I would have thought something like 40 cm would have very poor imaging, but that's not the case.

So much has to do with distance from the source, a large part of what makes live music recording from out in the audience so unusual.  In terms of imaging, 40 cm between omnis isn't much from out in the hall, but is plenty from relatively close onstage.  Yet in terms of diffuse field correlation (DFC), which strongly affects the ambient sense and quality of the reverberant pickup, the spacing relationship is the same whether the mics are placed well back in the hall or relatively close onstage.  That's getting back to the 'team effort' thing of using more than two mics, arranged in such ways as to un-link imaging and DFC and thus remove some of the compromise inherent in being restricted to a single pair of mics for those often conflicting requirements.
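For the curious, the textbook idealized formula for the spatial coherence of a diffuse field between two omni receivers, sin(kd)/(kd), shows why the DFC relationship depends only on spacing and frequency, not on distance from the source. This is the standard ideal-diffuse-field result, not something taken from the tools discussed here:

```python
import math

C = 343.0  # speed of sound, m/s

def diffuse_field_coherence(spacing_m, freq_hz):
    """Spatial coherence of an ideal diffuse field between two omni
    receivers spaced d apart: sin(kd)/(kd), with k = 2*pi*f/c."""
    kd = 2.0 * math.pi * freq_hz * spacing_m / C
    return 1.0 if kd == 0 else math.sin(kd) / kd

# 40 cm omnis: reverberant pickup is correlated at low frequencies and
# largely decorrelated above a few hundred Hz.
for f in (50, 100, 200, 500, 1000):
    print(f, round(diffuse_field_coherence(0.40, f), 2))
```

The first null for a 40 cm pair falls near c/(2d), about 430 Hz; above that the reverberant pickup stays mostly decorrelated regardless of how far the pair is from the stage.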

Offline Gutbucket

Re: SRA discrepancies between different reference tools
« Reply #10 on: August 30, 2016, 04:25:23 PM »
Found the statement by D.Satz quoted below in the recent "minimize effect of shifted stereo image" thread; it applies here with regard to the 'fuzziness' of the data sets on which these microphone configuration tools are based-

Quote from: D.Satz
Kuba, congratulations on what you're doing. I hope it continues to be interesting and productive for you.

-- Your questions about equivalences between time differences (delay) and level differences can be answered clearly only for abstract, simplified cases such as a single pure test tone at some frequency such as 400 Hz. With actual program material, the correspondence is much harder to estimate because it depends so much on the content of the material. For example, a close-miked recording of someone playing cymbals (with very strong upper-midrange content and lots of transient energy) would be affected more obviously by timing and level shifts than a semi-distant recording of a men's chorus singing a series of sustained "oooh" and "aaah" sounds in a reverberant space. So, while there is at least one "rule of thumb"-type formula that lets you convert time shift into a roughly equivalent level shift and vice versa, it is only approximate.
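One way to build such a rule of thumb from data already in this thread: Sengpiel's two tables quoted above pair level and time differences that map to the same phantom-image direction (3 dB with 0.23 ms, 6.5 dB with 0.48 ms, and so on). The sketch below simply interpolates between those pairs; as D.Satz says, any such conversion is only approximate and depends heavily on the program material:

```python
# Level and time differences that Sengpiel's data map to the same
# phantom-image direction (from the tables quoted earlier in the thread).
EQUIV = {  # direction % -> (level difference dB, time difference ms)
    25: (3.0, 0.23),
    50: (6.5, 0.48),
    75: (11.0, 0.81),
    100: (18.0, 1.5),
}

def equivalent_time_ms(level_db):
    """Very rough time difference producing the same image shift as
    level_db, by linear interpolation between Sengpiel's tabulated pairs."""
    pts = sorted(EQUIV.values())
    if level_db <= pts[0][0]:
        return pts[0][1] * level_db / pts[0][0]
    for (d0, t0), (d1, t1) in zip(pts, pts[1:]):
        if level_db <= d1:
            return t0 + (t1 - t0) * (level_db - d0) / (d1 - d0)
    return pts[-1][1]  # beyond 18 dB the image is already fully shifted

print(round(equivalent_time_ms(6.5), 2))  # prints 0.48
```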

 

© 2002-2018 Taperssection.com