
Author Topic: Acoustics of the Shoebox  (Read 6677 times)


Offline bombdiggity

  • Trade Count: (11)
  • Needs to get out more...
  • *****
  • Posts: 2277
Acoustics of the Shoebox
« on: December 01, 2017, 05:40:51 PM »
Not quite sure what general subject thread this fits under, but it may be closest here. 

There's an interesting study of how room design affects the audience's experience, summarized here (and apparently published in the Proceedings of the National Academy of Sciences in 2014):

https://www.concertgebouw.nl/uw-bezoek/gebouw-geschiedenis/beroemde-akoestiek/nrc-handelsblad-akoestiek-van-de-schoenendoos

Google translate seemed to do a pretty good job with it. 

The way the room's shape affects the reverb across different segments of the frequency range, in conjunction with the type of music, lays out (and, in their study, measures) a number of the factors that interact before our mics even enter the picture. 

Gear:
Audio:
Schoeps MK4V
Nak CM-100/CM-300 w/ CP-1's or CP-4's
SP-CMC-25
>
Oade C mod R-44  OR
Tinybox > Sony PCM-M10 (formerly Roland R-05) 
Video: Varied, with various outboard mics depending on the situation

Offline Gutbucket

  • record > listen > revise technique
  • Trade Count: (15)
  • Needs to get out more...
  • *****
  • Posts: 15700
  • Gender: Male
  • "Better to love music than respect it" ~Stravinsky
Re: Acoustics of the Shoebox
« Reply #1 on: December 04, 2017, 04:06:05 PM »
A coffered ceiling and niched sidewalls are important detail elements as well, apparently.  David Griesinger is an expert on concert hall acoustics as well as a recording engineer, and always a good read on these topics: www.davidgriesinger.com/

Quote
The six Finns measured the famous acoustics of the Concertgebouw. [snip] The more than thirty loudspeakers on the stage together formed a 'loudspeaker orchestra'. Mozart, Beethoven, Bruckner came from the speakers, but without an orchestra being involved.

No Sibelius?
musical volition > vibrations > voltages > numeric values > voltages > vibrations> virtual teleportation time-machine experience
Better recording made easy - >>Improved PAS table<< | Made excellent- >>click here to download the Oddball Microphone Technique illustrated PDF booklet<< (note: This is a 1st draft, now several years old and in need of revision!  Stay tuned)

Offline wforwumbo

  • Trade Count: (11)
  • Taperssection Regular
  • **
  • Posts: 186
Re: Acoustics of the Shoebox
« Reply #2 on: December 05, 2017, 12:04:49 PM »
Cool to see David Griesinger cited here - David is a good friend of mine, and I've had the pleasure of interning for him. Just saw him a few weeks ago, he's been working on a cool new mechanism for playback of binaural stimuli.

Regarding the purpose of this post... I actually have a plan. Given my degree is in architectural acoustics and I've both taken and TA'd multiple semesters on the topic, I might try to put together some notes on room acoustics for tapers and contribute them to the knowledge base. That is, if people are interested in reading about it and want to learn more about the topic.
North Jersey native, Upstate veteran, proud Texan

2x Schoeps mk2; 2x Schoeps mk21; 2x Schoeps mk4

4x Schoeps cmc5; 4x Schoeps KC5; Nbob KCY; Naiant PFA

EAA PSP-2

Sound Devices Mixpre-6

Offline bombdiggity

  • Trade Count: (11)
  • Needs to get out more...
  • *****
  • Posts: 2277
Re: Acoustics of the Shoebox
« Reply #3 on: December 05, 2017, 01:00:29 PM »
^ That would be fun and probably useful (particularly to the extent we can choose our location/s at any given event).  The audience may be a subset even here but there are a few of us... 

The David Griesinger information is excellent.  I liked this brief piece: https://www.classical-scene.com/2011/10/11/9243/  I seem to have stumbled into this aspect myself: "You cannot use any type of first-order microphone without putting the microphones much closer to the music than would be ideal for concert listening. This increases the direct to reverberant ratio and widens the image enough that the recording begins to have the clarity of natural hearing."  It is highly unlikely I'll be in settings where running multiple mics is feasible, and I'm not likely to want to do that anyway, so I tend to focus on how to get the most from a pair within the myriad limitations I have to deal with.  I've learned how to pretty consistently get what I like, which might not be everyone's exact cup of tea, but I feel satisfied that I accomplished my goal. 

I am growing more curious about how to capture the coloring that useful reverberance can provide, but too often there is too much of it rather than too little. 
Gear:
Audio:
Schoeps MK4V
Nak CM-100/CM-300 w/ CP-1's or CP-4's
SP-CMC-25
>
Oade C mod R-44  OR
Tinybox > Sony PCM-M10 (formerly Roland R-05) 
Video: Varied, with various outboard mics depending on the situation

Offline wforwumbo

  • Trade Count: (11)
  • Taperssection Regular
  • **
  • Posts: 186
Re: Acoustics of the Shoebox
« Reply #4 on: December 05, 2017, 01:49:16 PM »
Quote
^ That would be fun and probably useful (particularly to the extent we can choose our location/s at any given event).  The audience may be a subset even here but there are a few of us... 

The David Griesinger information is excellent.  I liked this brief piece: https://www.classical-scene.com/2011/10/11/9243/  I seem to have stumbled into this aspect myself: "You cannot use any type of first-order microphone without putting the microphones much closer to the music than would be ideal for concert listening. This increases the direct to reverberant ratio and widens the image enough that the recording begins to have the clarity of natural hearing."  It is highly unlikely I'll be in settings where running multiple mics is feasible, and I'm not likely to want to do that anyway, so I tend to focus on how to get the most from a pair within the myriad limitations I have to deal with.  I've learned how to pretty consistently get what I like, which might not be everyone's exact cup of tea, but I feel satisfied that I accomplished my goal. 

I am growing more curious about how to capture the coloring that useful reverberance can provide, but too often there is too much of it rather than too little.

Yup, happy to contribute what I can. I'll LaTeX all my notes into PDFs, and gear the discussion and topics (with sources) specifically towards live tapers, and specifically those without a background in differential equations.

Something you - and everyone here - should know about David is that he really ONLY thinks about recording music in the sense of classical and opera. So for one thing, instrumentation is obviously different. For another, it is frequently unamplified (and when it is amplified, the amplification definitely isn't at the volume or speaker array placement of most rock PAs). And for a third - possibly most importantly - the audience is almost always pindrop silent in a classical/opera performance, which obviously isn't the case at, say, a rock gig. For David's work recording classical music, his way of thinking makes a lot of sense. He wants a HIGH degree of accuracy in spatial information presented in a stereo field, and he thinks about room reflections as a way to enhance that. Not to mention, for stuff like classical music the thresholds of binaural perception (for example, binaural sluggishness) are radically different....

When you put a rock band with live PA reinforcement in any venue (ie the type of music that tends to be discussed around here), a room reacts VERY differently. I've had some discussions with David about the types of shows I attend (which are usually modern ambient/electroacoustic, and Phish), and he's admitted it's an entirely different ballgame, with a different set of variables that change the way of thinking about capture and playback. Not that David's points are invalidated particularly for our discussion here and the ones I tend to have with him, but take lots of what he's talking about with a grain of salt when thinking about your own tapes.

There is another point I want to mention here too, which is brief but more academic than it is practical: David's usage of the words "Clarity" and "Presence" are not the commonly-agreed-upon definitions. Rather, he has re-defined them to fit more conveniently with his own thoughts of recording and listening. It's not worth getting into the nitty-gritty details of the distinctions, but just know that he has his own definitions for those terms.
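For reference, the commonly-agreed-upon room-acoustics versions of "clarity" and the related "definition" metric (the standard ISO 3382-1 quantities, as I understand them, which is what he's departing from) look like this:

C_{80} = 10\,\log_{10}\!\left(\frac{\int_{0}^{80\,\mathrm{ms}} p^{2}(t)\,dt}{\int_{80\,\mathrm{ms}}^{\infty} p^{2}(t)\,dt}\right)\ \mathrm{dB},
\qquad
D_{50} = \frac{\int_{0}^{50\,\mathrm{ms}} p^{2}(t)\,dt}{\int_{0}^{\infty} p^{2}(t)\,dt},

where p(t) is the impulse response measured at the listening position.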

What IS worth thinking about, as far as David's theories and applying them to the type of music we tend to tape, is the TIMING of early reflections. Note: what follows is my opinion based on experience, knowledge of theory, and personal testing; it is by no means law, and it certainly isn't necessarily rigorous, rather it's my beliefs on the matter. The time at which an early reflection arrives is sometimes more important than its strength. If early reflections arrive too late, then we perceive them as separable auditory events rather than as directional cues for the volume of a room or distance from side walls. This is what we commonly refer to in live performance as a slapback echo; it can sound disorienting and a bit confusing when you're trying to get a sense of the balance of a live performance. Diffuse reflections, as a corollary, if too strong, will blur the stereo image, make things sound distant, and cause instruments to start becoming one giant smudge rather than a balanced stereo image.
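To put rough numbers on that timing, here's a toy sketch (my own illustration, with ballpark assumptions: ~343 m/s speed of sound and an ~80 ms fusion threshold, which in reality varies with level and program material). The extra delay of a reflection is just its path-length difference from the direct sound divided by the speed of sound.

Code:
# Toy sketch: estimate when a single side-wall reflection starts to read
# as a separate "slapback" rather than fusing with the direct sound.
# Assumptions (illustrative only): speed of sound ~343 m/s, and a rough
# 80 ms fusion/echo threshold -- the real threshold varies with program
# material and level.

SPEED_OF_SOUND = 343.0    # m/s, at roughly room temperature
ECHO_THRESHOLD_MS = 80.0  # rough upper bound for fusion with the direct sound

def reflection_delay_ms(direct_path_m, reflected_path_m):
    """Extra arrival time of a reflection relative to the direct sound."""
    return (reflected_path_m - direct_path_m) / SPEED_OF_SOUND * 1000.0

def classify(direct_path_m, reflected_path_m):
    delay = reflection_delay_ms(direct_path_m, reflected_path_m)
    if delay <= ECHO_THRESHOLD_MS:
        return delay, "likely fuses with the direct sound (spatial cue)"
    return delay, "likely heard as a separate slapback echo"

# Example: source 20 m away, reflection bouncing off a far side wall for a
# 48 m total path -> about 82 ms late, right around the danger zone.
print(classify(20.0, 48.0))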

All three components of an impulse response - direct sound, early reflections, and late reverberation - work closely together in concert (no pun intended) to produce the resulting sound field perceived by a listener. This is more or less the foundation of David's work - trying to get a good balance amongst all three to optimize the playback experience. Put another way, he's trying to understand the entire system from origin of sound propagation (as well as its potential contents) all the way to neural firing, and he's trying to reverse-engineer all the pieces of the puzzle to get it to all fit together in a manner he can control. Hell, it's why he created random hall and the Lexicon 224 (he's shown me pictures of him literally hand-building the RAM, creating his own compiler/assembler for TI DSPs, and his custom command line/controls for creating complex DSP code from when he designed the 224). I'm not gonna give away any of his DEEP secrets, but I'll happily discuss whatever he shares openly within the academic community.
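To make those three components concrete, here's a minimal sketch (mine, not anything of David's) that chops a measured impulse response into direct / early / late windows and reports the energy balance between them. The window edges (direct within ~5 ms of the first arrival, early reflections out to ~80 ms, everything later counted as late reverb) are common rules of thumb, not gospel.

Code:
import numpy as np

def ir_energy_balance(ir, fs, direct_ms=5.0, early_ms=80.0):
    """Fraction of impulse-response energy in the direct / early / late
    windows, measured from the direct-sound arrival.  Window edges are
    rough rules of thumb, not standardized values."""
    ir = np.asarray(ir, dtype=float)
    t0 = int(np.argmax(np.abs(ir)))             # index of the direct arrival
    n_direct = t0 + int(direct_ms * 1e-3 * fs)
    n_early = t0 + int(early_ms * 1e-3 * fs)
    energy = ir ** 2
    total = energy.sum()
    return (energy[:n_direct].sum() / total,          # direct
            energy[n_direct:n_early].sum() / total,   # early reflections
            energy[n_early:].sum() / total)           # late reverberation

# Crude synthetic example: a direct spike, one reflection at +30 ms, and a
# decaying noise tail standing in for late reverb.
fs = 48000
rng = np.random.default_rng(0)
ir = np.zeros(fs)
ir[100] = 1.0                                   # direct sound
ir[100 + int(0.030 * fs)] = 0.4                 # early reflection
tail = 100 + int(0.080 * fs)
t = np.arange(fs - tail) / fs
ir[tail:] = 0.1 * np.exp(-t / 0.4) * rng.standard_normal(t.size)
print(ir_energy_balance(ir, fs))                # prints the three fractions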

I know I ranted a bit non-linearly here, and some of the topics I mentioned and how I spoke about them might be a bit hand-wave-y (I'm currently on a massive 4-day coding binge with about 9 hours of sleep in that time span). If you want to know more about anything I've just mentioned, please let me know and I'll happily explain in further detail and at a lower level.
North Jersey native, Upstate veteran, proud Texan

2x Schoeps mk2; 2x Schoeps mk21; 2x Schoeps mk4

4x Schoeps cmc5; 4x Schoeps KC5; Nbob KCY; Naiant PFA

EAA PSP-2

Sound Devices Mixpre-6

Offline Gutbucket

  • record > listen > revise technique
  • Trade Count: (15)
  • Needs to get out more...
  • *****
  • Posts: 15700
  • Gender: Male
  • "Better to love music than respect it" ~Stravinsky
Re: Acoustics of the Shoebox
« Reply #5 on: December 05, 2017, 02:10:32 PM »
I've been waiting for you to join TS for the past 10 years.
musical volition > vibrations > voltages > numeric values > voltages > vibrations> virtual teleportation time-machine experience
Better recording made easy - >>Improved PAS table<< | Made excellent- >>click here to download the Oddball Microphone Technique illustrated PDF booklet<< (note: This is a 1st draft, now several years old and in need of revision!  Stay tuned)

Offline rocksuitcase

  • Trade Count: (4)
  • Needs to get out more...
  • *****
  • Posts: 8281
  • Gender: Male
    • RockSuitcase: stage photography
Re: Acoustics of the Shoebox
« Reply #6 on: December 05, 2017, 03:20:51 PM »
Quote
Cool to see David Griesinger cited here - David is a good friend of mine, and I've had the pleasure of interning for him. Just saw him a few weeks ago, he's been working on a cool new mechanism for playback of binaural stimuli.

Regarding the purpose of this post... I actually have a plan. Given my degree is in architectural acoustics and I've both taken and TA'd multiple semesters on the topic, I might try to put together some notes on room acoustics for tapers and contribute them to the knowledge base. That is, if people are interested in reading about it and want to learn more about the topic.
Great idea and thanks for adding what you just did below. Griesinger's work on binaural perception was influential in our research and patent applications for our room acoustics and playback hardware studies at Syracuse University in the early to mid 1980's. (lots of Hafler, Bose, Polk and early Altec research referenced as well)

I have forgotten much more acoustical engineering than I still retain, and the math for me is a bit difficult to explain these days, which is why I applaud your entry into our community and thank you for adding what you can and have time to add. While I am aware that you are speaking on a technical level a bit above most of us, this is a great way for those of us who think about sound in these terms to discuss it. Please continue to add to these discussions- it is excellent information.

For physically altering the timing of early reflections when designing small-to-medium studio or performance spaces, Vincent van Haaff found it useful to install an interchangeable or flippable triangular GoBo into the side walls, essentially allowing the end user to change the room acoustics based on the type of audio, number of musicians, type of instruments, etc.
https://vintageking.com/blog/2016/05/vincent-van-haaff/    Note the fixture I have circled in the photo below: this is a changeable, three-sided GoBo with each side of the triangle built of a different type of reflecting or absorbing material(s), i.e. absorptive material of a specific absorption coefficient on one side, wooden slats of differing widths on another, and both slats and absorbers on the third. We applied for a patent on this style of design with him and the inventor, Gordon Merrick, but it didn't get approved (probably many similar design ideas, or prior art, before our application). I looked for a better photo, but can't locate it; will update when I can.

I note Griesinger's recent work on binaural reproduction and headphone equalization. Very interesting and current to the way many people listen. He is using what he terms "his avatar" - a fully anthropomorphic copy of his pinna, ear canals, and eardrums; individual users then create their own headphone equalization via an app which utilizes equal loudness measurement techniques.
http://www.davidgriesinger.com/poster.jpg
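As I understand the general idea behind that kind of equal-loudness technique (my loose paraphrase, NOT Griesinger's actual app or data): the listener adjusts bands of noise on the headphones until each sounds as loud as the same band from a reference loudspeaker, and the per-band gains they dialed in become, by construction, their personal headphone correction curve. A toy sketch of just that bookkeeping:

Code:
import numpy as np

# Toy sketch of the equal-loudness matching idea (a generic illustration,
# NOT Griesinger's app or data).  The listener adjusts the gain of each
# band of noise over the headphones until it matches the loudness of the
# same band from a reference loudspeaker; the gain needed per band is the
# correction that band needs on playback.  The matches below are
# hypothetical example values.

band_hz = np.array([250.0, 500.0, 1000.0, 2000.0, 4000.0, 8000.0, 12000.0])
matched_gain_db = np.array([0.0, 0.5, -1.0, 2.0, 5.5, 7.0, -3.0])

def headphone_eq(freqs_hz):
    """Interpolate the per-band matches on a log-frequency axis."""
    return np.interp(np.log10(freqs_hz), np.log10(band_hz), matched_gain_db)

for f in (300.0, 1500.0, 6000.0, 10000.0):
    print(f"{f:7.0f} Hz: {headphone_eq(f):+5.1f} dB")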
 
jeez, sorry to veer a bit OT.
music IS love

When you get confused, listen to the music play!

Mics:         AKG460|CK61|CK1|CK3|CK8|Beyer M 201E|DPA 4060 SK
Recorders:Marantz PMD661 OADE Concert mod; Tascam DR680 MKI x2; Sony PCM-M10

Offline Gutbucket

  • record > listen > revise technique
  • Trade Count: (15)
  • Needs to get out more...
  • *****
  • Posts: 15700
  • Gender: Male
  • "Better to love music than respect it" ~Stravinsky
Re: Acoustics of the Shoebox
« Reply #7 on: December 05, 2017, 04:36:16 PM »
Quote
When you put a rock band with live PA reinforcement in any venue (ie the type of music that tends to be discussed around here), a room reacts VERY differently. I've had some discussions with David about the types of shows I attend (which are usually modern ambient/electroacoustic, and Phish), and he's admitted it's an entirely different ballgame, with a different set of variables that change the way of thinking about capture and playback. Not that David's points are invalidated particularly for our discussion here and the ones I tend to have with him, but take lots of what he's talking about with a grain of salt when thinking about your own tapes.

So true. 

Most of the audience-perspective live music recording done around here is very much a niche recording endeavor compared to most other forms of recording. It is unique in many ways, not very closely related to studio recording, yet also not like traditional minimalist classical recording, even though the classical model is mostly what informs the microphone techniques used.  This applies not only to the recording part of the equation (recording position and microphone techniques), but also to the post-production part of things (where the mixing and finishing techniques of these recordings are quite different from the mixing and mastering of either studio-recorded or classical material).

It's very interesting and informative to hear how a room behaves given different types of performance.  I've learned so much from carefully paying attention to acoustics while listening to both the live performance and the recordings I've been able to make in a hall intended for medium-scale classical music when used for large-scale symphonic performance versus chamber or solo performance, versus back-line-only amplified jazz combos, versus full-PA amplified rock and pop acts.  Inversely, it's been likewise ear-opening to listen with a similarly critical ear towards the room acoustics given a single chamber outfit performing in different spaces: a classical hall versus a reverberant church versus a smallish room with a relatively low acoustical-tile suspended ceiling.  And it's really cool to listen for and identify some of the acoustic aspects Griesinger continues to explore and quantify when contrasting different audience perspectives within those spaces.  For me it takes both the knowledge of what guys like David are quantifying as well as careful listening over time for those aspects in actual venues to really connect the dots in a useful way and apply them to what we're doing here.

Variable acoustics of the space itself is a super interesting angle, both via architectural and electronic means.  Although I've not been able to specifically identify the auditory changes from it, I've marveled at the variable-acoustics system of the largest contemporary performance hall here, which uses huge variable-geometry walls and vast open space between the interior acoustic shell and the outer walls of the building.  On the electronic side, there is an interesting outdoor space here which uses a massive LARES-like system (not sure which) to create a variable virtual acoustic space in an open outdoor park environment.

Quote
What IS worth thinking about, as far as David's theories and applying them to the type of music we tend to tape, is the TIMING of early reflections. Note: what follows is my opinion based on experience, knowledge of theory, and personal testing; it is by no means law, and it certainly isn't necessarily rigorous, rather it's my beliefs on the matter. The time at which an early reflection arrives is sometimes more important than its strength. If early reflections arrive too late, then we perceive them as separable auditory events rather than as directional cues for the volume of a room or distance from side walls. This is what we commonly refer to in live performance as a slapback echo; it can sound disorienting and a bit confusing when you're trying to get a sense of the balance of a live performance. Diffuse reflections, as a corollary, if too strong, will blur the stereo image, make things sound distant, and cause instruments to start becoming one giant smudge rather than a balanced stereo image.

All three components of an impulse response - direct sound, early reflections, and late reverberation - work closely together in concert (no pun intended) to produce the resulting sound field perceived by a listener. This is more or less the foundation of David's work - trying to get a good balance amongst all three to optimize the playback experience. Put another way, he's trying to understand the entire system from origin of sound propagation (as well as its potential contents) all the way to neural firing, and he's trying to reverse-engineer all the pieces of the puzzle to get it to all fit together in a manner he can control.

I credit Griesinger's ideas (along with Theile, Williams, Wittek, Sengpiel, and a few others) as being a primary influence on the development of my own multichannel oddball microphone techniques (OMT) over the past decade, and I'm really happy that rocksuitcase and a few others here have started playing around with them as well.  One of the most important takeaways for me is the critical importance of achieving an optimal balance between direct sound, early reflections, and late reverb - and figuring out ways to try and manage that balance as much as possible given the unique constraints under which we are recording, often via unconventional multiple microphone arrays.  The fundamentals of all that form a basis upon which I make a lot of unconventional and perhaps seemingly strange suggestions around here.  It will be fantastic to bounce some of those ideas off someone like yourself with a better understanding of the academic and mathematical side of things.  Very cool that you have a personal relationship with David, who is such an authority on perceptual acoustics and something of a personal hero of mine in this field. 

Quote
I'm not gonna give away any of his DEEP secrets, but I'll happily discuss whatever he shares openly within the academic community.

Okay then, I've a request!  After he mentioned he'd make it available to interested parties, I sent a few emails to David maybe a year or so ago requesting a copy of his Windows app for frontal speaker equal loudness headphone equalization (referred to in the poster rocksuitcase linked above), yet didn't receive a response so I just let it drop.  I've thought of rigging up a test scenario myself to do the same, but have never gotten around to it.  Any chance you may be able to make that happen?  I think as tapers we can do a great deal to "close the loop" by properly equalizing our headphones to adapt to our own HRTF responses, and I'd think at least some here in the taper community who are more technically oriented would be more open to that approach than others.   We have sort of a unique position as recordists to compare the live experience with recorded event, "making the sausage" ourselves.

I've been half-wondering if any tapers will make their own ear-canal probes, but that's more of a leap!
musical volition > vibrations > voltages > numeric values > voltages > vibrations> virtual teleportation time-machine experience
Better recording made easy - >>Improved PAS table<< | Made excellent- >>click here to download the Oddball Microphone Technique illustrated PDF booklet<< (note: This is a 1st draft, now several years old and in need of revision!  Stay tuned)

Offline Gutbucket

  • record > listen > revise technique
  • Trade Count: (15)
  • Needs to get out more...
  • *****
  • Posts: 15700
  • Gender: Male
  • "Better to love music than respect it" ~Stravinsky
Re: Acoustics of the Shoebox
« Reply #8 on: December 05, 2017, 05:17:43 PM »
Quote
The David Griesinger information is excellent.  I liked this brief piece: https://www.classical-scene.com/2011/10/11/9243/  I seem to have stumbled into this aspect myself: "You cannot use any type of first-order microphone without putting the microphones much closer to the music than would be ideal for concert listening. This increases the direct to reverberant ratio and widens the image enough that the recording begins to have the clarity of natural hearing."  It is highly unlikely I'll be in settings where running multiple mics is feasible, and I'm not likely to want to do that anyway, so I tend to focus on how to get the most from a pair within the myriad limitations I have to deal with.  I've learned how to pretty consistently get what I like, which might not be everyone's exact cup of tea, but I feel satisfied that I accomplished my goal. 

I am growing more curious about how to capture the coloring that useful reverberance can provide, but too often there is too much of it rather than too little.

The presence of the human ear-brain in the perception chain does absolutely amazing things.  Compare the auditory perspective of the typical main microphone position over and just behind the conductor's head versus what is heard from the commonly preferred 5th-row human listener perspective, versus what the inverse of the two would sound like.  Still, there are things we can do to extend "reach" in a perceptual sense both on the recording and the playback sides of things if we discard the two-channel bottleneck and reserve binaural equivalency for the final link in the chain, but that's a whole 'nother can of worms.

[quoting from the article you linked] I gave up using a soundfield microphone or a dummy head as a main microphone because you cannot derive a discrete center channel from them. I rely largely on Schoeps super cardioid microphones to capture as much direct sound from a section as possible with as little leakage from other sections and the fewest early reflections.  I don’t think I can convince anyone about the virtues of this type of technique, but (nearly) every commercially successful engineer of classical (or pop) music does pretty much the same thing. They would gladly do something else if it worked better.

I started out figuring out how to record and reproduce a convincing ambient perspective in comparison with the live event, as that was something I found essential and very much lacking.  Over time I've moved to a center supercardioid pointed directly at the source as an attempt to exclude everything other than the direct sound as much as possible (as much as possible being the key phrase here), in combination with other mics intended to pick up early reflections, and others intended to isolate the late reverberant component from those other things as much as possible.  In that way, four to six mics provide a limited but very useful ability to balance things which a single stereo pair can only achieve via perfect placement guided by careful listening, and can actually exceed that in a sort of super-stereo sense: the auditory equivalent of a zoom lens.  Not for everyone, but doable within the limitations we are saddled with.
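Purely to illustrate the kind of balancing I mean (a toy sketch of gain/delay bookkeeping, not the actual OMT processing), with separate captures aimed at direct sound, early reflections, and ambience, the "zoom lens" is largely a matter of the relative gains between those buses:

Code:
import numpy as np

# Toy illustration of balancing separately captured components (NOT the
# actual OMT chain): three mono buses - a direct-aimed mic, early-reflection
# mics, and ambient/audience mics - summed with adjustable gains and an
# optional small delay on the ambient bus.  All gain/delay values below are
# made-up examples.

def db_to_lin(x):
    return 10 ** (x / 20.0)

def mix(direct, early, ambient, fs,
        direct_db=0.0, early_db=-6.0, ambient_db=-9.0, ambient_delay_ms=0.0):
    out = db_to_lin(direct_db) * direct + db_to_lin(early_db) * early
    d = int(ambient_delay_ms * 1e-3 * fs)
    out[d:] += db_to_lin(ambient_db) * ambient[:len(ambient) - d]
    return out

# Stand-in signals; in practice these would be time-aligned mono tracks.
fs = 48000
rng = np.random.default_rng(1)
direct, early, ambient = (rng.standard_normal(fs) for _ in range(3))

# Raising direct_db relative to the others "zooms in" on the source;
# raising ambient_db pulls the listener back into the room.
master = mix(direct, early, ambient, fs, direct_db=+2.0, ambient_db=-12.0,
             ambient_delay_ms=5.0)
print(master.shape)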

I have huge respect for excellent 2-channel stereo recordings - they are a certain form of art, pure in their own way - and I certainly don't mean to belittle that approach in any way.  It's just much harder to do really well due to the constraints, IMHO.
musical volition > vibrations > voltages > numeric values > voltages > vibrations> virtual teleportation time-machine experience
Better recording made easy - >>Improved PAS table<< | Made excellent- >>click here to download the Oddball Microphone Technique illustrated PDF booklet<< (note: This is a 1st draft, now several years old and in need of revision!  Stay tuned)

Offline Gutbucket

  • record > listen > revise technique
  • Trade Count: (15)
  • Needs to get out more...
  • *****
  • Posts: 15700
  • Gender: Male
  • "Better to love music than respect it" ~Stravinsky
Re: Acoustics of the Shoebox
« Reply #9 on: December 05, 2017, 05:43:20 PM »
Live music is so 3 dimensional.

Part of what makes live music recording different than studio recording is that at its best it is strongly dependent on conveying location acoustics in a believable way.  I mean the listener's perception of the "performance within a space", rather than simply accommodating for the effects of the local acoustics on any recording.  Although outside the interest of most around here, in that way we are constricted by the playback side of things as much as we are by the recording side of things.  For a few years I had another quote from the same article above which bombdiggity posted a link to in my signature line here at TS: "Two channel stereo is limiting. It is very helpful to reproduce the sound with more than two loudspeakers. Three speakers give you twice the localization accuracy as two if you fully utilize the center, and the sweet spot is also greatly broadened. With careful multi-miking and a good surround setup an exceedingly musical mix can be made."

I feared I was going off topic, but this is directly related to conveying live acoustics convincingly.


musical volition > vibrations > voltages > numeric values > voltages > vibrations> virtual teleportation time-machine experience
Better recording made easy - >>Improved PAS table<< | Made excellent- >>click here to download the Oddball Microphone Technique illustrated PDF booklet<< (note: This is a 1st draft, now several years old and in need of revision!  Stay tuned)

Offline wforwumbo

  • Trade Count: (11)
  • Taperssection Regular
  • **
  • Posts: 186
Re: Acoustics of the Shoebox
« Reply #10 on: December 05, 2017, 07:11:49 PM »
Ah, beautiful. Lots of good (if not a touch off-topic, though it IS all an important factor to the topic at hand!) info is popping up in this thread.

Quote
For physically altering the timing of early reflections when designing small-to-medium studio or performance spaces, Vincent van Haaff found it useful to install an interchangeable or flippable triangular GoBo into the side walls, essentially allowing the end user to change the room acoustics based on the type of audio, number of musicians, type of instruments, etc.
https://vintageking.com/blog/2016/05/vincent-van-haaff/    Note the fixture I have circled in the photo below: this is a changeable, three-sided GoBo with each side of the triangle built of a different type of reflecting or absorbing material(s), i.e. absorptive material of a specific absorption coefficient on one side, wooden slats of differing widths on another, and both slats and absorbers on the third. We applied for a patent on this style of design with him and the inventor, Gordon Merrick, but it didn't get approved (probably many similar design ideas, or prior art, before our application). I looked for a better photo, but can't locate it; will update when I can.

I note Griesinger's recent work on binaural reproduction and headphone equalization. Very interesting and current to the way many people listen. He is using what he terms "his avatar" - a fully anthropomorphic copy of his pinna, ear canals, and eardrums; individual users then create their own headphone equalization via an app which utilizes equal loudness measurement techniques.
http://www.davidgriesinger.com/poster.jpg

jeez, sorry to veer a bit OT.

Gobos are useful but by no means a be-all-end-all solution. I've actually found they're better at reducing the effects of problematic spaces, rather than improving good spaces. But different strokes for different folks, you know?

I also know that IRCAM has a room that can dynamically change the volume and surfaces of sizable reverberant chambers, and they've been trying to do some research with things such as your triangularly-shaped-gobos, but mounted as three-sided wall panels. Needless to say, last I heard from my friends there they had too much data and too few correlations.

I am VERY familiar with WIWO (the headphone equalization program) - I spent a decent bit of time working on its code and collecting data for it. Given that David hasn't published his findings on the topic yet I'll refrain from exact details, but I WILL share with you my personal sentiments on its usage: for binaural recordings I preferred it without, and with material recorded in a traditional studio setting it makes a HUGE difference for the better.


Quote
Most of the audience-perspective live music recording done around here is very much a niche recording endeavor compared to most other forms of recording. It is unique in many ways, not very closely related to studio recording, yet also not like traditional minimalist classical recording, even though the classical model is mostly what informs the microphone techniques used.  This applies not only to the recording part of the equation (recording position and microphone techniques), but also to the post-production part of things (where the mixing and finishing techniques of these recordings are quite different from the mixing and mastering of either studio-recorded or classical material).

Well, as someone who is getting into taping because I am a studio jockey wanting to learn more about the live taping world, I 100% agree with you. In fact, these days I practically refuse to even casually listen to AUDs of Phish without a parametric EQ on hand (or at least a multi-band EQ, which I'll refrain from mentioning for now as it's my secret weapon bus EQ). Whenever I listen to a fresh batch of AUDs going up on eTree, I'm ALWAYS thinking about how to do post, and if I want to matrix something, how I can get different tapes to play together based on what I know about mixing and production. The same thought process could technically be applied to the binaural perspective too, but the problem becomes much stickier, very quickly, for reasons I don't fully have time to explain right now or with the space allotted in a forum post.

Quote
It's very interesting and informative to hear how a room behaves given different types of performance.  I've learned so much from carefully paying attention to acoustics while listening to both the live performance and the recordings I've been able to make in a hall intended for medium-scale classical music when used for large-scale symphonic performance versus chamber or solo performance, versus back-line-only amplified jazz combos, versus full-PA amplified rock and pop acts.  Inversely, it's been likewise ear-opening to listen with a similarly critical ear towards the room acoustics given a single chamber outfit performing in different spaces: a classical hall versus a reverberant church versus a smallish room with a relatively low acoustical-tile suspended ceiling.  And it's really cool to listen for and identify some of the acoustic aspects Griesinger continues to explore and quantify when contrasting different audience perspectives within those spaces.  For me it takes both the knowledge of what guys like David are quantifying as well as careful listening over time for those aspects in actual venues to really connect the dots in a useful way and apply them to what we're doing here.

It isn't necessarily immediately intuitive, but I think I'd amend my statement on the room behaving differently based on performance. Rather, I will say that the metrics used to analyze concert halls have different "sweet spot" or "preferable" numbers based on the content of the performance. For example, numbers that are traditionally considered (re: Leo Beranek's seminal text Concert Halls and Opera Houses) preferable for classical music might be too... "echoic"/reflective/immersive for opera, where you care more about actually hearing the performers' singing and words. The old-school tradeoff that tends to be made about concert halls for just classical music is a sense of envelopment and grandiose size vs. an intimate and easily-localizable one.
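As one concrete example of the kind of number Beranek tabulates, the oldest such metric is reverberation time, which a Sabine back-of-the-envelope estimate gets you in one line: RT60 is roughly 0.161·V/A. The hall dimensions and absorption coefficients below are made-up illustration values, not measurements of any real room:

Code:
# Back-of-the-envelope Sabine reverberation time: RT60 ~ 0.161 * V / A,
# where V is the room volume (m^3) and A the total absorption (surface
# area times absorption coefficient, summed over surfaces).  Every number
# below is a made-up illustration, not a measurement of any real hall,
# and real work uses per-octave-band coefficients.

def sabine_rt60(volume_m3, surfaces):
    """surfaces: iterable of (area_m2, absorption_coefficient) pairs."""
    total_absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / total_absorption

# A made-up shoebox hall, roughly 44 m x 24 m x 17 m:
volume = 44 * 24 * 17
surfaces = [
    (2 * (44 * 17) + 2 * (24 * 17), 0.04),  # plaster walls
    (44 * 24,                       0.04),  # ceiling
    (44 * 24,                       0.85),  # occupied audience / seating
]
print(round(sabine_rt60(volume, surfaces), 2), "s mid-band estimate")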

Another addendum to my amendment is that the room WILL behave differently for amplified music. But there's more going on than just the room inducing non-linear terms in the IR and in perception: it's also a function of where sound sources are originating from, and how active reinforcement changes the ballgame (a live band with a PA has a VERY different radiation pattern than, say, a classical orchestra, or an orchestral pit plus opera singers on stage). This is compounded by concert halls with active acoustics installed, particularly for things like opera. To be honest, the jury is still out on a lot of this stuff, and many acousticians argue 'til we're blue in the face about the best way to do things.

I'll also mention this... David is sharp and I do worship the ground he walks on, as my introduction to DSP was through digital reverb, but I don't always agree with him when he talks about neurophysiology. He raises good points but he isn't the last word; in fact, he usually gets quite a bit of blowback at academic conferences on binaural hearing and perception, though I don't have the time or space right now to get into what he tends to get criticized on or why. That said, he's a HUGE inspiration personally in the way I think about audio systems and what is happening at pretty much every step of the chain of the system leading up to the outer ear (okay, and a few things within it too).

Quote
Variable acoustics of the space itself is a super interesting angle, both via architectural and electronic means.  Although I've not been able to specifically identify the auditory changes from it, I've marveled at the variable-acoustics system of the largest contemporary performance hall here, which uses huge variable-geometry walls and vast open space between the interior acoustic shell and the outer walls of the building.  On the electronic side, there is an interesting outdoor space here which uses a massive LARES-like system (not sure which) to create a variable virtual acoustic space in an open outdoor park environment.

Ah, LARES. It's a phenomenal system. I'm not really allowed to talk about it.

If you like LARES or Constellation, consider looking into Logic 7 which is on all of H/K's greater-than-stereo systems, it wipes the floor with surround/ambisonics/wavefield synthesis IMO.

That said, active acoustics are kinda a different and entirely separable ballgame from the discussion at hand here. If you wanna start digging into some of those nitty-gritty detailed discussions I'm willing to play ball. But the first question I'd have for you is: between an active acoustics system being on and off, which do you prefer, and what qualities of the sound field do you think are contributing to those preferences? Note that there's no correct answer, and when there is an answer it's never straightforward or easily explained.

Quote
I credit Griesinger's ideas (along with Theile, Williams, Wittek, Sengpiel, and a few others) as being a primary influence on the development of my own multichannel oddball microphone techniques (OMT) over the past decade, and I'm really happy that rocksuitcase and a few others here have started playing around with them as well.  One of the most important takeaways for me is the critical importance of achieving an optimal balance between direct sound, early reflections, and late reverb - and figuring out ways to try and manage that balance as much as possible given the unique constraints under which we are recording, often via unconventional multiple microphone arrays.  The fundamentals of all that form a basis upon which I make a lot of unconventional and perhaps seemingly strange suggestions around here.  It will be fantastic to bounce some of those ideas off someone like yourself with a better understanding of the academic and mathematical side of things.  Very cool that you have a personal relationship with David, who is such an authority on perceptual acoustics and something of a personal hero of mine in this field.

This is the first I'm hearing about your OMT, I'll try to dig it up with a forum search (or, if you can provide links to specific posts you'd like to throw my way, I'm all eyes at the moment man).

I'd be happy to chat about any and all of your ideas here. At the end of the day, I love sound and audio systems - it's why I became an electrical engineer. I live/eat/sleep/breathe this stuff. Particularly if you need any calibration or analysis code written, that stuff is second nature to me.

Quote
Okay then, I've a request!  After he mentioned he'd make it available to interested parties, I sent a few emails to David maybe a year or so ago requesting a copy of his Windows app for frontal speaker equal loudness headphone equalization (referred to in the poster rocksuitcase linked above), yet didn't receive a response so I just let it drop.  I've thought of rigging up a test scenario myself to do the same, but have never gotten around to it.  Any chance you may be able to make that happen?  I think as tapers we can do a great deal to "close the loop" by properly equalizing our headphones to adapt to our own HRTF responses, and I'd think at least some here in the taper community who are more technically oriented would be more open to that approach than others.   We have sort of a unique position as recordists to compare the live experience with recorded event, "making the sausage" ourselves.

I've been half-wondering if any tapers will make their own ear-canal probes, but that's more of a leap!

Yeah I can make that happen, as I mentioned earlier in this post I spent a LOT of time working on WIWO's code. I'll PM you and we can take it from there.

"Closing the loop" with respect to HRTF calibration is tricky, and there's MUCH more at play with WIWO than just that calculation. But I will say I personally think it works really well.

Ear canal probes and personal HRTF/IR collections are another discussion for another day.

Quote
The presence of the human ear-brain in the perception chain does absolutely amazing things.  Compare the auditory perspective of the typical main microphone position over and just behind the conductor's head versus what is heard from the commonly preferred 5th-row human listener perspective, versus what the inverse of the two would sound like.  Still, there are things we can do to extend "reach" in a perceptual sense both on the recording and the playback sides of things if we discard the two-channel bottleneck and reserve binaural equivalency for the final link in the chain, but that's a whole 'nother can of worms.

And interestingly enough you've opened a bigger can of worms in the process (particularly with respect to what such an "inversion filter" would sound like). One of the problems is that we can't linearize or generalize the hearing system or source-room-receiver systems; any models we try to construct using traditional mathematical methods break down pretty quickly. We can make rough estimations in some cases and sometimes even build useful and functional models, but at the end of the day I've always found massive caveats with such systems. Note that this is also coming from someone trying desperately TO accomplish precisely that - I'm not unconvinced that my pursuit is one of madness with no closed-form solution.

I'll also mention... I'm a big supporter of dichotic and binaural listening. I do ~90% of my listening to auditory stimuli over headphones, it's just vastly preferable for my tastes. With the exception of the aforementioned Logic 7, I've yet to find a playback system better than stereo, and even still given cost and ease of setup I'll still take stereo any day of the week.

Quote
[quoting from the article you linked] I gave up using a soundfield microphone or a dummy head as a main microphone because you cannot derive a discrete center channel from them. I rely largely on Schoeps super cardioid microphones to capture as much direct sound from a section as possible with as little leakage from other sections and the fewest early reflections.  I don’t think I can convince anyone about the virtues of this type of technique, but (nearly) every commercially successful engineer of classical (or pop) music does pretty much the same thing. They would gladly do something else if it worked better.

I started out figuring out how to record and reproduce a convincing ambient perspective in comparison with the live event, as that was something I found essential and very much lacking.  Over time I've moved to a center supercardioid pointed directly at the source as an attempt to exclude everything other than the direct sound as much as possible (as much as possible being the key phrase here), in combination with other mics intended to pick up early reflections, and others intended to isolate the late reverberant component from those other things as much as possible.  In that way, four to six mics provide a limited but very useful ability to balance things which a single stereo pair can only achieve via perfect placement guided by careful listening, and can actually exceed that in a sort of super-stereo sense: the auditory equivalent of a zoom lens.  Not for everyone, but doable within the limitations we are saddled with.

I have huge respect for excellent 2-channel stereo recordings - they are a certain form of art, pure in their own way - and I certainly don't mean to belittle that approach in any way.  It's just much harder to do really well due to the constraints, IMHO.

You've touched on MANY interesting points here, and IMO you're thinking about the problem appropriately and with the right intent and execution. We seriously need to have an in-person conversation sometime soon about a lot of this stuff.

Quote
Live music is so 3 dimensional.

Part of what makes live music recording different than studio recording is that at its best it is strongly dependent on conveying location acoustics in a believable way.  I mean the listener's perception of the "performance within a space", rather than simply accommodating for the effects of the local acoustics on any recording.  Although outside the interest of most around here, in that way we are constricted by the playback side of things as much as we are by the recording side of things.  For a few years I had another quote from the same article above which bombdiggity posted a link to in my signature line here at TS: "Two channel stereo is limiting. It is very helpful to reproduce the sound with more than two loudspeakers. Three speakers give you twice the localization accuracy as two if you fully utilize the center, and the sweet spot is also greatly broadened. With careful multi-miking and a good surround setup an exceedingly musical mix can be made."

I feared I was going off topic, but this is directly related to conveying live acoustics convincingly.

I disagree, though all I'll say for now is: keep in mind, at the end of the day we have two ears. I'm not saying that more tracks and more directional information are a bad thing, but at the end of the day you can accomplish some insane stuff with just a two-channel output. That's part of the magic of working in the studio IMO, and that's formed and molded a lot of my habits in production: mixing down to two channels at the end of it all.
North Jersey native, Upstate veteran, proud Texan

2x Schoeps mk2; 2x Schoeps mk21; 2x Schoeps mk4

4x Schoeps cmc5; 4x Schoeps KC5; Nbob KCY; Naiant PFA

EAA PSP-2

Sound Devices Mixpre-6

Offline rocksuitcase

  • Trade Count: (4)
  • Needs to get out more...
  • *****
  • Posts: 8281
  • Gender: Male
    • RockSuitcase: stage photography
Re: Acoustics of the Shoebox
« Reply #11 on: December 05, 2017, 08:14:21 PM »
Quote
Ah, beautiful. Lots of good (if not a touch off-topic, though it IS all an important factor to the topic at hand!) info is popping up in this thread.
Live music is so 3 dimensional.

Part of what makes live music recording different than studio recording is that at its best it is strongly dependent on conveying location acoustics in a believable way.  I mean the listener's perception of the "performance within a space", rather than simply accommodating for the effects of the local acoustics on any recording.  Although outside the interest of most around here, in that way we are constricted by the playback side of things as much as we are by the recording side of things.  For a few years I had another quote from the same article above which bombdiggity posted a link to in my signature line here at TS: "Two channel stereo is limiting. It is very helpful to reproduce the sound with more than two loudspeakers. Three speakers give you twice the localization accuracy as two if you fully utilize the center, and the sweet spot is also greatly broadened. With careful multi-miking and a good surround setup an exceedingly musical mix can be made."

I feared I was going off topic, but this is directly related to conveying live acoustics convincingly.

I disagree, though all I'll say for now is: keep in mind, at the end of the day we have two ears. I'm not saying that more tracks and more directional information are a bad thing, but at the end of the day you can accomplish some insane stuff with just a two-channel output. That's part of the magic of working in the studio IMO, and that's formed and molded a lot of my habits in production: mixing down to two channels at the end of it all.
OK. re live 2 vs multi channel sound: I was going to try to rest tonight and read what you guys write, but I have to add in here, as this combines my early area of research with that of my PA guru, John Meyer, which speaks directly to the point you each are making. Leaving out personal details: I found myself in John Meyer and Don Pearson's presence at AES 1986 Nashville when we were showing off our multi-channel processor (using 4 speakers - L, C, R, and center rear - driven from 2 channels). John asked if we would let him have one to work with; we said yes, of course. To be honest, we never spoke about it again (too long and not germane). About 2 years later, John was doing a SIM analysis at a huge club in Houston that we were installing a medium-size Meyer rig in. He remembered his experience with the Tri-ambient device and held a long, in-depth conversation with us, in which he explained that with all the measurements they had done, they were starting to create line arrays using amplified speaker cabinets which, with proper remote stack/site reinforcement, would mean two channels would be all that is necessary to "convey an accurate representation of the soundstage".
He understood the (1988-reference) theory of multi-channel, and the way our device worked (essentially creating a 2.5d illusion as opposed to a 2d one), and stood his ground that a properly time-aligned and rigged 2-channel PA would more often result in better intelligibility of vocal performance and overall soundstage/instrument localization, due to the smearing and the presence of negative-phase signals which are what comes out of most multi-channel arrays' rear or side channels.  His claim, with which I now concur, was that multi-channel arrays can never have as much linear, time-aligned signal arriving at as much of the coverage area as an "ideal line array".
I'd say his advancements in line array technology (based on earlier developments by Olson and Heil), basket/magnet technology, and SIM were all solid steps forward for live PA usage and deployment.
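The "time aligned" part is easy to put numbers on (a toy sketch of my own, nothing to do with SIM itself): sound covers roughly a metre every 3 milliseconds, so any delay or fill stack has to be held back electronically by the path-length difference to the mains divided by the speed of sound, or its arrivals smear against them exactly as described above. The values below are illustrative assumptions only.

Code:
# Toy delay-stack alignment: the electronic delay a fill or delay speaker
# needs so its sound arrives in step with the main array at the listener.
# Speed of sound (~343 m/s) is an assumption; some engineers add a couple
# of extra milliseconds so the mains still arrive first (precedence effect),
# and that tweak here is purely illustrative too.

SPEED_OF_SOUND = 343.0  # m/s

def fill_delay_ms(main_to_listener_m, fill_to_listener_m, precedence_ms=0.0):
    """Delay to apply to the fill so the main array's arrival leads."""
    path_difference = main_to_listener_m - fill_to_listener_m
    return path_difference / SPEED_OF_SOUND * 1000.0 + precedence_ms

# Example: mains 60 m from the listener, under-balcony fill 8 m away:
# the fill must be held back by roughly 150 ms to line up with the mains.
print(round(fill_delay_ms(60.0, 8.0, precedence_ms=2.0), 1), "ms")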
To the OP- bombdiggity mentioned:
"The impact of the reverb from the shape of the room on differing segments of the frequency range in conjunction with the type of music lays out (and in their study measures) a number of the factors that interact before our mics even enter the picture."
For Meyer, Pearson, and Healy, the use of line arrays ("Curvi-Linear") meant presenting the signal as time- and phase-aligned as possible from the stacks, with the physical shape of the array customized to the space. Using room layouts for the arenas and stadiums the GD played in, and drafting methods, Pearson and Healy would design the array to fit the venue and customized their rigging to accommodate the varying angles they would use at different venues. Their goal was to anticipate "those factors which interact before our mics enter the picture" and present as clear a "picture" of the band's stage sound as they could. There is a Healy interview where he talks about the mapping of the rooms, digging....      can't find it. I'll edit it in when I do.
« Last Edit: December 05, 2017, 09:24:01 PM by rocksuitcase »
music IS love

When you get confused, listen to the music play!

Mics:         AKG460|CK61|CK1|CK3|CK8|Beyer M 201E|DPA 4060 SK
Recorders:Marantz PMD661 OADE Concert mod; Tascam DR680 MKI x2; Sony PCM-M10

Offline bombdiggity

  • Trade Count: (11)
  • Needs to get out more...
  • *****
  • Posts: 2277
Re: Acoustics of the Shoebox
« Reply #12 on: December 06, 2017, 12:20:02 AM »
Quote
I've been waiting for you to join TS for the past 10 years.

I can see that  ;D.

Lots to try to digest here, which will mostly have to wait until tomorrow... 

I'm a babe in the woods (no differential equations for me) though 30+ years of experience in just about every setting imaginable from arenas to classical, a ton of unamplified jazz shows and running my own "sessions" unamplified in a variety of spaces has given me opinions and experience in relation to my taste and simplified methods. 

I (and most of us in typical field recording endeavors) am for the most part left to deal with my estimation of the basic direct vs. reflected balance given the nature of the music and the instrumentation (and amplification, if applicable) in the space at hand and the limitations of where I can place equipment in said space.  I may have a few minutes to set up, usually without a soundcheck, and thus one chance to guess correctly (or less correctly) what may result via my equipment.  Learning the spaces (and musicians) makes it a bit easier, though every once in a while events will confound expectations.  We rarely have any control over the myriad variables involved, though I do believe learning the theory and the evidence provided by controlled studies will enhance our understanding and technique.  Maybe one day I'll graduate to multi-mics.  If I do any of that now it's spot micing to get an independent track for something that will be weak in the ambient mix.  I do prefer ambient placements to trying to mic instruments directly but mixing ambient sources seems above my level of ambition at present.  Maybe one of these days I'll have time and the inclination to step it all up... 
Gear:
Audio:
Schoeps MK4V
Nak CM-100/CM-300 w/ CP-1's or CP-4's
SP-CMC-25
>
Oade C mod R-44  OR
Tinybox > Sony PCM-M10 (formerly Roland R-05) 
Video: Varied, with various outboard mics depending on the situation

Offline Gutbucket

  • record > listen > revise technique
  • Trade Count: (15)
  • Needs to get out more...
  • *****
  • Posts: 15700
  • Gender: Male
  • "Better to love music than respect it" ~Stravinsky
Re: Acoustics of the Shoebox
« Reply #13 on: December 06, 2017, 01:40:46 PM »
Quote
I disagree, though all I'll say for now is: keep in mind, at the end of the day we have two ears. I'm not saying that more tracks and more directional information are a bad thing, but at the end of the day you can accomplish some insane stuff with just a two-channel output. That's part of the magic of working in the studio IMO, and that's formed and molded a lot of my habits in production: mixing down to two channels at the end of it all.

The key phrase for me in the statement above is "at the end of it all".  Yes, in the end it's all about the two signals reaching the eardrums.  IMHO the two-channel bottleneck arises from trying to simplify the vast complexities prior to that point.  A common claim around here dismissing multichannel recording is "two mics, two ears", which oversimplifies things and blithely ignores the complexities of the multi-dimensional problem before distilling everything down to those two signals.  But agreed, it's astounding how much we can do with two channels, as willing participants in the illusion, suspending disbelief.   As much as I enjoy messing around with multichannel playback to achieve a more convincing and immersive illusion, I've been really excited to find the same techniques also make for superior 2-channel mixes.  Especially for headphone playback, which is going to be the only practical solution for most folks.

Quote
I (and most of us in typical field recording endeavors) am for the most part left to deal with my estimation of the basic direct vs. reflected balance given the nature of the music and the instrumentation (and amplification, if applicable) in the space at hand and the limitations of where I can place equipment in said space.  I may have a few minutes to set up, usually without a soundcheck, and thus one chance to guess correctly (or less correctly) what may result via my equipment.  Learning the spaces (and musicians) makes it a bit easier, though every once in a while events will confound expectations.  We rarely have any control over the myriad variables involved, though I do believe learning the theory and the evidence provided by controlled studies will enhance our understanding and technique. 

This is basically one of my arguments for recording using a multiple microphone array which discretely records the sound arriving from each direction.  We don't have the luxury of arranging and re-adjusting things or even listening through the recording chain prior to the recording being made in most cases.  Typically at best I check to make sure I have signal, levels are good, and the channel routing is correct, but I don't hear it until I get home.  Recording this way helps minimize that disadvantage somewhat.   I'm not trying to talk you into doing this stuff, just 'splainin' how it can help address those basic "taper issues".

To me this lack of control over things is one of the main differences in "taper recording" as contrasted with other forms of recording, and it applies to all types of music, amplified as well as acoustic.  The other main difference is the oddity of recording a PA illuminating the room as primary source for amplified stuff.

Quote
OK. re live 2 vs multi channel sound [snip..]

Interesting stuff.  We've got a lot of different angles going in this discussion simultaneously.  I'll just posit that live sound reinforcement for a large audience is a very different animal than small-scale reproduction of the event at home or the like.

To my way of thinking, it helps to consider the event, the recording, and the reproduction process as three entirely separate things without much overlap between them.   The venue acoustics and everything which happens with regard to producing and hearing sound in that space is one thing, recording it and manipulating that recording is another, and reproduction yet another. 

Yes, in some ways speaker reproduction resembles PA reinforcement, but I think we need to be very careful in trying to adapt what we know about large venue acoustics to small room reproduction and vice versa.
musical volition > vibrations > voltages > numeric values > voltages > vibrations> virtual teleportation time-machine experience
Better recording made easy - >>Improved PAS table<< | Made excellent- >>click here to download the Oddball Microphone Technique illustrated PDF booklet<< (note: This is a 1st draft, now several years old and in need of revision!  Stay tuned)

Offline Gutbucket

  • record > listen > revise technique
  • Trade Count: (15)
  • Needs to get out more...
  • *****
  • Posts: 15700
  • Gender: Male
  • "Better to love music than respect it" ~Stravinsky
Re: Acoustics of the Shoebox
« Reply #14 on: December 06, 2017, 01:49:26 PM »
Quote
If you like LARES or Constellation, consider looking into Logic 7 which is on all of H/K's greater-than-stereo systems, it wipes the floor with surround/ambisonics/wavefield synthesis IMO. 

I've read much about Logic 7 and have long wanted to play around with it, partly as a potential tool for re-synthesizing missing channels in some of my surround recordings which have technical issues, and also possibly as a way of distributing a single recorded surround channel to multiple surround speakers.  Is it available in any format other than the H/K receiver hardware?  Did they ever release a VST or some software-based way of using Logic 7?

Quote
This is the first I'm hearing about your OMT, I'll try to dig it up with a forum search (or, if you can provide links to specific posts you'd like to throw my way, I'm all eyes at the moment man).

I'd be happy to chat any and all of your ideas here. At the end of the day, I love sound and audio systems - it's why I became an electrical engineer. I live/eat/sleep/breathe this stuff. Particularly if you need any calibration or analysis code written, that stuff is second nature to me.

Great! The OMT ideas evolved over the past 10 years, basically my experimentation with non-traditional recording and playback techniques steered by what I learned from the researchers and engineers who understand acoustics and recording, but applied to "taper recording" with the aforementioned unique constraints that entails.

Lots of various threads, but the main one is here- https://taperssection.com/index.php?topic=96009.0
^
That's been sort of a long-running blog-thread more or less explaining how I arrived at the arrays I'm using now. Oddball Microphone Technique began as a catchall phrase for anything atypical (other than straight 2-ch X/Y, M/S, near-coincident or spaced omnis) but grew to refer more specifically to a single multichannel array intended to record direct arrival and ambient hall/audience sound (and in later instances, early reflections) somewhat separately, at least as much as practical.  That it is equally advantageous for 2-channel speaker reproduction, headphone reproduction, and multi-channel playback has been tied in all along with that development process (I've been happy to find), yet is essentially secondary to the recording process itself.

Quote
I'll also mention... I'm a big supporter of dichotic and binaural listening. I do ~90% of my listening to auditory stimuli over headphones, it's just vastly preferable for my tastes. With the exception of the aforementioned Logic 7, I've yet to find a playback system better than stereo, and even still given cost and ease of setup I'll still take stereo any day of the week.

What do you mean by dichotic listening?  Agreed on headphones being a good solution, and by far the most practical.  What I can do with multichannel playback is separate the audience and room sound from the direct sound and reflections coming from the stage direction.  Yes, that allows me to turn around or listen sitting sideways and things remain spatially correct with depth in each dimension.  But more than that, it actually relaxes the need for a pin-drop-quiet audience.  It may seem ironic, but recording and reproducing the audience and room in a believable way allows one to hear the music more clearly by making it possible to separate and differentiate everything going on.  It provides the cues necessary for the ear-brain to do its cocktail party effect thing.  Which gets back to the "mics need to be positioned closer than a human listener" thing, and actually relaxes that requirement somewhat - a big advantage when we are so limited in where we can set up as tapers.
musical volition > vibrations > voltages > numeric values > voltages > vibrations> virtual teleportation time-machine experience
Better recording made easy - >>Improved PAS table<< | Made excellent- >>click here to download the Oddball Microphone Technique illustrated PDF booklet<< (note: This is a 1st draft, now several years old and in need of revision!  Stay tuned)

 
