Ah, beautiful. Lots of good (if a touch off-topic, though it IS all an important factor in the topic at hand!) info is popping up in this thread.
For physically altering the timing of early reflections when designing small-to-medium studio or performance spaces, Vincent van Haaff found it useful to install interchangeable or flippable triangular gobos into the side walls, essentially allowing the end user to change the room acoustics based on the type of audio, the number of musicians, the types of instruments, etc.
https://vintageking.com/blog/2016/05/vincent-van-haaff/ Note the fixture I have circled in the photo below: this is a changeable, three-sided gobo, with each side of the triangle built from a different type of reflecting or absorbing material, i.e. absorptive material of a specific absorption coefficient on one side, wooden slats of differing widths on another, and both slats and absorbers on the third. We applied for a patent on this style of design with him and the inventor, Gordon Merrick, but it didn't get approved (probably due to many similar design ideas, or prior art, predating our application). I looked for a better photo but can't locate one; I'll update when I can.
I note Griesinger's recent work on binaural reproduction and headphone equalization. Very interesting, and current to the way many people listen. He is using what he terms "his avatar", a fully anthropomorphic copy of his pinnae, ear canals, and eardrums; individual users then create their own headphone equalization via an app that uses equal-loudness measurement techniques.
http://www.davidgriesinger.com/poster.jpg
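For anyone curious how an equal-loudness measurement turns into a headphone EQ, here's a rough sketch of the idea (to be clear, this is NOT David's actual method or code; the bands, numbers, and function names are all mine, just to illustrate the principle):

```python
import numpy as np

# Sketch of the equal-loudness idea behind headphone EQ: the listener
# adjusts each test band until it sounds as loud as a reference band.
# The boost they had to add reveals where the headphone-plus-ear system
# is deficient, so the same numbers (anchored to the reference) become
# the correction EQ. Band choices and gains below are made up.

bands_hz = np.array([125, 250, 500, 1000, 2000, 4000, 8000])

def correction_curve(matched_gains_db, ref_band_index=3):
    """Return the EQ (in dB per band) that flattens perceived loudness.

    matched_gains_db[i] is how many dB the listener had to add to band i
    to make it sound as loud as the reference band.
    """
    gains = np.asarray(matched_gains_db, dtype=float)
    # Anchor the curve so the reference band sits at 0 dB.
    return gains - gains[ref_band_index]

# If the listener had to boost 8 kHz by 6 dB to match 1 kHz, the
# correction EQ applies +6 dB at 8 kHz on playback.
matched = [0.0, -1.0, 0.5, 0.0, 2.0, 4.0, 6.0]
eq = correction_curve(matched)
```

The real trick, of course, is doing the loudness matching reliably; the arithmetic afterward is the easy part.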
jeez, sorry to veer a bit OT.
Gobos are useful but by no means a be-all-end-all solution. I've actually found they're better at reducing the effects of problematic spaces than at improving good ones. But different strokes for different folks, you know?
I also know that IRCAM has a room that can dynamically change the volume and surfaces of sizable reverberant chambers, and they've been trying to do research with things like your triangular gobos, but mounted as three-sided wall panels. Needless to say, the last I heard from my friends there, they had too much data and too few correlations.
I am VERY familiar with WIWO (the headphone equalization program) - I spent a decent bit of time working on its code and collecting data for it. Since David hasn't published his findings on the topic yet, I'll refrain from exact details, but I WILL share my personal sentiments on its usage: for binaural recordings I preferred listening without it, while with material recorded in a traditional studio setting it makes a HUGE difference for the better.
Most of the audience-perspective live music recording done around here is very much a niche endeavor compared to most other forms of recording. It is unique in many ways: not very closely related to studio recording, yet also not like traditional minimalist classical recording, even though the classical model is mostly what informs the microphone techniques used. This applies not only to the recording part of the equation (recording position and microphone techniques), but also to post-production (where the mixing and finishing techniques for these recordings are quite different from the mixing and mastering of either studio-recorded or classical material).
Well, as someone who is getting into taping because I am a studio jockey wanting to learn more about the live taping world, I 100% agree with you. In fact, these days I practically refuse to even casually listen to AUDs of Phish without a parametric EQ on hand (or at least a multi-band EQ, which I'll refrain from naming for now as it's my secret-weapon bus EQ). Whenever I listen to a fresh batch of AUDs going up on eTree, I'm ALWAYS thinking about how to do post, and if I want to matrix something, how I can get different tapes to play together based on what I know about mixing and production. The same thought process could technically be applied to the binaural perspective too, but the problem becomes much stickier, very quickly, for reasons I don't have the time to explain right now or the space allotted in a forum post.
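To make the matrixing point a bit more concrete, here's a toy sketch of the very first step: time-aligning two tapes of the same show via cross-correlation before summing them. Everything here (signals, values, function names) is illustrative only; real tapes also need clock-drift correction, which this doesn't touch.

```python
import numpy as np

def estimate_offset(a, b):
    """Lag (in samples) by which tape b trails tape a, via cross-correlation."""
    corr = np.correlate(a, b, mode="full")
    # Map the peak index of the full correlation back to a lag.
    return (len(b) - 1) - int(np.argmax(corr))

def align_and_mix(a, b, gain_a=0.5, gain_b=0.5):
    """Shift b into alignment with a, then sum with simple gains."""
    lag = estimate_offset(a, b)
    b_aligned = np.roll(b, -lag)  # crude circular shift, fine for a sketch
    n = min(len(a), len(b_aligned))
    return gain_a * a[:n] + gain_b * b_aligned[:n]

rng = np.random.default_rng(0)
tape_a = rng.standard_normal(1000)
tape_b = np.roll(tape_a, 40)  # pretend the second tape starts 40 samples late

mixed = align_and_mix(tape_a, tape_b)
```

Without the alignment step, summing two sources of the same show gives you comb filtering; with it, the gains become creative mixing decisions rather than damage control.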
It's very interesting and informative to hear how a room behaves given different types of performance. I've learned so much from carefully paying attention to acoustics while listening to both live performances and the recordings I've been able to make in a hall intended for medium-scale classical music: when used for large-scale symphonic performance versus chamber or solo performance, versus backline-only amplified jazz combos, versus full-PA amplified rock and pop acts. Inversely, it's been likewise ear-opening to listen with a similarly critical ear toward the room acoustics of a single chamber outfit performing in different spaces: a classical hall versus a reverberant church versus a smallish room with a relatively low acoustical-tile suspended ceiling. And it's really cool to listen for and identify some of the acoustic aspects Griesinger continues to explore and quantify when contrasting different audience perspectives within those spaces. For me it takes both the knowledge of what guys like David are quantifying and careful listening over time for those aspects in actual venues to really connect the dots in a useful way and apply them to what we're doing here.
It isn't necessarily immediately intuitive, but I think I'd amend my statement about the room behaving differently based on performance. Rather, I'll say that the metrics used to analyze concert halls have different "sweet spot" or "preferable" numbers based on the content of the performance. For example, numbers that are traditionally considered preferable for classical music (re: Leo Beranek's seminal text Concert Halls and Opera Houses) might be too... "echoic"/reflective/immersive for opera, where you care more about actually hearing the performers' singing and words. The old-school tradeoff that tends to be made about concert halls for classical music alone is a sense of envelopment and grandiose size versus intimacy and easy localizability.
Another addendum to my amendment is that the room WILL behave differently for amplified music. But there's more going on than the room inducing non-linear terms in the impulse response and in perception: it's also a function of where sound sources originate and of how active reinforcement changes the ballgame (a live band with a PA has a VERY different radiation pattern than, say, a classical orchestra, or an orchestra pit plus opera singers on stage). This is compounded by concert halls with active acoustics installed, particularly for things like opera. To be honest, the jury is still out on a lot of this stuff, and many acousticians argue until they're blue in the face about the best way to do things.
I'll also mention this... David is sharp, and I do worship the ground he walks on, as my introduction to DSP was through digital reverb, but I don't always agree with him when he talks about neurophysiology. He raises good points but he isn't the last word; in fact, he usually gets quite a bit of blowback at academic conferences on binaural hearing and perception, though I don't have the time or space right now to get into what he tends to get criticized for or why. That said, he's been a HUGE personal inspiration for the way I think about audio systems and what is happening at pretty much every step of the chain leading up to the outer ear (okay, and a few things within it too).
Variable acoustics of the space itself is a super interesting angle, both via architectural and electronic means. Although I've not been able to specifically identify the auditory changes from it, I've marveled at the variable acoustics system of the largest contemporary performance hall here, which uses huge variable-geometry walls and vast open space between the interior acoustic shell and the outer walls of the building. On the electronic side, there is an interesting outdoor space here which uses a massive LARES-like system (not sure which) to create a variable virtual acoustic space in an open outdoor park environment.
Ah, LARES. It's a phenomenal system. I'm not really allowed to talk about it.
If you like LARES or Constellation, consider looking into Logic 7, which is on all of H/K's greater-than-stereo systems; IMO it wipes the floor with surround/ambisonics/wavefield synthesis.
That said, active acoustics are kind of a different and entirely separable ballgame from the discussion at hand here. If you wanna start digging into some of those nitty-gritty detailed discussions, I'm willing to play ball. But the first question I'd have for you is: between an active acoustics system being on and off, which do you prefer, and what qualities of the sound field do you think are contributing to those preferences? Note that there's no correct answer, and when there is an answer, it's never straightforward or easily explained.
I credit Griesinger's ideas (along with Theile, Williams, Wittek, Sengpiel, and a few others) as being a primary influence on the development of my own multichannel oddball microphone techniques (OMT) over the past decade, and I'm really happy that rocksuitcase and a few others here have started playing around with them as well. One of the most important takeaways for me is the critical importance of achieving an optimal balance between direct sound, early reflections, and late reverb, and figuring out ways to manage that balance as much as possible given the unique constraints under which we are recording, often via unconventional multiple-microphone arrays. The fundamentals of all that form a basis upon which I make a lot of unconventional and perhaps seemingly strange suggestions around here. It will be fantastic to bounce some of those ideas off someone like yourself with a better understanding of the academic and mathematical side of things. Very cool that you have a personal relationship with David, who is such an authority on perceptual acoustics and something of a personal hero of mine in this field.
This is the first I'm hearing about your OMT, I'll try to dig it up with a forum search (or, if you can provide links to specific posts you'd like to throw my way, I'm all eyes at the moment man).
I'd be happy to chat about any and all of your ideas here. At the end of the day, I love sound and audio systems - it's why I became an electrical engineer. I live/eat/sleep/breathe this stuff. And if you need any calibration or analysis code written, that stuff is second nature to me.
Okay then, I've a request! After he mentioned he'd make it available to interested parties, I sent a few emails to David maybe a year or so ago requesting a copy of his Windows app for frontal-speaker equal-loudness headphone equalization (referred to in the poster rocksuitcase linked above), yet I didn't receive a response, so I let it drop. I've thought of rigging up a test scenario myself to do the same, but have never gotten around to it. Any chance you may be able to make that happen? I think as tapers we can do a great deal to "close the loop" by properly equalizing our headphones to adapt to our own HRTF responses, and I'd think at least some of the more technically oriented here in the taper community would be open to that approach. We have a unique position as recordists to compare the live experience with the recorded event, "making the sausage" ourselves.
I've been half-wondering if any tapers will make their own ear-canal probes, but that's more of a leap!
Yeah I can make that happen, as I mentioned earlier in this post I spent a LOT of time working on WIWO's code. I'll PM you and we can take it from there.
"Closing the loop" with respect to HRTF calibration is tricky, and there's MUCH more at play with WIWO than just that calculation. But I will say I personally think it works really well.
Ear canal probes and personal HRTF/IR collections are another discussion for another day.
The presence of the human ear-brain system in the perception chain does absolutely amazing things. Compare the auditory perspective of the typical main microphone position over and just behind the conductor's head versus what is heard from the commonly preferred 5th-row human listener perspective, versus what the inverse of the two would sound like. Still, there are things we can do to extend "reach" in a perceptual sense on both the recording and the playback sides if we discard the two-channel bottleneck and reserve binaural equivalency for the final link in the chain, but that's a whole 'nother can of worms.
And interestingly enough, you've opened a bigger can of worms in the process (particularly with respect to what such an "inversion filter" would sound like). One of the problems is that we can't linearize or generalize the hearing system or source-room-receiver systems; any models we try to construct using traditional mathematical methods break down pretty quickly. We can make rough estimations in some cases and sometimes even build useful and functional models, but at the end of the day I've always found massive caveats with such systems. Note that this is also coming from someone trying desperately TO accomplish precisely that - I'm not unconvinced that my pursuit is one of madness with no closed-form solution.
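One small, concrete illustration of why naive inversion falls apart, using a toy frequency-domain example (nothing here is a real measured response; `beta` is just a Tikhonov-style regularization constant I've picked for illustration):

```python
import numpy as np

# A toy frequency-domain inversion of a response H. Where |H| is small
# (a deep notch), the naive inverse 1/H explodes into huge gain; adding
# a regularization term beta keeps the inverse bounded, at the cost of
# not fully correcting the notch. The values are purely illustrative.

def inverse_filter(H, beta=0.0):
    """Regularized inverse: conj(H) / (|H|^2 + beta)."""
    H = np.asarray(H, dtype=complex)
    return np.conj(H) / (np.abs(H) ** 2 + beta)

# A response that is flat except for one bin with a deep notch.
H = np.array([1.0, 1.0, 1e-4, 1.0], dtype=complex)

naive = inverse_filter(H)               # ~10,000x gain at the notch bin
regular = inverse_filter(H, beta=1e-2)  # bounded gain at the notch bin
```

And that's the well-behaved linear case; once the system itself is non-linear or varies with position and level, even this kind of compromise inverse stops being meaningful.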
I'll also mention... I'm a big supporter of dichotic and binaural listening. I do ~90% of my listening to auditory stimuli over headphones; it's just vastly preferable for my tastes. With the exception of the aforementioned Logic 7, I've yet to find a playback system better than stereo, and even then, given cost and ease of setup, I'll still take stereo any day of the week.
[quoting from the article you linked] I gave up using a soundfield microphone or a dummy head as a main microphone because you cannot derive a discrete center channel from them. I rely largely on Schoeps super cardioid microphones to capture as much direct sound from a section as possible with as little leakage from other sections and the fewest early reflections. I don’t think I can convince anyone about the virtues of this type of technique, but (nearly) every commercially successful engineer of classical (or pop) music does pretty much the same thing. They would gladly do something else if it worked better.
I started out figuring out how to record and reproduce a convincing ambient perspective in comparison with the live event, as that was something I found essential and very much lacking. Over time I've moved to a center supercardioid pointed directly at the source, as an attempt to exclude everything other than the direct sound as much as possible ("as much as possible" being the key phrase here), in combination with other mics intended to pick up early reflections, and others intended to isolate the late reverberant component from those other things as much as possible. In that way, four to six mics provide a limited but very useful ability to balance things which a single stereo pair can only achieve via perfect placement through careful listening, and can actually exceed that in a sort of super-stereo sense - the auditory equivalent of a zoom lens. Not for everyone, but doable within the limitations we are saddled with.
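In its simplest form, the balancing act described above reduces to a weighted sum of the separate feeds. A toy sketch (the signals and gain values are placeholders, not a recipe):

```python
import numpy as np

# With separate mic feeds that mostly capture direct sound, early
# reflections, and late reverb, the mix is a weighted sum, and the
# direct-to-reverberant balance becomes a fader move in post instead of
# a fixed property of one stereo pair's placement.

def balance_mix(direct, early, late, g_direct=1.0, g_early=0.5, g_late=0.35):
    """Sum the three feeds with independent gains (mono, for simplicity)."""
    n = min(len(direct), len(early), len(late))
    return g_direct * direct[:n] + g_early * early[:n] + g_late * late[:n]

rng = np.random.default_rng(1)
d, e, rev = (rng.standard_normal(100) for _ in range(3))

drier = balance_mix(d, e, rev, g_late=0.1)   # pull the room down
wetter = balance_mix(d, e, rev, g_late=0.8)  # push the room up
```

The hard part is everything this sketch assumes away: in real recordings the feeds leak into each other and arrive at different times, which is exactly why the mic placement and array design matter so much.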
I have huge respect for excellent 2-channel stereo recordings - they are a certain form of art, pure in their own way - and I certainly don't mean to belittle that approach in any way. It's just much harder to do really well due to the constraints, IMHO.
You've touched on MANY interesting points here, and IMO you're thinking about the problem appropriately and with the right intent and execution. We seriously need to have an in-person conversation sometime soon about a lot of this stuff.
Live music is so 3 dimensional.
Part of what makes live music recording different from studio recording is that, at its best, it is strongly dependent on conveying location acoustics in a believable way. I mean the listener's perception of the "performance within a space", rather than simply accommodating the effects of the local acoustics on a recording. Although outside the interest of most around here, in that way we are constrained by the playback side of things as much as by the recording side. For a few years I had another quote from the same article above, which bombdiggity posted a link to, in my signature line here at TS: "Two channel stereo is limiting. It is very helpful to reproduce the sound with more than two loudspeakers. Three speakers give you twice the localization accuracy as two if you fully utilize the center, and the sweet spot is also greatly broadened. With careful multi-miking and a good surround setup an exceedingly musical mix can be made."
I feared I was going off topic, but this is directly related to conveying live acoustics convincingly.
I disagree, though all I'll say for now is: keep in mind, at the end of the day we have two ears. I'm not saying that more tracks and more directional information are a bad thing, but at the end of the day you can accomplish some insane stuff with just a two-channel output. That's part of the magic of working in the studio, IMO, and it has formed and molded a lot of my habits in production: mixing down to two channels at the end of it all.