I think the point I was trying to make was not received as intended. My claim that a recording captured at 24/96 sounds more open and easier on the ears is not the same as saying that 24/96 is better quality.
I have no problem with that kind of quality claim. Quality is a pretty broadly defined term, and "more open and easier on the ears" certainly equates to better quality in my opinion.
However, your comments about needing different level settings and increased headroom are more or less at odds with the technical effects of varying the sample rate. Those attributes are what one might expect from a change of bit depth - say, going from 16 to 24 bits. A change of sample rate alone should not, technically, affect level or dynamics in any significant way when recording music. That is not to say the differences you perceived are not real; it is rather a question of what those differences you most certainly heard should be attributed to.
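To put rough numbers on that distinction (my own quick sketch, not anything from this thread): bit depth sets the theoretical dynamic range of linear PCM, while sample rate only sets the captured bandwidth.

```python
import math

def dynamic_range_db(bits: int) -> float:
    """Theoretical dynamic range of linear PCM: roughly 6.02 dB per bit."""
    return 20 * math.log10(2 ** bits)

def nyquist_khz(sample_rate_hz: int) -> float:
    """Highest representable frequency: half the sample rate, in kHz."""
    return sample_rate_hz / 2 / 1000

# Changing bit depth changes headroom and dynamic range...
print(f"16-bit: {dynamic_range_db(16):.1f} dB")   # ~96.3 dB
print(f"24-bit: {dynamic_range_db(24):.1f} dB")   # ~144.5 dB

# ...while changing sample rate only changes bandwidth.
print(f"48 kHz -> {nyquist_khz(48000):.0f} kHz bandwidth")   # 24 kHz
print(f"96 kHz -> {nyquist_khz(96000):.0f} kHz bandwidth")   # 48 kHz
```

So the extra headroom lives in the 16-vs-24-bit axis; the 48-vs-96 kHz axis only moves the Nyquist limit.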
Be careful to make sure you are really comparing apples to apples before drawing conclusions between fruit. Without comparing files made on identical machines recording the same feed at different rates, there are a multitude of extraneous variables that can easily overshadow a meaningful comparison. The difficult part is minimizing the influence of every variable except sample rate to a practical extent. If you are constrained to using one recorder for the test, at the very least change the sample rate setting during a break in the performance so that many of the other variables remain unchanged (the band, the style of music, the room, the levels, hopefully the number of people in the audience, etc.). Even then, you aren't comparing the exact same recorded segment, but running that test a few times before reaching a decision may lead to a clearer conclusion. Consider recording the first part at 96kHz and the second part at 48kHz one evening, then reversing the order another evening, to help cancel out the natural first-half / second-half bias toward better playing, more excitement, and a more dialed-in sound later in the program - which is just one variable still left among many in that situation.
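The counterbalancing idea above can be sketched as a simple schedule generator (purely my own illustration; the rates and night counts are just placeholders):

```python
def counterbalanced_schedule(nights: int, rates=(96000, 48000)):
    """Alternate which sample rate gets the first set each night,
    so first-half / second-half order effects (warm-up, crowd energy,
    dialed-in sound) tend to cancel out across nights."""
    schedule = []
    for night in range(nights):
        # Even-numbered nights use the given order, odd nights reverse it.
        first, second = rates if night % 2 == 0 else rates[::-1]
        schedule.append({"night": night + 1, "set 1": first, "set 2": second})
    return schedule

for entry in counterbalanced_schedule(4):
    print(entry)
```

Over four nights, each rate opens the show twice and closes it twice, which is the whole point of reversing the order on alternate evenings.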
Regardless of the technicalities of sample rate conversion and filtering (where I would not be surprised to find my admittedly limited understanding is either incorrect or outdated by 20 years), what interests me in a practical sense is still this:
[self quote from earlier in the thread]
What I wonder about is the original capture conversion of 48 vs 96 kHz using my particular equipment - not because I think any ultrasonic information may be audible, or that the high-quality resampling I can do on the computer may be audible, but because the recording equipment I'm using is modest and the ADCs in the recorders may perform better at 96 than 48 (or vice versa) for a number of reasons. I don't have to fully understand the technicalities of SRC implementations to test that, only understand the problem and how to minimize the pitfalls involved in running a reliable test. I have not yet made the effort to set up such a test, but my working conclusion is that I have yet to notice any sonic difference between recordings I've made at 48 and 96kHz that I can attribute to the difference in sample rate.