You have my deepest sympathy, DSAtz. Against ignorance even the gods themselves fight in vain.
The way I see it, 16/44 has in many tests proven indistinguishable from a live analog feed, or from so-called high-resolution formats; remember the recent AES SACD-versus-CD test, for example. Old cassettes have poor noise levels and poor frequency range, and were recorded with so-so microphones. Yet some people think that to copy these ancient tapes they need resolutions that even modern mic/preamp/ADC systems cannot fully utilize for live recording. Are the tapes really BETTER than live sound today? Something awful must have happened to the air!
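A quick back-of-the-envelope check makes the point about cassettes. This little sketch is my own illustration, using the standard 6.02n + 1.76 dB formula for ideal n-bit quantization SNR and an assumed ballpark figure for cassette performance:

```python
# Theoretical SNR of ideal n-bit quantization: 6.02*n + 1.76 dB
def quant_snr_db(bits):
    return 6.02 * bits + 1.76

print(f"16-bit: {quant_snr_db(16):.1f} dB")  # ~98 dB
print(f"24-bit: {quant_snr_db(24):.1f} dB")  # ~146 dB

# Even a good cassette deck with noise reduction manages maybe 60-70 dB SNR
# (assumed ballpark figure), so 16 bits already buries the tape's own
# noise floor by roughly 30 dB.
```

So for transferring old tapes, the CD format is nowhere near the bottleneck.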
By the way, I have the 16/44 versus 24/96 test file available at http://hosted.filefront.com/Jullepoika/
With that you can test whether your ears and system can tell any difference between a high-resolution format and plain old CD quality. Download the explanation TXT file as well.
I have been closely following the 16-bit/44.1 kHz versus other formats debate for many years. I came across a paper by David Griesinger (Lexicon DSP engineer), and this led me to his site, which is very interesting (although cluttered)...
http://world.std.com/~griesngr/
There's a lot of text... but about halfway down...
"And now for something completely different... Being currently over 60, and having in my youth studied information theory, I have a low tolerance for claims that "high definition" recording is anything but a marketing gimmick. I keep, like the Great Randi, trying to find a way to prove it. Well, I got the idea that maybe some of the presumably positive results on the audibility of frequencies above 18000Hz were due to intermodulation distortion, which would convert energy in the ultrasonic range into sonic frequencies. So I started measuring loudspeakers for distortion of different types - and looking at the HF content of current disks. The result is the paper below, which is a HOOT! Anytime you want a good laugh, take a read.
Slides from the AES convention in Banff on intermodulation distortion in loudspeakers and its relationship to "high definition" audio.
http://world.std.com/~griesngr/intermod.ppt
Conclusions -
1. Adding ultrasonics to a recording technique does NOT improve time resolution of typical signals – either for imaging or precision of tempo. The presumption that it does is based on a misunderstanding of both information theory and human physiology.
2. Kaoru and Shogo have shown that ultrasonic harmonics of a 2kHz signal are NOT audible in the absence of external (non-human) intermodulation distortion. (This, BTW, means they can't be heard in the real world, and that filtering them from the recording is a good thing, as they can only do harm.)
3. Their experiments put a limit on the possibility that a physiological non-linearity can make ultrasonic harmonics perceptible. They find that such a non-linearity does not exist at ultrasonic sound pressure levels below 80dB.
4. All commercial recordings tested by the author as of 6/1/03 contained either no ultrasonic information, or ultrasonic harmonics at levels more than 40dB below the fundamentals.
5. Our experiments suggest that the most important source of audible intermodulation for ultrasonics is in the electronics, not in the transducers.
Some consumer grade equipment makes a tacit admission of the inaudibility of frequencies above 22kHz by simply not reproducing them. Yet the advertising for these products claims the benefits of “higher resolution.”
6. Even assuming ultrasonics are audible, loudspeaker directivity creates an unusually tiny sweet spot, both horizontally and vertically.
7. A/B blind listening remains the gold standard for audio comparison.
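The intermodulation mechanism behind these conclusions is easy to demonstrate numerically. The sketch below is my own illustration, not Griesinger's code: it passes two purely ultrasonic tones through a mildly nonlinear transfer function, of the kind a distorting amplifier or driver might have, and a difference tone pops out at 2 kHz, squarely in the audible band:

```python
import numpy as np

fs = 192000                      # sample rate high enough for ultrasonic tones
t = np.arange(fs) / fs           # one second of time axis

# Two tones, both well above any limit of human hearing
f1, f2 = 24000.0, 26000.0
x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# A slightly nonlinear "speaker": the second-order term models distortion
y = x + 0.1 * x**2

# Spectrum of the distorted signal
spectrum = np.abs(np.fft.rfft(y)) / len(y)
freqs = np.fft.rfftfreq(len(y), 1 / fs)

# The quadratic term turns the tone pair into a difference tone at
# f2 - f1 = 2 kHz, even though nothing audible went in.
peak_2k = spectrum[np.argmin(np.abs(freqs - 2000))]
print(f"Level at 2 kHz: {peak_2k:.4f}")
```

Filter the ultrasonics out before the nonlinearity (as a 44.1 kHz system does by construction) and the 2 kHz product disappears, which is exactly why the quote calls filtering them "a good thing."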
digifish