But on the bit depth thing you are right in one sense: it's a matter of perspective and could be set up either way. But the reality is that 'full range' is set by the engineering standards of the medium and we don't get to change it at will. Theoretically we could make an 8-bit system with a 100 dB dynamic range and very coarse loudness steps, or a 32-bit system with a total range of 50 dB and way more incremental levels than necessary, but neither is a real-world option.
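For ordinary linear PCM, the tradeoff above is pinned down by arithmetic: each bit is worth about 6 dB of dynamic range. A quick sketch of my own (the function name and numbers are illustrative, nothing here comes from any particular converter):

```python
# Theoretical SNR of a full-scale sine in an ideal linear PCM system:
# roughly 6.02 dB per bit (20*log10(2) per bit), plus ~1.76 dB from the
# statistics of quantization noise versus a full-scale sine.
import math

def pcm_dynamic_range_db(bits):
    return 20 * math.log10(2) * bits + 1.76

for bits in (8, 16, 24):
    print(bits, "bit ->", round(pcm_dynamic_range_db(bits), 1), "dB")
# 8 bit -> 49.9 dB, 16 bit -> 98.1 dB, 24 bit -> 146.3 dB
```

Which is why an 8-bit system with 100 dB of range would have to be nonlinear; in linear PCM the step size and the total range come as a package.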
I think we can choose the "full range" of the A/D for our application, or amplify/attenuate the signal to match the A/D. On the industrial side, I choose a 0-10 V A/D to read a pressure transducer where 0-10 V = 0-100 psi, but I use a different A/D to read millivolts and thermocouples. The pressure transducer probably has a strain gauge on a diaphragm instead of a capacitance-based capsule, but that tiny signal is amplified to match my A/D. I could plug a ribbon mic directly into the back of my HD24... but it's a poor application because I know the mic's output is tiny compared to my A/D's input range of +22 dBu. Most of us choose a packaged audio system, like a V3 or an R44 recorder, and we let the manufacturer make those choices for us. I have no idea what its internal A/D range is, and I don't really care; I adjust the analog section to match the signal to the A/D's range. It's all just bits between +/- full scale.
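That "match the signal to the A/D" idea can be sketched with made-up numbers (a hypothetical 12-bit, 0-10 V converter and an invented gain helper; none of this is from any real device):

```python
# Illustrative only: map converter counts back to engineering units,
# and compute how much analog gain brings a small signal up to the
# converter's full-scale range.
import math

ADC_BITS = 12
ADC_FS_VOLTS = 10.0   # full-scale input of the hypothetical A/D

def counts_to_psi(counts):
    """0-10 V maps to 0-100 psi on the hypothetical transducer."""
    volts = counts / (2**ADC_BITS - 1) * ADC_FS_VOLTS
    return volts / ADC_FS_VOLTS * 100.0

def gain_db_needed(signal_vrms, target_vrms):
    """Analog gain (dB) to bring a tiny signal up to the A/D's range."""
    return 20 * math.log10(target_vrms / signal_vrms)

print(round(counts_to_psi(4095), 1))          # full scale -> 100.0 psi
print(round(gain_db_needed(0.002, 9.8), 1))   # a 2 mV mic-level signal needs ~73.8 dB
```

Same principle either way: the converter just spans its range; the analog side's job is to put the signal inside it.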
Going back to my previous statements (2 pages ago), I'm really isolating just the A/D and disregarding any analog portion of the circuit. Dynamic range and frequency response are attributes of the analog signal, either before the A/D or its digital equivalent after conversion. The A/D just plots those XY points as accurately as it can at any given moment in time. If it's a positive voltage you get positive numbers; negative voltage = negative numbers. If the signal has low amplitude, you end up with small numbers, and you sample that rapidly. What someone said a couple of pages ago was that 24 bit by itself gives you more headroom. I say no... headroom belongs to the analog realm. I adjust the analog to match the A/D; headroom helps avoid clipping regardless of the A/D range. If you want to mess with the signal, do it at the analog stage before the A/D, or in a computer after the A/D. But the A/D stage should be as accurate as possible.
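The "plotting XY points" job of the A/D is simple enough to sketch: sample a voltage, map it to a signed integer code, and pin at the rails when the signal exceeds full scale. A minimal sketch with illustrative numbers (a 16-bit converter with +/-1.0 V treated as full scale; not any particular chip):

```python
# Minimal model of the A/D stage: positive volts -> positive codes,
# small signals -> small numbers, over-range -> clipped at the rails.
BITS = 16
FULL_SCALE = 1.0   # treat +/-1.0 V as +/- full scale

def quantize(volts):
    code = round(volts / FULL_SCALE * (2**(BITS - 1) - 1))
    # beyond full scale the converter just pins at the rail
    return max(-(2**(BITS - 1)), min(2**(BITS - 1) - 1, code))

print(quantize(0.5))     # half of full scale -> 16384
print(quantize(-0.001))  # small negative voltage -> small negative number (-33)
print(quantize(2.0))     # over full scale -> clipped at 32767
```

Note the clipping happens because the *analog* signal exceeded the range, which is the headroom point: leave margin on the analog side and the converter never has to pin.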
I need to amend my statement about "hiss", where I assumed a real-world practicality for "tapers" (99% of the people on this board), but I shouldn't have generalized. In any kind of live setting there is considerable noise in the room... whether it's chatty drunks or people sitting quietly listening... there is no such thing as "dead quiet". People are breathing and other stuff... and that is generally louder than the self-noise of my mics or preamp. I never come close to 96 dB of dynamic range, so running 16 bit with 6 or 12 dB of headroom is fine. That applies to 99% of us, but not all. In a studio with good analog gear, the quantization noise from 16 bit could get you, and then dithering is more important. My primary beef is with the posters who make the 16-bit tapers feel like second-class citizens... they keep regurgitating the same rhetoric fed to them without thinking or understanding, and it's not relevant to most of the people on this board. For most of us, a 16-bit A/D would not be the weakest link in the chain.
If the change of signal amplitude represented by each bit weren't the same in 16 and 24 bit files, then we'd need algorithms to convert from one bit depth to the other; simply truncating the least significant bits wouldn't work. If the amplitude value represented by each bit was different, we'd get the dynamic equivalent of playing back the file at the wrong sample rate: instead of pitch and speed change we'd get dynamic range expansion or compression.
Again it's all just percentage of +/- max, like sine wave values ranging from -1 to +1. The algorithms to convert from one bit depth to another are so routine we just take them for granted. For 16 > 24 bit, you simply pad with nulls. That's an algorithm. For 24 > 16 bit you can truncate or apply a wide variety of dithering algorithms. Stored in a WAV file header is a block which tells you if you are reading 16-bit or 24-bit data... get that header wrong, and it's scrambled digi-noise. As I recall, the 24-bit RIFF file format is something like 16 bits of data for left, 16 bits of data for right, then the 8 LSBs for right and left. Which is where my other analogy of 16-bit data with 1/256 fractions comes in. I'm familiar with it because I wrote code which rips through files to create what I hoped would be a better declipper/mousetrap (trying to save a particular recording, which didn't help much).
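The pad/truncate/dither routines above fit in a few lines of Python (my own illustration: function names are made up, samples are bare signed ints rather than packed WAV bytes, and the dither shown is simple TPDF, just one of the "wide variety" of options):

```python
# Bit-depth conversion on raw integer sample values (illustrative only).
import random

def s16_to_s24(sample16):
    """16 -> 24 bit: pad the low 8 bits with zeros (value scales by 256)."""
    return sample16 << 8

def s24_to_s16_truncate(sample24):
    """24 -> 16 bit: throw away the 8 least significant bits."""
    return sample24 >> 8

def s24_to_s16_dithered(sample24, rng=random):
    """24 -> 16 bit with simple TPDF dither: add noise spanning about
    +/-1 LSB of the 16-bit result before truncating, so the quantization
    error is decorrelated from the signal instead of becoming distortion."""
    noise = rng.randint(0, 255) + rng.randint(0, 255) - 255
    return (sample24 + noise) >> 8

x = 12345
assert s24_to_s16_truncate(s16_to_s24(x)) == x   # 16 -> 24 -> 16 is lossless
```

The round trip in the last line is the point: padding then truncating gets you exactly the original 16-bit value, which is only possible because each bit means the same fraction of full scale at either depth.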