aaronji, howdy. As for why I mentioned post-processing in my reply to you, please look again at the earlier message of yours that I replied to. It quotes a message from ycoop as follows: "What about the extra 8 bits allows for more work in post without losing as much quality?"--to which you replied that essentially, more bits in a/d conversion = lower quantization error. To which, in turn, I replied that no, that's not a valid general statement; it all depends on the noise in the signal being converted.
If that noise isn't correlated with the desired signal, and is entirely above the noise floor of the converter, then more bits don't (can't!) improve the accuracy of the conversion. Hypothetically and simplistically, if the noise floor of a signal is at a 13-bit level, you could convert that signal with a 14-bit, 16-bit, 20-bit or 24-bit converter, all of which would give equal audible results (all other things being equal), because the noisy signal was the limiting factor, not the converter.
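If anyone wants to see that in numbers rather than take my word for it, here's a quick sketch in Python/NumPy. All the parameters are invented for illustration (a 997 Hz tone at half of full scale, white noise planted at roughly a 13-bit level, an idealized rounding quantizer standing in for the converter); it's a toy model, not converter design:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 48000
t = np.arange(fs) / fs

# A 997 Hz test tone at half of full scale, plus white noise whose RMS
# sits at roughly a 13-bit level. The noise, not the converter, then
# sets the dynamic range. (All numbers invented for illustration.)
signal = 0.5 * np.sin(2 * np.pi * 997 * t)
noisy = signal + rng.normal(0.0, 2.0 ** -13, fs)

def quantize(x, bits):
    """Round to the nearest step of an ideal converter, full scale +/-1."""
    step = 2.0 ** -(bits - 1)
    return np.round(x / step) * step

def snr_db(clean, converted):
    err = converted - clean
    return 10 * np.log10(np.mean(clean ** 2) / np.mean(err ** 2))

snrs = {bits: snr_db(signal, quantize(noisy, bits)) for bits in (14, 16, 20, 24)}
for bits, s in snrs.items():
    print(f"{bits:2d}-bit: {s:.1f} dB")
```

All four word lengths land within a fraction of a dB of each other, pinned near the noise-limited figure (about 69 dB for these made-up numbers) -- the signal's own noise floor, not the converter, is doing the limiting, and incidentally it also self-dithers the coarser quantizers.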
However, if the noise IS correlated with the desired signal, then it is a form of non-linear distortion and you've got yourself a defective converter. Most discussion assumes tacitly that the residual noise of a converter will be uncorrelated with the signal, because for decades now, that has normally been the case. That is another way of saying that the non-linear distortion is very low.
Still, the older ones among us, or anyone who's ever used an inadequately dithered A/D converter, will know what that kind of distortion sounds like, because some of the earliest available digital audio recording equipment didn't dither properly (ahem Sony, including their professional PCM-1600, which a lot of early CDs were recorded with). Some people describe it as "noise breathing" or "noise pumping" because it rises and falls with the signal level, particularly on very soft sounds such as reverberation decay (or imagine taking a 400 Hz tone and fading it down slowly to nothingness with a smooth, analog fader). An accompanying phenomenon is (or was) "digital deafness"--signals below the level of the lowest-order bit are simply blanked out.
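"Digital deafness" is easy to demonstrate on paper, too. Another toy sketch (same caveats: invented parameters, ideal rounding quantizer; the 0.4 LSB tone level and TPDF dither amplitude are my choices for the demo): a 400 Hz tone below one LSB vanishes entirely without dither, but survives once TPDF dither is added before quantizing:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 48000
t = np.arange(fs) / fs
step = 2.0 ** -15                    # LSB of an ideal 16-bit converter

# A 400 Hz tone whose peak is only 0.4 LSB -- below the lowest-order bit.
tone = 0.4 * step * np.sin(2 * np.pi * 400 * t)

def quantize(x):
    return np.round(x / step) * step

# Undithered: every sample rounds to zero -- "digital deafness".
dead = quantize(tone)
print("undithered all zero:", np.all(dead == 0))

# TPDF dither (sum of two uniform +/-0.5 LSB sources), added BEFORE quantizing:
dither = (rng.uniform(-0.5, 0.5, fs) + rng.uniform(-0.5, 0.5, fs)) * step
alive = quantize(tone + dither)

# The sub-LSB tone now survives, encoded in the duty cycle of the LSB's
# toggling; projecting the output onto the original tone shows it's there.
corr = np.dot(alive, tone) / np.dot(tone, tone)
print("dithered tone gain:", round(corr, 2))   # close to 1: tone preserved
```

The dithered output is noisier, of course, but the tone is recoverable below the least significant bit -- exactly the behavior the undithered early machines couldn't manage.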
Unfortunately, the way a lot of people think about digital audio is still based on the incredible flood of bullshit that was unleashed when digital audio was introduced around 1980, and which (when it was about anything real) harped greatly on shortcomings of conversion that were really due to not having proper dither. When an a/d converter is properly dithered, there's no increase in distortion at lower levels, no "stair-step levels" to be minimized by striving for more and more bits, nor is there "digital deafness" at the bottom of the range; it acts just like an analog channel as far as dynamic range is concerned. Since actual distortion in modern a/d converters is so low, I tend to shift the discussion away from the concept of "precision" in conversion (which would be a more appropriate concept if a signal were utterly noiseless, such as a computationally generated test signal) to dynamic range, which is much more pertinent to live, recorded audio.
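The "no increase in distortion at lower levels" point can be checked numerically as well. One last sketch under the same invented assumptions (ideal 16-bit rounding quantizer, a 400 Hz tone only 2.5 LSBs tall, third harmonic probed by simple projection): undithered, the error contains a hefty harmonic that rides on the signal; with TPDF dither it falls into the signal-independent noise:

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 48000
t = np.arange(fs) / fs
step = 2.0 ** -15                    # LSB of an ideal 16-bit converter

# A 400 Hz tone only 2.5 LSBs in amplitude -- where undithered
# quantization distortion is at its ugliest.
tone = 2.5 * step * np.sin(2 * np.pi * 400 * t)
probe = np.sin(2 * np.pi * 1200 * t)   # 3rd harmonic of 400 Hz

def quantize(x):
    return np.round(x / step) * step

def third_harmonic(err):
    """Amplitude of the error's 1200 Hz component, via projection."""
    return abs(2.0 / fs * np.dot(err, probe))

plain = third_harmonic(quantize(tone) - tone)
dither = (rng.uniform(-0.5, 0.5, fs) + rng.uniform(-0.5, 0.5, fs)) * step
dithered = third_harmonic(quantize(tone + dither) - tone)

print(f"undithered 3rd harmonic: {plain / step:.3f} LSB")
print(f"dithered   3rd harmonic: {dithered / step:.3f} LSB")
```

The undithered figure is a sizable fraction of an LSB and, crucially, it tracks the signal (that's the "noise pumping"); the dithered figure is just the flat dither noise, which is the analog-channel-like behavior I'm describing.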
--best regards