chris319, when a channel's own noise floor is already well below the noise level of the signal being fed into that channel, you won't get any distinct audible benefit from reducing the channel's noise still further.
The concept of quantization error which you state with such admirable firmness is unfortunately not correct. If it were, then a midrange audio tone (say, 1 kHz) recorded at a very low level (say, -80 dBFS) on a 16-bit system would have significant harmonic distortion: overtones that shouldn't be present. That distortion would increase still further as the level of the signal decreased, until the sine wave ultimately turned into a square wave as only a single bit (the "least significant bit") toggled on and off. On a 24-bit system, the distortion at any given recording level should be considerably lower, since more bits are being used to characterize the level of the signal at any given moment; more details of the original signal are preserved, the "stairsteps" are smaller, the curves smoother, etc., etc., etc. ...
Does that description sound familiar? It's based on the way many people imagine digital audio works. And it's provably mistaken. If you set all of this up as an experiment and listen to and/or measure the results (which almost anyone on this forum can do in about ten minutes), NONE of the predictions hold up.
Distortion doesn't generally increase at lower recording levels, for example (that's mainly a matter of converter linearity, which is separate from "resolution"). Low-level tones don't become more square-wave-like if you look at the analog output of the recorder (which is what counts, since we only hear analog). The reason is dither: when the quantization is properly dithered (and the analog noise floor of any real converter effectively does this for you), the quantization error is decorrelated from the signal and comes out as a small, constant noise floor rather than as distortion that follows the signal down.
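If you'd rather measure it than take anyone's word for it, here is a minimal sketch of that ten-minute experiment in Python/NumPy. It simulates a 16-bit quantizer fed a 1 kHz tone at -80 dBFS and reads the levels of the 1 kHz and 3 kHz (third-harmonic) FFT bins, with and without dither. The sample rate, helper functions, and the choice of TPDF dither are my own assumptions for illustration, not anything from the post above:

```python
import numpy as np

fs = 48000                       # sample rate (Hz); chosen so FFT bins land on 1 Hz
n = fs                           # one second of signal
t = np.arange(n) / fs
full_scale = 2 ** 15             # 16-bit full scale, in LSB counts

# 1 kHz tone at -80 dBFS: its peak amplitude is only about 3.3 LSB
amp = full_scale * 10 ** (-80 / 20)
x = amp * np.sin(2 * np.pi * 1000 * t)

def quantize(signal, dither=False):
    """Round to whole LSB steps, optionally adding +/-1 LSB TPDF dither first."""
    d = 0.0
    if dither:
        # Triangular-PDF dither: the sum of two independent uniform sources
        d = np.random.uniform(-0.5, 0.5, n) + np.random.uniform(-0.5, 0.5, n)
    return np.round(signal + d)

def bin_level_dbfs(signal, freq_hz):
    """Level of one FFT bin in dBFS, using a Hann window (1 Hz bin spacing)."""
    w = np.hanning(n)
    spec = np.abs(np.fft.rfft(signal * w)) / (w.sum() / 2)
    return 20 * np.log10(spec[freq_hz] / full_scale + 1e-30)

for use_dither in (False, True):
    y = quantize(x, dither=use_dither)
    print(f"dither={use_dither!s:5}  "
          f"1 kHz: {bin_level_dbfs(y, 1000):7.1f} dBFS   "
          f"3 kHz: {bin_level_dbfs(y, 3000):7.1f} dBFS")
```

If this behaves like any textbook dithered quantizer, the tone itself reads at about -80 dBFS either way, while the 3 kHz product sits well above the noise in the undithered case (the signal-correlated grunge the misconception predicts) and vanishes into a flat, signal-independent noise floor once TPDF dither is applied.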
The sad thing is, this was all demonstrated publicly and explained mathematically 25+ years ago by now--but people who don't know about it are still "correcting" other people who really have it right.
--best regards