Hi, uncleyug. Let's try to reason through this carefully with a practical example.
Let's say I set up a pair of microphones to record a concert. I've been to a rehearsal beforehand, so I know what to expect as far as the sound levels are concerned--and as usual, I've allowed 3 or 4 dB of headroom for last-minute enthusiasm on the part of the musicians. From then on, the dynamic range of the recording is up to whatever the musicians decide to do. Whether they use that 3 or 4 dB at the top or not, the recording might be sonically first-rate or not, depending on all the usual variables.
However, 3 or 4 dB is easily audible as a difference in loudness--and even a difference of 1 dB or less in playback level can have a real impact on people's opinions about the quality of a recording. Hi-fi salespeople have known this for years: If you can sneak the level up even 1/2 or 3/4 of a dB on the pair of loudspeakers you want the customer to buy, people start hearing "more detail" and "more emotion" and "better imaging" and "more soundstage depth"--all sorts of qualitative differences--rather than noticing, "Gee, it sounds a tiny bit louder than it did a moment ago."
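For anyone who wants to see how small those changes are in raw terms, the dB-to-amplitude arithmetic is quick to check. This is just a sketch in Python of the standard conversion, nothing specific to any particular gear:

    # Convert a level change in dB to an amplitude (voltage) ratio: 10^(dB/20)
    for db in (0.5, 0.75, 1.0, 3.5):
        ratio = 10 ** (db / 20)
        print(f"{db} dB -> amplitude x {ratio:.3f} ({(ratio - 1) * 100:.1f}% up)")
    # 0.5 dB is only about a 6% voltage increase -- yet listeners report
    # "more detail" and "better imaging" rather than "slightly louder".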
So suppose there's a 3 or 4 dB difference between my recording and (say) one that the client got from another engineer a month before. My recording might sound worlds better if only the client would make the comparison with the playback levels set scrupulously equal. But the average musician who's not an engineer will simply listen to one recording, then the other, and make a snap judgment: My recording just doesn't have the "life" and "energy" of the other (louder) one.
By the way, this kind of situation has led many good people to firm but mistaken conclusions about all kinds of things in audio. And those people really have heard what they say they've heard--but the uncontrolled conditions make it quite impossible for them to know why things sounded as they did.
I mean--on one level I really can't argue with the client; he pops in the other guy's CD, and he pops in mine, and the other guy's CD sounds better. If there's something I can do to make my work sound better in such a comparison, the client naturally assumes that I should already have done it--what am I waiting for?
So if I'm in one of those situations where I absolutely must make things sound as good as possible in a comparison, I will recopy my recording so that it peaks just 1 dB or even 1/2 dB below digital full scale. And the moment that kind of signal processing comes into the picture, it's better if I'm starting from a 20- or 24-bit live recording.
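If it helps to see what that "recopy" step amounts to, here's a minimal sketch in Python with NumPy--any decent editor's "normalize" function does the same thing; the function name and the -0.5 dBFS target are just mine for illustration:

    import numpy as np

    def normalize_to_peak(samples, target_dbfs=-0.5):
        """Scale a float signal (full scale = +/-1.0) so its peak hits target_dbfs."""
        peak = np.max(np.abs(samples))       # assumes a non-silent recording
        target = 10 ** (target_dbfs / 20)    # -0.5 dBFS -> about 0.944 of full scale
        return samples * (target / peak)     # one gain, applied to signal AND noise alike

    # x = recording loaded as floats in the range -1.0 .. +1.0
    # y = normalize_to_peak(x, -0.5)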
Think it through: If I start from a 16-bit live original and copy it in order to raise the levels by 3.5 dB, then even in the hypothetical best case, the noise floor of the result is 3.5 dB higher than before--the gain raises the quantization noise right along with the music. But if I start from an 18-, 20- or 24-bit live recording, its noise floor rises by the same 3.5 dB, yet it began anywhere from roughly 12 to 48 dB lower, so it still lands well below the 16-bit floor of the finished CD; in practice the recording's audible noise floor doesn't increase at all. The resulting CD will in fact have 3.5 dB wider dynamic range than if I boost a 16-bit recording by the same amount.
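The arithmetic behind that, using the usual rule of thumb of roughly 6 dB of dynamic range per bit (ideal figures--real converters never quite reach their theoretical floors, but the relationships hold):

    def quantization_floor_dbfs(bits):
        # Theoretical noise floor of an ideal N-bit quantizer, in dBFS
        return -(6.02 * bits + 1.76)

    boost = 3.5
    print(quantization_floor_dbfs(16) + boost)  # ~ -94.6 dBFS: the noise rode up with the gain
    print(quantization_floor_dbfs(24) + boost)  # ~ -142.7 dBFS: still about 45 dB below the
                                                # finished CD's own 16-bit floor (~ -98 dBFS)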
If you want to make the comparison more realistic by considering dither, then you can simply consider the dither itself to be the digital noise floor of the recording. The one which starts from a dithered 16-bit original will still be 3.5 dB noisier than the one that starts from dithered 18, 20 or 24 bits.
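You can watch that happen in a toy simulation. The sketch below is my own construction, not anyone's actual mastering chain: it dithers digital silence at 16 and at 24 bits, boosts both by 3.5 dB, re-dithers each to 16 bits for the CD, and measures the resulting floors. The gap comes out a little over 3.5 dB, since the second dither pass adds its own small share on top of the boosted floor:

    import numpy as np

    rng = np.random.default_rng(0)
    boost = 10 ** (3.5 / 20)

    def dither_quantize(x, bits):
        """Quantize to 'bits' with TPDF dither; full scale = +/-1.0."""
        lsb = 2.0 ** (1 - bits)
        tpdf = (rng.uniform(-0.5, 0.5, x.size) + rng.uniform(-0.5, 0.5, x.size)) * lsb
        return np.round((x + tpdf) / lsb) * lsb

    def rms_dbfs(x):
        return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

    silence = np.zeros(1_000_000)
    # Path A: dithered 16-bit original -> +3.5 dB -> dithered down to 16 bits
    a = dither_quantize(dither_quantize(silence, 16) * boost, 16)
    # Path B: dithered 24-bit original -> +3.5 dB -> dithered down to 16 bits
    b = dither_quantize(dither_quantize(silence, 24) * boost, 16)
    print(rms_dbfs(a) - rms_dbfs(b))  # roughly 5 dB: at least the 3.5 dB of gain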
What mitigates this considerably in many cases is the acoustical noise in the recording venue--but that noise isn't constant across time or frequency. Our ears and brains are most sensitive in the 2 - 5 kHz region, and there every dB and every bit can really count, at least in the classical recording that I mainly do.
--best regards