dmonkey (and RobertNC), if my preamp behaved in the manner just described:
> When you turn up the knob on that box you are doing a lot more than just "adding dB in real time". You are changing the operating conditions of the analog stages in the box, and that is changing how the unit performs. ... It's a complicated combination of preferences and conditions.
... I would consider it a bad choice of preamp, if not defective. However, there are many audio engineers who wouldn't necessarily feel that way. There's a deep difference of attitude about this kind of thing, and that's what I'd like to alert you to. If you're not aware of it, you'll probably have a hard time making sense of the conflicting advice that people offer you. Let me try to outline the two viewpoints as fairly as I can--though I'm on one side and not the other, so I may not do full justice to the side that I don't agree with.
One side believes fundamentally that our ears are more sensitive than the best audio measuring equipment, and that there's no such thing as a sonically neutral audio component. According to them, every audio component (even a microphone cable or a line-level "interconnect") has its own "sonic signature," and when you choose microphones, preamps, A/D converters, recorders and even microphone cables and "interconnects," you're like a chef who blends ingredients so as to create a specific flavor experience.
Often the people in this group say that they consider microphones, preamps, mixing consoles, etc. to be like musical instruments. They consider their own work to be a form of direct participation in an artistic event.
The other side says that it is possible to have sonically neutral audio components in at least some cases--or to come so close that certain items of equipment effectively "drop out of the equation" as variables. If you're careful, these people believe, you can find neutral-sounding preamps, cables and digital recording devices. Even if they're not perfectly neutral sounding, they can be close enough that the remaining variables (such as room acoustics, or microphones and their placement) overwhelm them by orders of magnitude.
In general, the second group of people prefers preamps and other electronic components that they consider close to this ideal. They consider a "flavored" preamp or converter to be like the sum of a neutral preamp or converter plus a "flavoring" component that ought to be optional. In general, these people want to let the musicians be the artists; they're just trying to record what the musicians are doing.
Please note that the second group of people doesn't say that "all preamps sound alike," for example; they only say that a preamp should (and can, if care is taken in its design and use) deliver a signal that is essentially just an amplified copy of whatever you feed into it. There is always some minimum amount of noise that the laws of physics require, but apart from that, the output ought to be sonically indistinguishable in character from the input.
The thought behind this viewpoint is that there's no such thing as a "universal sonic improver"--no tweak that you can do to any audio signal, that will always make it sound better no matter what that signal was like in the first place. For every such tweak that may be built into component X (say, a mild boost in the low-mid frequency region for "warmth" and a gently increasing amount of low-order harmonic distortion to simulate "vintage tube sound"), there will be some recording that already has too much of the same thing, where any further addition will only make the result sound wrong. The second group of people prefers to record as "straight" as possible, and if there's an improvement to be made by boosting this or shifting that or reducing some other thing, you make it after the live recording is safely in the can.
There are other important differences in viewpoint between these two groups, but the more I go on about this, the more I risk stereotyping people (no pun unintended). And there's already way too much of that. Each person has his own reasons for his own opinions, but a surprising amount of the disagreement is over which beliefs are opinions and which are proven facts. That sometimes leads to the kind of discussion in which people talk past each other and secretly--or sometimes not so secretly--each think the other's point of view is foolish. It's not pleasant to be anywhere near that kind of situation.
Anyway, my answer to the original question in this thread would resemble some that were already posted. 16 bits gives you a huge dynamic range to begin with, and even though no real-world recording ever has a full 24 bits of resolution, the available 20 or 21 bits give you so much that you can really afford to relax about levels. Just get the peaks somewhere near the top--within, I dunno, maybe 10 dB of full scale--and then you can normalize and dither down to 16 bits at your leisure when you get the recording home.
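The arithmetic behind that is easy to check: each bit of resolution is worth about 6.02 dB of dynamic range, since each extra bit doubles the number of quantization levels. A quick sketch (plain Python; the 20-bit figure is just an illustrative estimate of real converter performance, as above):

```python
import math

def dynamic_range_db(bits):
    # Each bit doubles the number of quantization levels,
    # and each doubling is worth 20*log10(2) ~= 6.02 dB.
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16)))  # 16-bit: ~96 dB
print(round(dynamic_range_db(24)))  # nominal 24-bit: ~144 dB
print(round(dynamic_range_db(20)))  # realistic converter: ~120 dB
```

Even the "realistic" 120 dB comfortably exceeds the ~96 dB that a 16-bit release format can carry, which is why the spare bits buy you relaxed level setting.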
As long as your other components are well chosen and properly connected, and your gain settings make sense, with 24-bit recording there's no sonic penalty for having moderate rather than "maximum possible" peak levels--it even makes some sense to aim for moderate levels on purpose. If you're not sure what the peak sound levels will be, you can afford to set everything 6 dB low, if that's how it turns out. Doing so should greatly reduce the number of times that something accidentally lights your "OVER" light.
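Here's a back-of-envelope check of why moderate peaks cost nothing (plain Python; the 20 honest bits and 10 dB of headroom are illustrative assumptions, not measurements of any particular converter):

```python
import math

DB_PER_BIT = 20 * math.log10(2)   # ~6.02 dB of range per bit

converter_bits = 20               # assumed honest resolution of a 24-bit converter
headroom_db = 10                  # peaks deliberately kept 10 dB below full scale

# Range remaining between the recorded peaks and the converter's noise floor:
usable_db = converter_bits * DB_PER_BIT - headroom_db
print(round(usable_db))           # ~110 dB still available

# A 16-bit master only carries ~96 dB, so after normalizing the peaks
# back up and dithering down to 16 bits, the headroom cost you nothing audible.
print(usable_db > 16 * DB_PER_BIT)
```

The same arithmetic says that an extra 6 dB of "too low" gain costs you about one bit--a cheap insurance premium against lighting the "OVER" light.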
--best regards