raymonda, "resolution" has no generally accepted definition, and is rarely used by audio engineers unless they're engaged in marketing.
My own definition, if I used this term (which I generally avoid doing, because it seems irresponsible), might be based on the audible difference between the input to a process (such as digital recording) and the output of that same process. If that difference--the inaccuracy or error of the process--is difficult or impossible to hear, the resolution is high. If the difference is easily audible, the resolution is low. If the difference is zero, the process is perfect, and please have your engineer give me a call!
The notion of the error isn't just theoretical, since you can derive an actual error signal by subtracting the input from the output; you can then listen to this signal, measure it in various ways, and try to relate the kinds and amounts of the error to its audibility. Its amplitude isn't the only thing that matters; its relationship to the original signal is important as well, and this is difficult to include in any notion of resolution that is merely one-dimensional (high vs. low).
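Here's a minimal sketch of that subtraction, using a hypothetical 8-bit round-to-nearest quantizer as the "process" and a 1 kHz test tone as the program material (all of these specifics are just illustrative choices, not anything standardized):

```python
import math

def quantize(x, bits):
    # Round x (in the range -1..1) to the nearest step of a signed
    # grid with the given bit depth -- this rounding IS the process.
    step = 2.0 / (2 ** bits)
    return round(x / step) * step

# A 1 kHz sine at 48 kHz stands in for the program material.
N, FS = 4800, 48000
inp = [math.sin(2 * math.pi * 1000 * i / FS) for i in range(N)]
out = [quantize(s, 8) for s in inp]

# The error signal is simply output minus input, sample by sample.
err = [o - s for o, s in zip(out, inp)]

# Its amplitude can never exceed half a quantization step:
print(max(abs(e) for e in err) <= 1.0 / (2 ** 8))  # -> True
```

You could write `err` out as a sound file and listen to it directly; with truncation like this it sounds like a buzzy, distorted shadow of the program, which is exactly the point made below about correlation.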
For example, in an undithered (truncated) digital recording that doesn't have enough bits, the error signal's amplitude will vary in response to changes in level in the program material, and you can hear distorted but recognizable parts of the program material: components that are no longer in the recorded signal, and/or whose inverse is present in the recording as a form of distortion. That's what sounds so wrong in undithered digital recordings that lack sufficient bit depth. But in a properly dithered digital recording, regardless of bit depth, the error signal consists entirely of random noise. It varies in level somewhat, as random noise must do--but it does so "on its own"; there's no correlation between the momentary noise level and anything that's going on in the program material at that time.
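The correlation point can be checked numerically. The sketch below (again with made-up parameters: an 8-bit grid, a tone only 1.5 steps tall, TPDF dither built from two uniform sources) measures how much of the error signal sits at the test tone's own frequency. Without dither, the error contains a distorted copy of the tone; with dither, it's just noise with no component tied to the tone:

```python
import math, random

LSB = 2.0 / (2 ** 8)          # step size of a hypothetical 8-bit recorder
FS, F, N = 48000, 1000, 4800  # sample rate, test tone, 100 whole cycles

def quantize(x, dither):
    # TPDF dither: sum of two independent uniform sources, +-1 LSB total.
    d = (random.random() + random.random() - 1.0) * LSB if dither else 0.0
    return round((x + d) / LSB) * LSB

def error_at_tone(dither):
    random.seed(0)  # fixed seed so the demonstration is repeatable
    sig = [1.5 * LSB * math.sin(2 * math.pi * F * i / FS) for i in range(N)]
    err = [quantize(s, dither) - s for s in sig]
    # Magnitude of the error's DFT bin at the tone's own frequency:
    k = F * N // FS
    re = sum(e * math.cos(2 * math.pi * k * i / N) for i, e in enumerate(err))
    im = sum(e * math.sin(2 * math.pi * k * i / N) for i, e in enumerate(err))
    return math.hypot(re, im)

# Undithered error carries the tone; dithered error doesn't:
print(error_at_tone(False) > 3 * error_at_tone(True))  # -> True
```

The undithered error's energy at the tone frequency is what you hear as that "wrong"-sounding distortion; the dithered error spreads that energy evenly across the spectrum as benign hiss.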
Proper dithering lets you hear "down into the noise" and perceive actual details of the sound that the recording has preserved. It gives digital recordings the exact same resolution as a good analog recording with the same signal-to-noise ratio would have. There's no more "stairstep distortion" and the only loss of audible signal components that occurs is due to masking by noise--exactly as with analog recording. The noise that exists in all signals limits the "resolution" of any recording that you can possibly make of them. If your recording system allows the noise of the incoming signal to predominate, to the point where you can't humanly distinguish whether the recording system's noise is there or not, then that's as high-resolution a recording as you can possibly make, analog OR digital, of that signal.
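One way to see the "as high-resolution as you can possibly make" claim is to record the same noisy source at two different bit depths and measure the result. In this sketch (all parameters illustrative: a tone with its own roughly -57 dB noise floor, a TPDF-dithered quantizer standing in for an ideal recorder), the source's own noise dominates, so 16-bit and 24-bit captures measure essentially identically:

```python
import math, random

def rms(x):
    return math.sqrt(sum(v * v for v in x) / len(x))

def record(src, bits):
    # TPDF-dithered quantizer standing in for an ideal digital recorder.
    step = 2.0 / (2 ** bits)
    random.seed(2)  # identical dither stream for both bit depths
    return [round((s + (random.random() + random.random() - 1.0) * step)
                  / step) * step for s in src]

# Program material: a tone plus the source's own noise floor.
random.seed(1)
N, FS = 48000, 48000
tone = [0.5 * math.sin(2 * math.pi * 1000 * i / FS) for i in range(N)]
src = [t + random.gauss(0, 0.0005) for t in tone]

def snr_db(bits):
    # Total error of the chain: recorded output minus the clean tone.
    err = [o - t for o, t in zip(record(src, bits), tone)]
    return 20 * math.log10(rms(tone) / rms(err))

# The source's noise predominates, so the extra 8 bits buy nothing:
print(abs(snr_db(16) - snr_db(24)) < 0.05)  # -> True
```

The 16-bit recorder's dithered noise floor sits some 30 dB below this source's own noise, so its contribution to the total is immeasurably small--which is the whole argument against paying for more bits than the source contains.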
Beyond that point, where digital recordings are concerned, adding more bits doesn't improve the sound any. But a lot of people don't realize this, so they're willing to pay extra for 24/96 transfers of original recordings that have considerably less than 16-bit content to begin with. Insert your favorite pun here regarding the invisible "hand" of the market and what kind of "job" it is performing for those people ...
--best regards