Church-Audio, you write as if there were general agreement that higher sampling rates "sound better." I know a lot of people in professional audio, and what I find instead is that people who believe, as you do, in "dissection" (a very good word for the theory you follow) expect a higher sampling rate to bring increased waveform fidelity, and that expectation influences what they perceive.
If you were to make fair, direct comparisons when you didn't know in advance what sampling rate you were listening to, I think you would find that it is nearly impossible to hear differences except with specially concocted test signals--and then (for most people) only with some ear training in advance. Those test signals have to be oriented to the specific characteristics of the particular filters used in the A/Ds and D/As; music and speech might hit those specific bit patterns once in a gazillion years, or they might not.
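To make "specially concocted" concrete, here's the flavor of signal I mean--a sketch only (Python with numpy; the 21 kHz figure is my illustrative assumption, not a universal magic number): a tone parked in the transition band of a typical 44.1 kHz anti-alias/reconstruction filter, where one converter's filter can genuinely behave differently from another's, and where ordinary music almost never puts isolated energy:

    import numpy as np

    fs = 44100                          # CD sampling rate
    t = np.arange(int(fs * 2.0)) / fs   # two seconds of time axis
    f_test = 21000.0                    # just below Nyquist (22.05 kHz),
                                        # inside a typical filter's
                                        # transition band (assumed value)
    sig = 0.5 * np.sin(2 * np.pi * f_test * t)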
I could tell you quite a few stories about situations in which very prominent classical producers and engineers--guys who think they can tell when silver vs. copper wire is used in the light bulbs over their heads--didn't realize that they were monitoring through absolutely conventional 16-bit 44.1 kHz A/D and D/A converters. Under real-world recording conditions, when you don't set yourself up to hear what you expect to hear, the better converters can be essentially audibly transparent on program material nearly all the time.
Finally, the expectations to which I refer rest on a very basic misunderstanding about digital audio, since the sampling and reconstruction process in fact does not dissect audio waveforms in the way that you seem to mean. All the in-between moments of the varying analog signal are encompassed--not just the discrete moments at which samples are recorded--because for a band-limited signal, the reconstruction filter's output is the one and only band-limited waveform that passes through all the recorded samples; the in-between values are completely determined. That can't be seen in the visual model most people have of the process, which shows the effects of sampling but not of the analog signal's subsequent reconstruction. However, it can be shown both mathematically and empirically (with analog oscilloscope traces and through unbiased "blind" listening experiments) that it is so.
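If you'd rather not take that on faith, here is a minimal sketch of Whittaker-Shannon reconstruction (Python with numpy; the 48 kHz rate, 1 kHz tone, and window size are arbitrary choices of mine, not anything from your post). It computes the analog signal's value at a moment that falls *between* two samples, using nothing but the recorded samples, and lands on the true value:

    import numpy as np

    fs = 48000                  # assumed sample rate
    T = 1.0 / fs
    f0 = 1000.0                 # assumed 1 kHz test tone
    n = np.arange(-2000, 2001)  # a finite window of sample indices
    x = np.sin(2 * np.pi * f0 * n * T)   # the recorded samples

    def reconstruct(t):
        # Whittaker-Shannon interpolation: evaluate the unique
        # band-limited signal passing through the samples at time t
        return np.sum(x * np.sinc((t - n * T) / T))

    t_mid = 3.37 * T            # a moment between samples 3 and 4
    print(reconstruct(t_mid))              # recovered in-between value
    print(np.sin(2 * np.pi * f0 * t_mid))  # the true analog value

The two printed numbers agree; the tiny residue is only the truncation of what is, in the mathematics, an infinite sum.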
If it were not so, then the signal-to-noise ratio of the CD medium would be worse than that of an analog cassette. Think about it: if the system really worked the way you imagine, there would be no signal energy during playback in the moments between the samples, and the power in the "pulses" or "spikes" would need to be distributed among the much greater expanses of time in which there was no signal energy, like an "area under the curve" problem. That would greatly diminish the dynamic range of the system, because the in-band signal level recovered from such a pulse train falls in proportion to the pulses' width: if the pulses were (say) 1% as wide as the intervals between them, 20*log10(100) = 40 dB of dynamic range would be lost, and so on.
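That arithmetic is easy to check by simulation. Here's a sketch (again Python/numpy; the tone, the rates, and the 100x oversampled time grid standing in for continuous time are all my assumptions) that holds each sample for its whole interval, then for only 1% of it, and measures how much in-band signal level survives:

    import numpy as np

    fs = 48000          # PCM sample rate (assumed)
    os = 100            # fine time grid: 100 points per sample interval
    N = 4800            # exactly 100 cycles of the tone below, no leakage
    f0 = 1000.0
    x = np.sin(2 * np.pi * f0 * np.arange(N) / fs)   # recorded samples

    def pam(samples, duty):
        # hold each sample for `duty` of its interval, zero elsewhere
        width = max(1, int(round(duty * os)))
        y = np.zeros(len(samples) * os)
        for i, v in enumerate(samples):
            y[i * os : i * os + width] = v
        return y

    def level_db(y):
        # level of the f0 component, read off a single DFT bin
        spec = np.fft.rfft(y) / len(y)
        freqs = np.fft.rfftfreq(len(y), d=1.0 / (fs * os))
        k = np.argmin(np.abs(freqs - f0))
        return 20 * np.log10(2 * np.abs(spec[k]))

    full  = level_db(pam(x, 1.00))   # ordinary full-interval hold
    spiky = level_db(pam(x, 0.01))   # pulses 1% as wide as the interval
    print(full - spiky)              # ~40 dB of signal level gone

The analog noise floor wouldn't move an inch, so every one of those 40 dB would come straight out of the system's dynamic range.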
You could say that the extent to which a PCM recording system approaches its theoretical maximum signal-to-noise ratio is the extent to which its recorded samples actually cover the entire sampling interval that each of them describes. Since the actual limit is usually set by the associated analog electronics, that should tell you something pretty important about your theory.
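For reference, the theoretical ceiling I mean is the familiar 6.02*N + 1.76 dB figure for an ideal N-bit quantizer driven by a full-scale sine--a quick back-of-the-envelope check, nothing more:

    def pcm_max_snr_db(bits):
        # full-scale sine vs. quantization noise for ideal N-bit PCM
        return 6.02 * bits + 1.76

    print(pcm_max_snr_db(16))   # ~98 dB

Good real-world 16-bit converters land within a few dB of that number, and what keeps them from it is their analog stages--not any supposed gaps between the samples.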
--best regards