Yes, bigger is better, but once we get past the hearing limits of humans (20 000 Hz for youngsters, maybe 16 kHz for me...) the improvement cannot be heard (sampling rates over 44.1 kHz). Some argue that music has overtones which cause lower-frequency interference signals, and that is true, but by recording up to the hearing limit we already catch all the signals we can hear! Using 96 kHz might, might be useful with heavy editing, especially with slow-down effects. 24 bits is useful from a level-setting point of view, but for the final product 16 bits is good enough. Like I said before, hardly anybody has listening systems or rooms that can exploit even that. It is not the 16 bits that set the limit; the analog electronics, microphones and loudspeaker/room systems are the bottlenecks. It is just so much easier and cheaper to use 24 bits and "hear" the difference than to spend $100,000 refurbishing the living room to a state-of-the-art studio level.
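For anyone who wants the arithmetic behind the "recording up to the hearing limit" point, here is a minimal sketch of the Nyquist reasoning; the frequencies are just illustrative numbers:

```python
# Sketch of the Nyquist argument: a sampling rate of fs captures every
# frequency below fs/2, so 44.1 kHz already covers the ~20 kHz upper
# limit of human hearing with margin to spare.
HEARING_LIMIT_KHZ = 20.0  # rough upper limit of human hearing

for fs_khz in (44.1, 48.0, 96.0):
    nyquist = fs_khz / 2
    print(f"{fs_khz} kHz sampling -> captures up to {nyquist:.2f} kHz "
          f"(hearing limit {HEARING_LIMIT_KHZ} kHz: "
          f"{'covered' if nyquist >= HEARING_LIMIT_KHZ else 'not covered'})")
```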
----------
One more observation about the connection between bit depth and dynamic range/"resolution". The A/D converter is a linear device: double the voltage = double the sample value, which simply means one more bit in the binary system. That gives the 6 dB/bit result. If we wanted to use the 16 -> 24 bit improvement for more "accuracy" or something, we would have to compress the analog signal before digitizing and then expand it again in the analog domain. That would do far more damage to the waveform than just digitizing it raw, like we do.
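If the 6 dB/bit figure seems to come out of nowhere, here is a quick sketch of where it comes from: doubling the amplitude is 20*log10(2) ≈ 6.02 dB, and each extra bit doubles the number of representable levels.

```python
# Why one extra bit adds ~6 dB of dynamic range: a linear A/D converter
# maps a doubled voltage to a doubled sample value (one more bit), and
# doubling the amplitude corresponds to 20*log10(2) ~= 6.02 dB.
import math

def dynamic_range_db(bits: int) -> float:
    """Theoretical dynamic range of an N-bit linear quantizer in dB."""
    return 20 * math.log10(2 ** bits)  # = bits * 20*log10(2) ~= 6.02 * bits

for bits in (16, 24):
    print(f"{bits}-bit: {dynamic_range_db(bits):.1f} dB")
# 16-bit: ~96 dB, 24-bit: ~144 dB
```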
And besides, in double-blind tests people have not been able to tell the original analog signal from a 16-bit/44.1 kHz version, which pretty much proves it is good enough for the final output. Then again, hard disk space is cheap, and if 24-bit/96 kHz makes people happy, there is no harm done. Just do not rationalize it to me with the wrong arguments.