If a particular DAC does a better real-world job at one sampling rate than another, does that mean we should attribute the difference to the sampling rates or to the DAC itself?
It's a very complex chain of causality. I'm not trying to be argumentative, only warning against reaching a conclusion without considering the full complexity of the problem.
Edit: ...and I suspect that it is "something other than the playback sample rate itself" that is responsible for the very real differences you are hearing.
Bingo! Looking at just sample rate ignores a ton of other aspects, and end-user playback is a totally different use case from professional processing needs.
My studio experience with the converters I had: my previous set (2005 tech) made cymbals and acoustic instruments sound very indistinct at 44.1/48 kHz, cymbals particularly trashy, but sounded fine at higher rates. My newer converters (2014 tech) don't care so much; they still sound better at 88.2 kHz than at 48, but I have to work to hear it. Same sample rates, different equipment. And there was one converter that was a long-time studio standard and seemed to sound best at 44.1 kHz, if I recall the conversations correctly.
On Apple devices, QuickTime runs everything in the background for iTunes, converting various rates on the fly to whatever your master clock is set to; it does not switch rates to match the native file rate. The same is true for many home music servers. Experiment: listen to a track at its native rate, then at others, and see if you hear any difference and whether you prefer one. If you prefer a high master rate on a file with a low native rate, it most likely means the higher rate sounds better on that particular equipment. Or vice versa.
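If you'd rather take the player out of the loop and do the conversion offline, here's a minimal sketch of that experiment (assuming Python with the soundfile and scipy packages; the filename is a placeholder):

```python
import soundfile as sf
from scipy.signal import resample_poly

# Render one track at several rates so you can A/B them at a fixed
# master clock. "track.wav" is a placeholder for your source file.
data, native_rate = sf.read("track.wav")

for target_rate in (44100, 48000, 88200, 96000):
    if target_rate == native_rate:
        converted = data  # already at this rate, write it out unchanged
    else:
        # resample_poly reduces the up/down ratio internally,
        # so the raw rates can be passed directly
        converted = resample_poly(data, target_rate, native_rate, axis=0)
    sf.write(f"track_{target_rate}.wav", converted, target_rate)
```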
Take a high-rate file and downconvert it, then compare. Pick something like pretty clean acoustic music. If you pick a loud rock-band track, you may well prefer the lower rate: some people get more sense of presence from it, and that's preferable on something that's supposed to kick you in the stomach anyway.
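A fairer version of that test is to downconvert and then upsample back, so both files hit the DAC at the same clock rate and any difference you hear is the rate conversion itself rather than the playback rate. A sketch under the same assumptions as above (placeholder filename, hypothetical high-rate source):

```python
import soundfile as sf
from scipy.signal import resample_poly

# Round-trip: native rate down to 44.1 kHz and back up again.
# The original and the round-tripped file now play at the same master
# rate, isolating the sample-rate conversion from the playback rate.
data, rate = sf.read("acoustic_96k.wav")           # hypothetical 96 kHz file
down = resample_poly(data, 44100, rate, axis=0)    # native -> 44.1k
back = resample_poly(down, rate, 44100, axis=0)    # 44.1k -> back to native
sf.write("acoustic_roundtrip.wav", back, rate)     # A/B against the original
```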
And generally, again, for adult humans it's not about frequency response. Phase, intermodulation, and stereo image cue timing at high frequencies are more what it's about. Intermod you probably won't hear at all on a rock band track, but you sure as hell will on a solo grand piano track if there's enough of it. Phase distortion in the top will change the perception of harmonics on acoustic stringed instruments; something like a string quartet can be most revealing. An old rock band tape may just sound more or less forward in the top end from phase distortion, and be more of an 'eh' difference.
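If you want to see why intermod lands where it does, here's a quick numeric illustration (plain numpy; the cubic term is just a stand-in for any converter or analog-stage nonlinearity, not a model of a specific device): two clean tones sprout new frequencies at sums and differences of the originals, which is exactly what makes a solo piano turn gritty.

```python
import numpy as np

fs = 96000
t = np.arange(fs) / fs             # one second of samples
f1, f2 = 5000.0, 6000.0            # two clean tones
x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
y = x + 0.05 * x**3                # weak third-order nonlinearity

# Exact 1 Hz bins, so the products show as clean peaks
spectrum = np.abs(np.fft.rfft(y)) / len(y)
freqs = np.fft.rfftfreq(len(y), 1 / fs)
print(freqs[spectrum > 1e-3])
# Expect peaks at 4000, 5000, 6000, 7000, 15000, 16000, 17000, 18000 Hz:
# the original tones plus 2*f1-f2, 2*f2-f1, 3*f1, 2*f1+f2, f1+2*f2, 3*f2.
```

Note that the 4 kHz and 7 kHz products fall right in the middle of the audible band, nowhere near the original tones, which is why intermod is so much more objectionable than plain harmonic distortion on exposed material.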