OK, I think I may understand your concern. "Lossy" compression requires low-pass filtering of some kind; the designers of a codec choose a high-frequency limit that seems appropriate to them. In general it makes sense to filter at 20 kHz (or perhaps a little bit higher if you want to look better on a spec sheet), since any signal energy at higher frequencies will contribute nothing whatsoever that is audible to humans. If the frequency limit is set higher than necessary, the additional signal energy simply wastes space in the channel.
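To put a rough number on that waste, here is a sketch (the 96 kHz sample rate, the 1 kHz tone, and the noise floor are all invented for illustration): even when a high-rate file carries content all the way out to Nyquist, the energy above 20 kHz can be a vanishing fraction of the total.

```python
# Sketch: what fraction of a wideband signal's energy sits above 20 kHz?
# Hypothetical 96 kHz test signal: an audible 1 kHz tone plus
# low-level wideband noise extending to Nyquist (48 kHz).
import numpy as np

fs = 96_000                      # sample rate, Hz
t = np.arange(fs) / fs           # one second of samples
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 1000 * t) + 0.01 * rng.standard_normal(fs)

spectrum = np.fft.rfft(x)
freqs = np.fft.rfftfreq(fs, 1 / fs)
power = np.abs(spectrum) ** 2

above = power[freqs > 20_000].sum() / power.sum()
print(f"Fraction of energy above 20 kHz: {above:.4%}")
```

For this toy signal the answer is a tiny fraction of a percent; every bit a codec spends preserving it is a bit taken away from the part you can hear.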
Now, poorly implemented low-pass filters can produce audible side effects, so it's important to use good ones, even though they involve more processing and increased signal latency. The absence of (inaudible) frequencies above 20 kHz does not, in itself, indicate a problem, however. Ultimately, whether a given codec sounds good or bad can only be answered by careful listening.
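The trade-off can be made concrete with SciPy's FIR design tools (the sample rate and tap counts below are illustrative, not a recommendation): a steeper low-pass needs more taps, and a linear-phase FIR delays the signal by (ntaps - 1) / 2 samples.

```python
# Sketch of the quality/latency trade-off for a 20 kHz low-pass at 96 kHz.
import numpy as np
from scipy import signal

fs = 96_000          # sample rate, Hz (illustrative)
cutoff = 20_000      # Hz

def attenuation_db(ntaps, f_test=20_500):
    """Attenuation in dB at f_test for a linear-phase FIR low-pass."""
    taps = signal.firwin(ntaps, cutoff, fs=fs)
    w, h = signal.freqz(taps, worN=16384, fs=fs)
    return 20 * np.log10(np.abs(h[np.searchsorted(w, f_test)]))

for ntaps in (63, 1023):
    latency_ms = (ntaps - 1) / 2 / fs * 1000
    print(f"{ntaps:4d} taps: {attenuation_db(ntaps):6.1f} dB "
          f"just above cutoff, latency {latency_ms:.2f} ms")
```

The short filter is cheap and fast but leaks well past the cutoff; the long one is far steeper at the cost of more arithmetic and several milliseconds of delay.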
I believe you when you say that you sometimes get something above 20 kHz in your recordings. But if you are doing the usual kind of distant or semi-distant recording through an amplified system in a public venue, whatever you pick up above about 10 to 12 kHz is mostly noise and distortion. Typically we preserve that garbage because it was part of the actual event, and trying to remove it could cause more problems than it would solve. Most people don't perceive the high-frequency garbage as a problem anyway; they're used to hearing it (or to not hearing it, if they've already lost those frequencies, as is quite common).
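If you want to see how much energy your own recordings actually carry up there, an averaged spectrum makes it plain. A sketch with an invented stand-in signal (a few harmonic partials plus hiss); in practice you would load your recording in place of it:

```python
# Average the spectrum with Welch's method, then ask what fraction of
# the total energy lies at or above 12 kHz. The stand-in signal below
# is hypothetical; substitute samples from a real recording.
import numpy as np
from scipy import signal

fs = 48_000
rng = np.random.default_rng(1)
n = 10 * fs
t = np.arange(n) / fs
# stand-in for a real capture: harmonic partials plus broadband hiss
x = sum(np.sin(2 * np.pi * f * t) / f for f in (220, 440, 880, 1760))
x = 220 * x + 0.02 * rng.standard_normal(n)

f, psd = signal.welch(x, fs=fs, nperseg=4096)
above_12k = psd[f >= 12_000].sum() / psd.sum()
print(f"Energy at/above 12 kHz: {above_12k:.2%}")
```

For typical program material the number is small; if your plots show a fat band of energy up there, that is a hint it is hiss and distortion rather than music.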
But significant signal energy above the human audible range is a rather different matter. In no case can humans hear it, but a lot of audio equipment isn't designed to handle it, so its presence can cause audible distortion. Unfortunately this distortion leads some people to conclude, incorrectly, that they're hearing signals which are actually completely inaudible to them. (Even some studies published in the AES Journal have suffered from this error: when test subjects hear a difference between playbacks that include energy above 20 kHz and playbacks in which that energy has been filtered out, someone needs to capture and analyze the sound the listeners are actually hearing, to find out whether it contains significant distortion artifacts below 20 kHz.)
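The mechanism is easy to demonstrate numerically. In this sketch, two purely ultrasonic tones (24 and 27 kHz, both inaudible) pass through a toy nonlinearity standing in for an imperfect playback chain, and the quadratic term folds a 3 kHz difference tone into the audible band; every parameter here is invented for illustration.

```python
# Intermodulation sketch: ultrasonic-only input, audible-band output.
import numpy as np

fs = 96_000
t = np.arange(fs) / fs
x = 0.4 * np.sin(2 * np.pi * 24_000 * t) + 0.4 * np.sin(2 * np.pi * 27_000 * t)

# toy model of a mildly nonlinear amplifier/speaker stage
distorted = x + 0.2 * x**2

def band_power(sig, lo, hi):
    """Total spectral power between lo and hi Hz."""
    spec = np.abs(np.fft.rfft(sig)) ** 2
    f = np.fft.rfftfreq(len(sig), 1 / fs)
    return spec[(f >= lo) & (f <= hi)].sum()

# energy near the 3 kHz difference tone, before vs. after the nonlinearity
print("input, 2.9-3.1 kHz:    ", band_power(x, 2_900, 3_100))
print("distorted, 2.9-3.1 kHz:", band_power(distorted, 2_900, 3_100))
```

The input has essentially nothing near 3 kHz; the distorted version has a clear tone there. A listener in that situation really does hear a difference, but what they hear is the distortion product, not the ultrasonic content itself.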
In the case you show, there is an enormous rolloff that is probably in the program material itself. I say that because it is already quite steep in the midrange. By 20 kHz there would have been nothing significant to filter out anyway. I see no point in mourning what isn't there, especially when we couldn't have heard it if it had been there. The part of a recording to be concerned about is the part that playback systems can reproduce and that human beings can hear.
--best regards