I'm not sure whether you're talking about amateur recording of amplified events or about recording in general--but in either case there are plenty of different reasons and they all pile up. In my experience, clear live recordings with no sense of distortion are by far the exception rather than the rule.
First of all I'd have to say that there's a big market for certain kinds of distortion. Some of the comments in this thread give evidence of that. At least in some forms, distortion is part of what quite a few people seem to want to hear. I could even say that I'm part of that camp myself because there are some recordings that I think just sound really good, subjectively, even though I know that if I had been there in the hall when they were being recorded, that isn't at all what I would have heard; the recordings grab me more, are more involving, than the real sound would have been in the room at the time.
But going back to distortion as a defect: There are many venues in which the sound loses clarity as it gets louder, and performances where driving the room to and beyond that point is very much part of the esthetic intention. Again, when you're in the room, the interaction between volume and distortion makes organic sense; if you weren't there and you're hearing the effect accurately rendered on a recording, it doesn't necessarily make as much sense or feel as good to hear--unless you can imagine what the experience in the space itself might have been like AND you happen to like that effect. Which evidently you don't, and usually neither do I, although sometimes, yeah, sure.
From a strictly technical viewpoint with recording equipment, it's very rare that sound alone (as opposed to wind or solid-borne vibration, a/k/a shock) will overload a modern condenser microphone or even push it into its non-linear region. The exceptions are microphones of the past generation or so that have been deliberately designed (as a kind of "retro" thing) to produce audible, gradual distortion as a "tone thing" at levels that performers actually reach, and people who pay the $$$$ to rent or buy actual "vintage" microphones (or lower-cost recreations of them) that didn't have such high overload levels to begin with. It's certainly possible to generate 120 dB SPL at close range by screaming, or by close-miking a very loud amp, and that's close to the limit for many older microphones that had output transformers and/or vacuum tube circuitry. The Beatles are probably the most famous example of people who took high-quality, professional microphones at Abbey Road Studios and elsewhere (Neumann, AKG) and deliberately pushed them past their design specifications to produce effects. The key thing, though, is that they listened to the results and kept pushing in the directions that produced interesting results, rather than just randomly abusing stuff and calling it art.
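To put rough numbers on that, here's a quick back-of-envelope sketch of how hot a condenser mic's output gets at high SPL. The sensitivities and the 0 dBu reference are generic assumptions, not the specs of any particular microphone:

```python
# Rough sketch: converting dB SPL to pressure, then to mic output voltage.
# Sensitivities here are illustrative assumptions, not real microphone specs.
import math

P_REF = 20e-6  # 0 dB SPL reference pressure, in pascals

def spl_to_pascals(spl_db: float) -> float:
    """Convert sound pressure level (dB SPL) to RMS pressure in pascals."""
    return P_REF * 10 ** (spl_db / 20)

def mic_output_volts(spl_db: float, sensitivity_mv_per_pa: float) -> float:
    """RMS output voltage of a mic with the given sensitivity at the given SPL."""
    return spl_to_pascals(spl_db) * sensitivity_mv_per_pa / 1000

spl = 120.0  # a scream or a loud amp right at the capsule
for sens in (10.0, 20.0, 40.0):  # plausible condenser sensitivities, mV/Pa
    v = mic_output_volts(spl, sens)
    dbu = 20 * math.log10(v / 0.775)  # relative to 0 dBu = 0.775 V RMS
    print(f"{sens:>4.0f} mV/Pa at {spl:.0f} dB SPL -> {v * 1000:6.0f} mV RMS ({dbu:+.1f} dBu)")
```

A few hundred millivolts coming straight off the capsule is exactly the territory where those older transformer and tube output stages (and, as I get to below, cheap recorder inputs) start running out of headroom.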
And as has been mentioned, all analog tape has gradually increasing distortion as you push the level higher (the effect is frequency-dependent as well). The esthetic that likes to push analog tape to the point where this becomes clearly audible as a kind of compression and distortion effect is very widely established in some genres of music, and some such people seem convinced that everyone else agrees with them, and that they're basically doing God's work. They often seem to get off on how much they're "breaking the rules"--they're rebels. The thing is, it's not easy to design electronics to drive the record heads of a tape recorder to extremely high levels. Ironically the problem is to keep distortion from occurring in those electronics--to have them stand back and let the tape do all the distorting. But over the years, as newer tape formulations came along that could take higher and higher levels, older decks and older circuit designs often found themselves out of headroom. Pushing tape levels to extremes doesn't work anywhere near equally well on all recorders, and there are recordings in which the "tape squash" effect has backfired.
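If you want a feel for what "gradually increasing distortion" means, here's a very crude sketch. Real tape saturation is level- and frequency-dependent and has hysteresis, so a plain tanh curve is only a stand-in for the shape of the effect, not a tape model:

```python
# Crude illustration (not a tape model): a soft-saturation curve compresses peaks
# gradually as the drive goes up, instead of clipping suddenly.
import numpy as np

def soft_saturate(signal: np.ndarray, drive: float) -> np.ndarray:
    """Push the signal into a tanh curve, then normalize back to full scale."""
    return np.tanh(drive * signal) / np.tanh(drive)

t = np.linspace(0, 1, 48000, endpoint=False)
tone = np.sin(2 * np.pi * 100 * t)  # 100 Hz test tone at full scale, 1 second at 48 kHz

for drive in (0.25, 1.0, 3.0):  # low drive is nearly linear, high drive is obvious squash
    out = soft_saturate(tone, drive)
    spectrum = np.abs(np.fft.rfft(out))    # 1 Hz per bin with this length and rate
    h1, h3 = spectrum[100], spectrum[300]  # fundamental and third harmonic
    print(f"drive {drive:4.2f}: third harmonic at {20 * np.log10(h3 / h1):6.1f} dB re the fundamental")
```

The point is simply that the distortion creeps up smoothly with drive level rather than appearing all at once, which is exactly why it doubles as a compression effect.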
I was an engineer at a major classical record label during the years of transition to digital recording and CD production. The master tapes used for vinyl record production were copies of the approved, equalized and limited/compressed tapes, which were themselves made from copies of the original session or event ("job") tapes that had been cut together. That's what a "master tape" is--not the original by far, but a copy of a copy of a copy at the very least. Analog tape copying, even with Dolby professional noise reduction and reasonable maximum levels on the tape, is not a distortion-free process, and to be honest, those master tapes sounded pretty bad to my ears, especially in terms of distortion. But they were good for what they were being used for, and when they wore out, as they inevitably did, they could be replaced by making another copy off of the authorized copy. The thing was, they were totally unsuitable for CD mastering, in which the end user hears (with only minor variations) just what was on those tapes. No wonder that when people compared early CDs with the LPs issued from the same masters, they thought the records sounded better. It wasn't so much the shortcomings of the CD process as the shortcomings of LP production and playback that everyone had spent decades learning to work around or adapt to.
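Just to put the copy-of-a-copy point in arithmetic terms (with an assumed, round per-pass figure, not anything I measured on those tapes): each analog generation adds its own noise on top of whatever it inherits.

```python
# Illustrative arithmetic only: how noise stacks up over analog copy generations.
# The -65 dB per-pass figure is an assumed round number, not a measurement of any
# real tape or noise-reduction system.
import math

per_pass_noise_db = -65.0                      # assumed noise added by one tape pass
per_pass_noise_power = 10 ** (per_pass_noise_db / 10)

for generations in (1, 2, 3, 4):               # original, copy, copy of a copy, ...
    total_power = generations * per_pass_noise_power  # uncorrelated noise powers add
    print(f"{generations} generation(s): noise floor around {10 * math.log10(total_power):.1f} dB")
```

Distortion accumulates with each pass too, which this little sketch doesn't model, and that was the part I could actually hear on those production masters.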
Microphones are a whole other layer of this. Inexpensive, consumer-oriented recording equipment used to have inputs designed for inexpensive, consumer-oriented dynamic (moving-coil) microphones. When people caught on that those microphones were the sound-quality bottleneck and started buying semi-pro or pro-quality microphones, the inputs of their recorders could be overloaded by the much higher sensitivity (and hence output level) of the better microphones. Even a machine as recent as the Sony PCM-F1 in the 1980s had this problem: a recorder that was miles beyond any cassette deck or even most studio open-reel decks, but with unbalanced mike inputs, no phantom powering, and an overload limit that nearly any professional condenser microphone could exceed fairly easily. The overload couldn't be prevented by turning down the record level controls, because it occurred in the very first stage of amplification/signal conditioning in the preamp, ahead of the point where any level control acts.
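Here's that gain-staging problem in miniature, with made-up numbers standing in for the mic levels and the input stage's clip point:

```python
# Sketch of the gain-staging problem, with made-up numbers; none of these figures
# are real PCM-F1 or microphone specs.

def first_stage(v_in: float, clip_volts: float = 0.1) -> float:
    """Hypothetical input stage that hard-clips at +/- clip_volts, before any level control."""
    return max(-clip_volts, min(clip_volts, v_in))

def record_chain(v_mic: float, level_knob: float) -> float:
    """The level control comes after the first stage, so it only scales the damage."""
    return level_knob * first_stage(v_mic)

v_dynamic = 0.02    # ~20 mV peak, a low-output consumer dynamic mic (assumed)
v_condenser = 0.40  # ~400 mV peak, a sensitive condenser on a loud source (assumed)

for name, v in (("dynamic", v_dynamic), ("condenser", v_condenser)):
    out = record_chain(v, level_knob=0.25)
    state = "CLIPPED" if first_stage(v) != v else "clean"
    print(f"{name:9s}: mic {v * 1000:5.0f} mV -> recorder sees {out * 1000:5.1f} mV ({state})")
```

Turning the level knob down just scales an already-clipped signal.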
I hope that gives you some ideas to work with.