I promise this will get to an interesting and useful place after a moment.
Do "kids today" know how stereo FM radio works? FM broadcasting was introduced as a mono-only medium, and stereo was "grafted on" as a retrofit, in a way very similar in principle to M/S. To simplify somewhat: At the radio station, the left and right program channels are fed into a matrix. One output of the matrix is L+R (the "sum"); the other is L-R (the "difference"). The "sum" signal is what would modulate the station's carrier frequency directly if it were a mono station--as they all were originally; then came several transitional years during which only some FM stations were stereo, while the rest remained mono and converted only gradually.
If the station was broadcasting in stereo, the "difference" information would modulate a second, somewhat higher-frequency subcarrier, and the result would then become part of the signal that modulated the main carrier--all in real time, with analog processing, so that everything stayed synchronized and simultaneous. Finally, a 19 kHz "stereo pilot" tone was added in at a lowish but constant level (and possibly a secondary program channel, which if present could be entirely unrelated; I'll leave that mess out, though).
A mono receiver simply demodulates the received signal, chops off everything above 15 kHz, and plays that back as mono; no problem.
A stereo receiver demodulates the signal and likewise chops off everything above 15 kHz (which produces L+R, a/k/a mono, as above), but also checks for a 19 kHz stereo pilot signal. If it's present, the receiver knows to lock onto the secondary subcarrier (at 38 kHz in the composite baseband--twice the pilot frequency--as I recall) and demodulate that as well, producing L-R, a/k/a the difference channel. It then matrixes the two received signals together: (L+R) + (L-R) = 2L, while (L+R) - (L-R) = 2R; and voila, you have the original stereo signal back again.
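To make the station-side picture concrete, here's a rough sketch of building that composite ("multiplex") baseband in code. This is purely illustrative--the function name, sample rate, and pilot level are my own choices, and the levels are not to any broadcast spec:

```python
import math

FS = 192_000  # sample rate chosen high enough to carry the 38 kHz subcarrier

def fm_stereo_composite(left, right, pilot_level=0.09):
    """Illustrative stereo multiplex baseband: L+R at audio frequencies,
    L-R amplitude-modulating a 38 kHz subcarrier (carrier suppressed),
    plus the 19 kHz pilot tone at a low but constant level."""
    out = []
    for n, (l, r) in enumerate(zip(left, right)):
        t = n / FS
        sub = math.cos(2 * math.pi * 38_000 * t)    # 38 kHz = 2 x pilot
        pilot = math.cos(2 * math.pi * 19_000 * t)  # phase-locked pilot
        out.append((l + r) + (l - r) * sub + pilot_level * pilot)
    return out
```

A mono receiver low-passes this and hears only the L+R term; a stereo receiver additionally recovers L-R from around 38 kHz, as described below.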
As this illustrates, ANY two-channel signal--in fact anything that you can stuff into two channels, whether it's the left and right halves of the same program, or two completely unrelated signals (!)--can be matrixed into L+R (sum) and L-R (difference), then transmitted and/or recorded, and finally dematrixed back to the original signals (e.g. L and R) again on the receiving end. So this is certainly true for X/Y microphone signals. But in principle it doesn't even need to be two coincident microphones, or even microphones that are (or were) at the same concert at the same time!
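The matrix/dematrix round trip is just a few lines of arithmetic. A minimal sketch (function names are mine; the divide-by-two in the dematrix absorbs the factor of 2 from the equations above):

```python
def matrix(left, right):
    """Encode two channels into sum (L+R) and difference (L-R)."""
    mid = [l + r for l, r in zip(left, right)]
    side = [l - r for l, r in zip(left, right)]
    return mid, side

def dematrix(mid, side):
    """Decode back: (M+S)/2 = L, (M-S)/2 = R."""
    left = [(m + s) / 2 for m, s in zip(mid, side)]
    right = [(m - s) / 2 for m, s in zip(mid, side)]
    return left, right

L = [0.1, -0.3, 0.5]
R = [0.2, 0.4, -0.1]
M, S = matrix(L, R)
L_out, R_out = dematrix(M, S)  # recovers L and R (up to float rounding)
```

Note that nothing here cares whether the two inputs are related--which is exactly the point made above.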
However, if you do use this trick (matrixing / dematrixing) with a coincident (X/Y) pair of microphones that are at the same concert at the same time, then the nice thing about the L+R sum is that it is (if you've set up appropriately and the acoustics gods are with you) a listenable mono signal in case there's a use for that. Conversely, back in the 1950s when recording engineers were first learning to record stereo for FM broadcast and classical record production, they were all experienced as mono engineers (since that's all there was for many years), and M/S allowed them to continue using those skills while producing mono-compatible stereo recordings. This method was used very widely in Europe, while spaced-microphone stereo became the norm in the U.S. due to the influence of Bell Labs.
Back to our world today: Any X/Y recording may be translated (matrixed) to M/S "signal format" and vice versa without limit. You can derive the sum and difference from any X/Y recording, then (if you like) process the M or S channel separately, then recombine them to L/R stereo. I typically like to boost the low frequencies in the "S" channel to improve spaciousness; it doesn't really matter whether I'm starting from an X/Y or an M/S recording, although starting from M/S saves me a processing step.
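That workflow--matrix to M/S, treat the S channel, recombine--can be sketched as follows. I'm assuming a crude one-pole low shelf here purely for illustration; in practice you'd reach for a proper EQ, and the cutoff and gain values are arbitrary placeholders:

```python
import math

def low_shelf_boost(signal, fs, cutoff_hz, gain_db):
    """Crude low shelf: add a scaled one-pole lowpass of the signal
    back onto itself, boosting everything below roughly cutoff_hz."""
    a = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / fs)
    extra = 10.0 ** (gain_db / 20.0) - 1.0  # linear gain above unity
    out, lp = [], 0.0
    for x in signal:
        lp += a * (x - lp)         # one-pole lowpass state
        out.append(x + extra * lp)
    return out

def widen_lows(left, right, fs, cutoff_hz=300.0, gain_db=3.0):
    """X/Y (or any L/R) -> M/S, boost the lows in S, -> back to L/R."""
    mid = [l + r for l, r in zip(left, right)]
    side = low_shelf_boost([l - r for l, r in zip(left, right)],
                           fs, cutoff_hz, gain_db)
    return ([(m + s) / 2 for m, s in zip(mid, side)],
            [(m - s) / 2 for m, s in zip(mid, side)])
```

Wherever the two channels are identical, S is zero and the processing leaves the signal untouched--only the low-frequency *differences* between the channels get lifted.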
--Dr. Noah said: > Speaking of mid/side, perhaps its greatest advantage over x/y is that the angle can be changed in post.
X/Y and M/S are equivalent in principle--just different "encodings" of the same information. There's really nothing that you can do in one that you can't do in the other.
The technique that you're referring to involves changing the overall proportion of M signal to S signal. And when you do that, you not only change the stereo image width, you also change the amount of reverberation in the stereo recording. If you increase "S" in post, it's as if you've time-traveled back and spread your original X/Y microphones farther apart--but at the same time, altered their pickup pattern toward a greater degree of reverberation and a lower proportion of direct sound. You can't have one without the other in conventional M/S <-> X/Y.
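In code, that proportion change is a single scale factor on the side signal (a sketch; the function name and the `width` convention are mine):

```python
def set_width(left, right, width):
    """Re-matrix through M/S with the side signal scaled by `width`.
    width = 1.0 leaves the image alone; > 1 widens, < 1 narrows
    (0.0 collapses to mono)--and the apparent reverberation is
    scaled right along with the width, as noted above."""
    out_l, out_r = [], []
    for l, r in zip(left, right):
        m = (l + r) / 2.0
        s = (l - r) / 2.0 * width
        out_l.append(m + s)
        out_r.append(m - s)
    return out_l, out_r
```

Note that the mono sum L+R is unchanged for any `width`; only the difference channel is scaled, which is why the reverberation (which lives largely in S) rises and falls with the image width.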
In my experience this has been a real limitation; only one narrow range of M-to-S gain ratio settings yields a plausible degree of reverberation for the image width that you get with it. So you can sometimes use this approach to improve an X/Y recording up to a point--but it won't give you independent control over these two important parameters of the recording separately from one another. If you want fully independent control over the stereo image width AND the amount of reverberance in the recording, you need at least three microphones--either "double M/S" as Schoeps calls it (a regular M/S pair plus a separate, rear-facing capsule with its own recording and processing channel) or so-called "horizontal Ambisonics". (Or real Ambisonics, of course.)
--best regards