There is no difference between using a "stereo balance" control on a two-channel stereo file and splitting the file into two mono files, adjusting the level of one relative to the other, and then re-combining them into a stereo file.
"Pan" typically refers to panoramic placement of a monophonic source across a two-channel stereo bus. The monophonic source is mult'd (sent to both channels) and the relative levels of the two are adjusted using the pan control. "Stereo balance" is different: it adjusts the relative levels of two "separate-but-associated" sources (the left and right channels of the stereo source) rather than a single monophonic source.
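The distinction can be sketched in a few lines of code. This is a minimal illustration, not any DAW's actual implementation; the function names and the choice of a constant-power law for the pan example are mine:

```python
import math

def pan_mono(sample, pan):
    """Pan one mono sample across a stereo bus.

    The single source feeds BOTH channels; the pan control sets
    their relative gains (here via a constant-power pan law).
    pan: -1.0 = hard left, 0.0 = center, +1.0 = hard right.
    """
    theta = (pan + 1.0) * math.pi / 4.0   # maps pan range to 0..pi/2
    return (math.cos(theta) * sample, math.sin(theta) * sample)

def balance_stereo(left, right, balance):
    """Balance an existing L/R pair.

    balance > 0 attenuates the left channel; balance < 0 attenuates
    the right. Neither channel is ever mixed into the other -- only
    the relative level of the two existing sources changes.
    """
    if balance > 0:
        left *= (1.0 - balance)
    elif balance < 0:
        right *= (1.0 + balance)
    return (left, right)
```

Note that `balance_stereo` is exactly the "split into two mono files, trim one, recombine" operation described above, expressed per-sample.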
Modern DAWs offer a choice of "pan laws", which vary the shape of the attenuation curves and the level of each channel where they cross at the center as the source is "faded" from left to right or vice versa. That shouldn't matter with regard to stereo balance.
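To make the pan-law difference concrete, here is a hedged sketch comparing two common laws. The -6 dB and -3 dB center figures are standard for these two laws, but the function names are mine and real DAWs offer further variants (e.g. -4.5 dB compromise laws):

```python
import math

def linear_pan(pan):
    """Linear (-6 dB center) law: the two gains sum to 1,
    so a mono-summed signal stays at constant amplitude."""
    return (1.0 - pan) / 2.0, (1.0 + pan) / 2.0

def constant_power_pan(pan):
    """Constant-power (-3 dB center) law: the SQUARED gains sum
    to 1, so perceived loudness stays roughly constant."""
    theta = (pan + 1.0) * math.pi / 4.0
    return math.cos(theta), math.sin(theta)

def to_db(gain):
    """Convert a linear gain to decibels."""
    return 20.0 * math.log10(gain)
```

At center (`pan = 0.0`), the linear law puts each channel at 0.5 (about -6 dB), while the constant-power law puts each at about 0.707 (about -3 dB); hard left/right are identical in both. That center-crossing level is exactly what the pan-law setting selects.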
Some unusual and uncommon pan laws introduce a slight delay in addition to a pure level change as the monophonic source is panned across the image. Such a pan law introduces a frequency-dependent phase difference between channels, which may present mono-compatibility issues or phase interactions if the stereo source is recombined to mono again. They are analogous to recording with near-spaced microphone configurations versus coincident configurations. But those pan laws are not common, you probably won't come across them unless you go looking for them, and they address monophonic source panning rather than stereo balance anyway. I mention them only as a technical disclaimer: panning in certain unusual setups can introduce phase differences, but normally it does not.
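The mono-compatibility problem with a delay-based pan is easy to quantify. Summing a signal with a delayed copy of itself gives a comb filter: the summed magnitude is |1 + e^(-j2πfΔ)| = 2|cos(πfΔ)|, with complete nulls at f = (2k+1)/(2Δ). A small sketch (the 0.5 ms delay is just an illustrative value, comparable to a near-spaced mic pair):

```python
import math

def mono_sum_gain(freq_hz, delay_s):
    """Magnitude response of summing a signal with a copy of itself
    delayed by delay_s seconds: |1 + exp(-j*2*pi*f*d)| = 2|cos(pi*f*d)|."""
    return 2.0 * abs(math.cos(math.pi * freq_hz * delay_s))

delay = 0.0005                      # assumed 0.5 ms interchannel delay
first_null = 1.0 / (2.0 * delay)    # lowest frequency fully cancelled
```

With a 0.5 ms interchannel delay, the first null lands at 1 kHz, with further nulls at every odd multiple; that is the "phase interaction" penalty paid when such a stereo image is folded back to mono. A pure level-based pan has Δ = 0, so the gain is flat and mono-safe.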
I always adjust the stereo balance of all my recordings by ear for the best subjective effect anyway, so a close sensitivity match between mics is nice to have, but not a deal-breaker. If the frequency responses of the mics are not well matched, though, that is a deal-breaker, since it is far more difficult to correct.