Lots of clocks. All ticking at the same nominal rate, yet in the real world, slightly different unless linked.
There are three clocks in the scenario above: the recorder's, the soundboard's, and that of the computer running the DAW software. Upon transfer and subsequent playback in the DAW, both sources will be played back at the computer's rate. Neither will play back at precisely the rate at which it was recorded on its original device with its own clock. But the absolute error is small enough that only the relative difference between the two sources is consequential, and even that only becomes apparent over a long enough recording. So we pick one source and stretch or shrink the other, slightly altering its playback rate, to match.
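As a quick sketch of the arithmetic involved (Python, with made-up numbers): if the two sources are aligned at the start and one is measured to lag the other by 120 ms at the one-hour mark, the required stretch ratio and the relative clock error fall out directly.

```python
# Hypothetical measurement: after aligning the two files at the start,
# one file is observed to lag the other by 120 ms at the 60-minute mark.
recording_len_s = 60 * 60      # 3600 s of program material
drift_s = 0.120                # measured offset at the end, in seconds

# Ratio by which to time-stretch the longer (lagging) file so both
# files span the same duration.
stretch_ratio = recording_len_s / (recording_len_s + drift_s)

# Equivalently, the relative clock error in parts per million.
ppm = drift_s / recording_len_s * 1e6

print(f"stretch ratio: {stretch_ratio:.8f}, relative error: {ppm:.1f} ppm")
```

A drift on the order of tens of ppm is typical of the mismatch between two free-running consumer-grade clocks, which is why it only matters over long recordings.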
We could adjust both, shrinking one and stretching the other to "meet in the middle," but that's twice as much work and makes no significant difference. Practically, when deciding which to alter, it might be best to choose whatever you deem the "secondary source," the one contributing less to the end result, thereby avoiding any chance of introducing artifacts into the primary source. But it probably doesn't matter, as most stretching/shrinking routines are essentially audibly transparent these days.
The accuracy of clocks in relatively inexpensive gear has generally improved over time, so this problem tends to be less egregious than it used to be, but the fundamental relationship remains whenever mixing sources of significant length that were recorded using separate clocks.
Interesting side note- If you were to play back the file recorded on the soundboard from the soundboard, while simultaneously playing back the file recorded on the mix-pre from the mix-pre, and were able to sync them up sufficiently at the start, they would stay in sync (close enough for practical purposes) through the length of the entire recording without stretching or shrinking either one. In that case each file is being played back using the same clock with which it was recorded, which mostly negates the differential in clock rate between them.
Real world example- I used to record four channels using two Edirol R09s, each capturing two channels. In the DAW, the two file sets required initial time alignment of course, but also the stretching/shrinking of one to match the other over the course of a long recording. But I'd also play the files directly from the recorders themselves, with both playing simultaneously, which required getting the initial alignment correct by ear via short double jabs to the play/pause button of whichever was slightly ahead of the other. That in itself was good ear training. But more on point, once aligned in that way, playback from both recorders would remain in sync for the entire length of the recording, because each recorder was playing back at the same rate at which it originally recorded, even though that rate was slightly different for each recorder. The difference between the two canceled out because each used the same clock for playback that it had originally used for recording.
To really confirm and illustrate this to myself, once after doing that and playing the files in good sync all the way through, I swapped SD cards between the two recorders and tried it again. Now each recorder was playing back the file that was recorded by the other. This served to aggravate the slight rate difference between the clocks rather than eliminate it. Sure enough, rather than staying in sync all the way through, the files fell out of sync relatively quickly. In fact, they did so twice as quickly as when both were transferred to and played back in the DAW, using a common playback clock for both files.
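The arithmetic behind that doubling can be sketched in a few lines of Python. The ppm clock errors below are invented for illustration; only the relationships between the three cases matter:

```python
# Illustrative clock errors for two recorders, in parts per million
# (made-up values; real consumer clocks are often within tens of ppm).
err_a_ppm = 20.0    # recorder A runs 20 ppm fast
err_b_ppm = -10.0   # recorder B runs 10 ppm slow

def drift_ms(record_ppm, play_ppm, duration_s):
    """Apparent timing drift, in ms, when material recorded at one
    clock rate is played back at another, over duration_s of audio."""
    return (record_ppm - play_ppm) * 1e-6 * duration_s * 1e3

hour = 3600
# Each recorder plays its own file: record and playback clocks match.
same = drift_ms(err_a_ppm, err_a_ppm, hour) - drift_ms(err_b_ppm, err_b_ppm, hour)
# DAW plays both files on its own clock (taken as exactly nominal here);
# the relative drift is the same regardless of the DAW clock's own error.
daw = drift_ms(err_a_ppm, 0.0, hour) - drift_ms(err_b_ppm, 0.0, hour)
# SD cards swapped: each file plays on the *other* recorder's clock.
swapped = drift_ms(err_a_ppm, err_b_ppm, hour) - drift_ms(err_b_ppm, err_a_ppm, hour)

print(f"own clocks: {same:.0f} ms, DAW: {daw:.0f} ms, swapped: {swapped:.0f} ms")
```

Playing each file on its own recorder cancels the error entirely, the DAW exposes the plain difference between the two recording clocks, and swapping the cards applies that difference twice, once on record and again on playback, hence falling out of sync twice as fast.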