I realize that if the clocks aren't synced there will be a difference, but with modern digital recorders it should be a consistent linear difference, right? I.e., if one deck records a little faster than the other, chances are it will be consistently faster over the course of an evening?
Let's say I take two sources, pick a particular drum beat near the beginning and another one an hour later, and find they have drifted apart by 2.13 seconds. Then mathematically, I should be able to stretch (resample) one file by a factor of 1 + 2.13/3600 ≈ 1.000592 and apply that to the whole file using accurate software, right? Unless you have a supercomputer, this will probably run very slowly (think overnight). But when it's done, I should have a mixable result, no? I haven't tried it yet, but that's my plan for when I do.
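For what it's worth, here's a minimal sketch of that correction in Python, assuming the 2.13-second drift has already been measured by lining up the two drum hits by hand. The file names and the soundfile/soxr packages are my choices for illustration, not anything from an established workflow:

```python
# Sketch: correct a constant linear clock drift between two recordings,
# assuming the drift has already been measured at two sync points.
# Requires: pip install soundfile soxr numpy
import soundfile as sf
import soxr

# Measured sync error: source B is aligned with source A at the first drum
# hit but lags it by 2.13 s at a hit one hour (3600 s) later.
drift_seconds = 2.13
span_seconds = 3600.0
ratio = 1.0 + drift_seconds / span_seconds   # ~1.000592

data, rate = sf.read("source_b.wav")         # hypothetical file name

# Declare the input to be at rate*ratio, then write the output back out at
# the original rate: the audio is stretched by exactly 1/ratio, shortening
# source B by 2.13 s over the hour so it lines up with source A. (If B ran
# fast instead of slow, divide by ratio rather than multiplying.)
corrected = soxr.resample(data, rate * ratio, rate, quality="VHQ")

sf.write("source_b_corrected.wav", corrected, rate)
```

Which source gets stretched, and in which direction, depends on which deck ran fast, so it's worth checking a third sync point mid-file as a cheap sanity test of the linear-drift assumption before committing to the full resample.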
Now, the real heroes are the people who mix 1970s vault tapes with lineage like MSR > Reel(2) > DAT > WAV against a MAC > DAT > CDR > EAC > WAV, because there you have two fluctuating analog signals plus several other clocks in the chain. That would be nowhere near as consistent as modern digital recordings. You read stories about dan@amdig spending many, many hours doing exactly that.