I've never tried it that way -- just always been curious about it in situations where the speed difference is so extremely small that correcting the pitch isn't a real concern. Pitch is the only concern I can think of -- I certainly don't think any of us could notice if 1 sample is removed. Anything else to worry about?
If two different clocks yield recordings of two different lengths, they will have correspondingly different pitch. Most likely it will be too subtle to hear with the ears, though it could result in some phase shift or a beat frequency. Personally, if two signals are not within 15-20ms, I hear slap-back (delay), but below that it's pretty subtle. Unfortunately, unless the difference is microscopic, there is likely to be some audible phase shift, and that can definitely screw with bass definition and solidity.
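Just to put a rough number on how subtle: here's a quick sketch (my numbers are hypothetical, e.g. one deck fast by 4.29 samples per second against a nominal 48 kHz) of the pitch offset between two recordings made on drifting clocks, expressed in cents:

```python
import math

# Hypothetical example: deck A at a nominal 48,000 Hz, deck B fast by
# 4.29 samples per second. The pitch offset between the two recordings
# is just the ratio of the effective sample rates, converted to cents.
rate_a = 48_000.0          # assumed reference clock
rate_b = 48_000.0 + 4.29   # assumed fast clock

ratio = rate_b / rate_a
cents = 1200 * math.log2(ratio)
print(f"pitch offset: {cents:.3f} cents")  # ~0.155 cents: way below audibility
```

So pitch really isn't the problem at these drift rates; the accumulating time offset (phase) is.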
I think we are addressing a situation where there are 3 clocks involved.
1 - recorder A
2 - recorder B
3 - playback device
If there is any difference between 1 and 2 - it will result in a different pitch when played on 3 - correct?
I wonder: if we were able to use two discrete clocks during playback, would the sources be the same pitch?
Is there any way to determine a given recording's "absolute" sample rate (rather than the generic "44.1")?
I think we can ignore clock #3 for our calculations. Once you set clocks 1 & 2 to remove drift, the only people who would notice the difference to clock 3 are those with micro-fine perfect pitch.
But you bring up an interesting point regarding two clocks. If you transferred your recordings into the computer via analog, then each recorder would play back its content at the proper relative clock speed (assuming temperature and other environmental factors are close enough to the same). This could theoretically eliminate the need to resample by calculation, but the drawback is that BOTH sources get resampled on transfer for this to occur.
If you wish to determine the exact drift for two devices, you can make a test signal. I built mine in audacity at 48kHz. I made 10 seconds of triangle wave at 480 Hz, then generated one hour of silence (172,800,000 samples) and followed that by 10 seconds of triangle wave at 480Hz. Once I had the file, I loaded it into one recorder, and hit play, while recording on the second deck. After the hour was complete, I loaded the new recording into the computer, and lined them up. It was very easy to tell the exact number of samples of drift, but it is not a clean ratio, by any "stretch" haha. My two main decks drift about 4.29 samples per second, which is over 15,000 samples per hour.
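The arithmetic from that test, using the numbers quoted above (4.29 samples/s over one hour at 48 kHz), works out like this; the ratio is the stretch factor you'd use to align deck B to deck A:

```python
# Drift arithmetic from the two-tone test: 4.29 samples/s of drift over
# one hour of 48 kHz audio (numbers taken from the test described above).
nominal_rate = 48_000            # Hz, what both decks claim to run at
span_samples = 172_800_000       # samples between the two test tones (1 hour)
drift_samples = 4.29 * 3600      # ~15,444 samples of offset after an hour

ratio = (span_samples + drift_samples) / span_samples
ppm = (ratio - 1) * 1e6
print(f"stretch ratio: {ratio:.9f}")   # resample factor to align the decks
print(f"clock error:   {ppm:.1f} ppm") # ~89 ppm between the two clocks
```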
My next test should be to run this same signal again at different temperature levels, and see if it's close or how far off it gets with temp change.
Let's assume for a moment that this whole stretching business is just bad engineering and that the only true way to sync sources is by cutting and pasting. In order to cleanly remove drift, you'd have to cut and paste and adjust very small sections. In fact, as the section length decreases, your accuracy increases. If I recall my calculus correctly, this means you may as well just make the section length (epsilon?) as small as possible, which just brings us back to the resample algorithm we're all discussing!!! Did I say that right? In plain English: I disagree with the notion that stretch/squash is less accurate than a chop-job. Exactly the reverse, in my opinion.
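For what it's worth, you can check the limit argument numerically. The sketch below (all numbers illustrative, reusing the ~89 ppm drift figure from earlier) corrects a drifted sine two ways, chopping out one sample every N versus linear-interpolation resampling at the measured ratio, and compares each against an ideal drift-free reference:

```python
import numpy as np

# Numerical sketch of chop vs. stretch; all numbers are illustrative.
# A 440 Hz tone is "recorded" on a deck whose clock runs fast by ~89 ppm
# (so it writes extra samples), then corrected two ways.
rate = 48_000
drift = 4.29 / rate                   # fractional clock error (~89 ppm)
n = int(round(1 / drift))             # chop method: drop 1 sample every n

i = np.arange(rate * 10)              # 10 seconds of samples
sig = np.sin(2 * np.pi * 440 * i / (rate * (1 + drift)))  # fast-deck capture

# (a) chop: delete every n-th sample
chopped = np.delete(sig, np.arange(n - 1, sig.size, n))

# (b) stretch: resample by linear interpolation at the corrected positions
out_len = int(sig.size / (1 + drift))
pos = np.arange(out_len) * (1 + drift)
stretched = np.interp(pos, i, sig)

m = min(chopped.size, stretched.size)
ref = np.sin(2 * np.pi * 440 * np.arange(m) / rate)   # drift-free ideal
err_chop = np.max(np.abs(chopped[:m] - ref))
err_stretch = np.max(np.abs(stretched[:m] - ref))
print(f"max error, chop:    {err_chop:.5f}")
print(f"max error, stretch: {err_stretch:.5f}")
```

The chop version carries up to a full sample of timing error inside each block between deletions, while the stretch version's error is only the interpolation error, orders of magnitude smaller here, which is exactly the epsilon argument.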