What's the best way to ensure the color balance is "synced" between the 3 cameras the next time I go to do another multi-camera shoot?
The short answer is, you can't. DSLRs and prosumer video cameras generally don't expose their internal color matrix so you can manipulate it. The lone exception I can think of is the new Panasonic AJ-PX270. You could match a handful of those to broadcast quality, which is indeed what they're for. You can make them match just about anything, including the Sony and Canon "looks" and of course other Panasonic cameras as well. But the DSLRs you have are going to keep the look that's baked into them; you can only change them in post. Sad but true -- and it's actually a selling point -- the Canon look vs. the Sony look and all that.
The best you can do with your motley collection of cameras is perform a manual white balance. But even then, you are fully at the mercy of the lighting at the venue. If you look at the blue lights across the top back of the stage, they're probably (but who really knows) LEDs. A serious curse to video. Ain't no way in heck you can white balance to something like that. Just be glad they weren't dimming them down, which can cause all kinds of nasty banding artifacts as your 30 fps video beats against LEDs being pulse-width dimmed at something like 17 Hz. I wouldn't wish that kind of banding on anyone.
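If it helps to see why the dimming is the problem, here's a toy calculation (my own illustration, not anything the cameras do): the rolling bands drift at roughly the "beat" between the LED dimming frequency and the nearest multiple of the frame rate. The 17 Hz and 30 fps numbers are just the examples from above.

```python
# Toy illustration of why PWM-dimmed LEDs can band against a camera's frame rate:
# the flicker pattern drifts at the "beat" between the dimming frequency and the
# nearest multiple of the frame rate. Real-world banding also depends on shutter
# and rolling-shutter readout, so treat this as a rough intuition only.

def beat_hz(pwm_hz: float, frame_rate_hz: float) -> float:
    """Difference between the PWM rate and the nearest multiple of the frame rate."""
    nearest = round(pwm_hz / frame_rate_hz) * frame_rate_hz
    return abs(pwm_hz - nearest)

fps = 30.0
for pwm in (17.0, 60.0, 120.0):
    print(f"{pwm:6.1f} Hz dimming vs {fps} fps -> bands drift at ~{beat_hz(pwm, fps):.1f} Hz")
```

Dimming rates that land on an even multiple of the frame rate (60, 120 Hz at 30 fps) come out near zero and mostly stay out of your footage; oddball rates like 17 Hz are the ones that crawl through the frame.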
All that said, your white balance looks pretty good. That's a testament to the skill of the engineers who designed and built the cameras, and to the fact that the venue was using lights with enough spectral variety that the cameras' white balance algorithms had something to work with and made the "correct" decisions.
If anything, I suspect what's bothering you is exposure. The only way around that is manual exposure, and the way to do that reliably is to use a production monitor that can show you a waveform view. Put your skin tones in the range of 40-80 IRE (depending on how dark or light the skin tones actually are, etc.), then match the other cameras so they show the same thing. Which is good until they change the lights, and since many houses bring up the lights after each song... you and your people have to stay alert. If your monitors have zebras, it can make life a little easier for the operators -- tell them to set zebras at 75 IRE and keep exposure so they always see just a little striping on the face (typically on a highlight like a cheekbone or forehead). There's a rough sketch of the idea below.
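For anyone who wants the numbers spelled out, here's a minimal sketch (my own illustration, not a Premiere or camera feature) of what the waveform/zebra check amounts to: map 8-bit luma to approximate IRE and flag anything over the zebra threshold. It assumes video-range (16-235) luma, which may not match every camera.

```python
# Sketch of the exposure check described above: convert 8-bit video-range luma
# to approximate IRE (16 = 0 IRE, 235 = 100 IRE), then flag pixels that would
# show zebra striping at a chosen threshold. The 40-80 IRE skin-tone target and
# the 75 IRE zebra setting are the figures from the post.

import numpy as np

def luma_to_ire(y8: np.ndarray) -> np.ndarray:
    """Approximate IRE from 8-bit video-range luma."""
    return (y8.astype(np.float32) - 16.0) / (235.0 - 16.0) * 100.0

def zebra_mask(y8: np.ndarray, threshold_ire: float = 75.0) -> np.ndarray:
    """Boolean mask of pixels that would show zebra striping at the given IRE."""
    return luma_to_ire(y8) >= threshold_ire

# Fake "face patch" just to exercise the functions.
face = np.random.randint(120, 200, size=(32, 32), dtype=np.uint8)
ire = luma_to_ire(face)
print(f"skin-tone IRE range: {ire.min():.0f}-{ire.max():.0f} (target roughly 40-80), "
      f"zebra pixels: {zebra_mask(face).mean():.0%}")
```

The point of matching cameras this way is that you're matching measured levels, not what the viewfinders happen to look like.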
But heck, from where I'm sitting you've got nothing to complain about. It looks great. You could tweak it in post to make it better, but you don't have a compelling need to. People will forgive all kinds of sins with video, but low quality audio they won't stand for. As anyone on this board well knows.
I used Premiere for these two videos -- is there a way to adjust it there? Rendering HD video on my current computer is *incredibly slow*, so I didn't want to just play around with filters to see what happened. I really need a new computer, but I can't afford the investment right now for the limited video editing I do.
My typical workflow is to use a luma curve and a three-way color corrector on each camera for basic color correction. These don't add a great deal of processing load to Premiere Pro, and they save you from having to use AE for this, which adds a huge amount (adding an AE process to a PPro edit effectively idles all but one of your processor cores -- that was discussed to death on the Adobe forums a couple of years back, so that's all I'm sayin'). If you can keep it all in PPro, all the better.
Anyway, once you've done the basic color correction, run it through the multi-cam editor and pick your angles. Then you may want to lay another three-way color corrector over the final timeline to give it a look that you like (this is technically called "color grading"). The sketch below shows roughly what that corrector is doing under the hood.
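For the curious: a three-way corrector is essentially lift/gamma/gain per color channel, weighted toward shadows, mids, and highlights. Here's a hedged, generic model of that -- it is NOT Premiere Pro's actual implementation, and the example values are made up.

```python
# Generic lift/gamma/gain model of what a three-way color corrector does:
# lift raises the shadows, gain scales the highlights, gamma bends the midtones,
# each per RGB channel. Purely illustrative; Premiere's internals will differ.

import numpy as np

def three_way(rgb: np.ndarray,
              lift=(0.0, 0.0, 0.0),
              gamma=(1.0, 1.0, 1.0),
              gain=(1.0, 1.0, 1.0)) -> np.ndarray:
    """Apply a simple lift/gamma/gain correction to an image normalized to 0-1."""
    out = rgb.astype(np.float32)
    out = out * np.asarray(gain) + np.asarray(lift)             # gain then lift
    out = np.clip(out, 0.0, 1.0) ** (1.0 / np.asarray(gamma))   # gamma on the midtones
    return np.clip(out, 0.0, 1.0)

# Example: nudge one camera warmer in the mids and lift its shadows slightly
# so it sits closer to the other two (values invented for illustration).
frame = np.random.rand(4, 4, 3).astype(np.float32)
matched = three_way(frame, lift=(0.02, 0.01, 0.0), gamma=(1.05, 1.0, 0.98), gain=(1.0, 1.0, 1.02))
print(matched.shape, float(matched.min()), float(matched.max()))
```

The per-camera correction pass is about making the three cameras agree; the grading pass on the final timeline is where you push everything together toward the look you actually want.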
I'm not actually explaining this for you, OP, but for anyone who comes along later and finds this thread. Based on the clips you've shown us, I suspect you already know most of this.
BTW, one way to lower the processing load is to transcode your footage out of AVCHD. That sort of ruins one of the reasons to use PPro in my book, but it does take a good 4+ core machine to run a single stream of AVCHD cleanly, and you'll need at least 8 cores plus clock speed and memory to do a good job with three cameras all running AVCHD. Transcoding might save you some pain in the short term, but long term you're likely going to be better served by power and memory. Something like the example below would do the transcode.
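Here's one hedged way to batch that with ffmpeg (called from Python); ProRes 422 via the prores_ks encoder is a common edit-friendly choice, but DNxHD/DNxHR or Cineform would also work. The folder names and profile are assumptions for illustration, not from the thread.

```python
# Batch-transcode AVCHD (.MTS) clips to an intraframe codec that's much lighter
# on the CPU at edit time than long-GOP AVCHD. Requires ffmpeg on the PATH.
# Paths and profile choice are illustrative assumptions.

import subprocess
from pathlib import Path

SOURCE_DIR = Path("avchd_clips")   # hypothetical folder of camera .MTS files
OUT_DIR = Path("transcoded")
OUT_DIR.mkdir(exist_ok=True)

for clip in sorted(SOURCE_DIR.glob("*.MTS")):
    out = OUT_DIR / (clip.stem + ".mov")
    subprocess.run([
        "ffmpeg", "-i", str(clip),
        "-c:v", "prores_ks", "-profile:v", "2",   # ProRes 422: big files, easy decoding
        "-c:a", "pcm_s16le",                      # uncompressed audio
        str(out),
    ], check=True)
```

The trade-off is exactly what it sounds like: you spend disk space and a one-time transcode pass up front to buy smoother scrubbing and faster renders on a modest machine.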