What I'm still trying to figure out is how to properly resample from 24 bit to 16 bit. I've read a lot about this and still haven't figured out how best to do that. For now I'm using UV22HR dither simply because it's built into Cubase, which I use for editing. I've read all the arguments; some say don't dither, because it adds noise, etc. But just chopping those bits off doesn't sit well with me either.
You are not actually resampling when you make that conversion. Resampling is changing the sampling rate: converting from 48kHz to 44.1kHz, for example. Resampling is more computationally complex than changing bit depth (dithering and truncating), and IMO there is a stronger argument for using a high-quality resampling routine to do so (or better, recording at the target rate to begin with) than there is for the use of noise-shaped dithers like UV22HR or whatever when reducing bit length, especially given the noise floor of the material most live music tapers are recording.
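For what it's worth, a high-quality resampling routine is only a few lines in most tools these days. A minimal sketch using SciPy's polyphase resampler (my own example, not what any particular editor does internally); the 147/160 ratio is exact for 48kHz to 44.1kHz:

```python
import numpy as np
from scipy.signal import resample_poly

# One second of 48 kHz audio (a 1 kHz test tone stands in for a recording).
sr_in, sr_out = 48000, 44100
t = np.arange(sr_in) / sr_in
x = np.sin(2 * np.pi * 1000 * t)

# 44100/48000 reduces to 147/160, so a polyphase filter handles the
# conversion exactly, with no fractional-sample interpolation needed.
y = resample_poly(x, up=147, down=160)
```

The polyphase approach is why integer-ratio conversions like this are cheap and clean; it's truly non-integer ratios (or bad resamplers) that historically caused trouble.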
Here's my take on changing bit length-
tldr- Any time you change from a higher bit depth to a lower one, apply dither as part of the truncation process. Both are done as essentially a single step in your editor.
Dither is essentially very quiet randomized (uncorrelated) noise. Don't let the term "noise" scare you. This is very, very quiet noise, which you'd only hear as hiss if you cranked the volume to ungodly levels that would otherwise destroy your system and eardrums when normal-level program material plays. It only affects the least-significant bit. By applying it you trade one type of noise for another: you eliminate a very unmusical, nasty-sounding, extremely quiet digital artifact noise in exchange for a far less annoying, but also extremely quiet, analog-like hiss. In addition, dither allows super, super quiet fading sounds to still be heard as they sink beneath that analog-like hiss noise floor, rather than suddenly dropping off into digital silence (and creating that nasty digital artifact noise as they do so). But again, you'd only actually hear any of this if you were to crank the volume to ungodly levels.
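You can actually see that "fading below the floor" behavior in a few lines. A toy sketch (my own illustration, not anything out of a real editor): a sine whose peak is only 0.3 of one LSB at the target bit depth truncates to pure digital silence, but with triangular (TPDF) dither it survives as a tone buried in hiss:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1 << 15
# A sine with a peak of only 0.3 LSB at the target bit depth.
# Working in units of LSBs, so quantizing is just rounding.
x = 0.3 * np.sin(2 * np.pi * 440 * np.arange(n) / 48000)

# Truncation alone: everything below half an LSB rounds to zero.
truncated = np.round(x)

# TPDF dither: the sum of two uniform random values, +/-0.5 LSB each.
tpdf = rng.uniform(-0.5, 0.5, n) + rng.uniform(-0.5, 0.5, n)
dithered = np.round(x + tpdf)

# The truncated version is digital silence; the dithered one still
# carries the tone (its correlation with the original is well above zero).
silent = not truncated.any()
corr = np.corrcoef(x, dithered)[0, 1]
```

The dithered output is mostly hiss, but average a few cycles (or just listen at absurd gain) and the 440 Hz tone is still in there, which is exactly the below-the-LSB behavior described above.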
Whenever you reduce the bit length of a file you raise its lowest achievable (in our case theoretical) noise floor. But that lowest achievable noise floor is almost never the actual noise floor of any of our live recordings. Instead they are always dominated by other forms of noise at higher levels. If your recording gain is set to anything reasonable, the self-noise of the microphones will always be higher than the mathematical lower limit of the file. Self-noise of microphones is basically random hiss, which essentially serves as dither in its own right. So even if you chose not to dither when shortening the bit length, your file is still probably dithered with plenty of mic noise at the bottom. So why dither? Because it can't hurt, it's good practice, and it takes no more effort than truncating without dither.
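To put rough numbers on that "mathematical lower limit": the classic rule of thumb puts the quantization floor of an N-bit file at 6.02 x N + 1.76 dB below a full-scale sine, and +/-1 LSB TPDF dither raises it by about 4.8 dB (it triples the total error power). Quick back-of-envelope, with my own function name:

```python
import math

def quant_floor_dbfs(bits):
    # Error floor of an N-bit file relative to a full-scale sine:
    # the standard 6.02*N + 1.76 dB rule of thumb.
    return -(6.02 * bits + 1.76)

floor_16 = quant_floor_dbfs(16)              # roughly -98 dBFS
floor_24 = quant_floor_dbfs(24)              # roughly -146 dBFS
# TPDF dither triples total error power: +10*log10(3), about +4.8 dB.
floor_16_tpdf = floor_16 + 10 * math.log10(3)
```

Even the dithered 16-bit floor sits around -93 dBFS, which is still well below the self-noise of any microphone at sane gain settings.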
And for the vast majority of live-music, the noise floor of the recording is dominated not by mic self-noise but by the acoustic noise floor of the environment in which the recording was made. This noise-floor is likely to be 30dB or more above the lower limit of even a 16 bit file. Dither noise is way, way quieter and will never be heard in these live recordings even if you crank it up. The HVAC noise of the room and people breathing and fidgeting in even super quiet classical venue recordings will dominate. Fugetaboutit with anything amplified through a PA.
I use standard "triangular" dither, which should be an option in any editor. Noise-shaped dithers like UV22HR are fine and won't hurt for what we are doing, but they also probably make no difference; after all, the dither is going to be buried deep beneath other noise. The idea behind noise-shaped dithers is that the dither noise is not spectrally flat like standard dither noise but rather EQ'd so that there is more noise where the ear is less sensitive (low and very high frequencies) and less where the ear is most sensitive. Okay, that's cool, and may arguably be useful with super-duper quiet recordings made under controlled conditions, but again, if it's buried beneath higher-level noise it won't matter.
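Under the hood, "triangular dither plus truncate" amounts to something like the sketch below (function name and details are mine, not any editor's actual code): add TPDF noise scaled to one 16-bit step, then round away the bottom eight bits.

```python
import numpy as np

def dither_and_truncate(x24, rng=None):
    """Reduce 24-bit PCM integers to 16-bit with +/-1 LSB TPDF dither.

    Illustrative sketch only; real editors do the same thing with more
    care around performance and edge cases.
    """
    rng = np.random.default_rng() if rng is None else rng
    x24 = np.asarray(x24, dtype=np.float64)
    step = 1 << 8  # one 16-bit LSB equals 256 24-bit units
    # TPDF: sum of two uniforms, spanning +/- one 16-bit LSB overall.
    tpdf = rng.uniform(-step / 2, step / 2, x24.shape) \
         + rng.uniform(-step / 2, step / 2, x24.shape)
    y = np.round((x24 + tpdf) / step)
    return np.clip(y, -32768, 32767).astype(np.int16)
```

Note that the dither is added at the level of the bits about to be discarded, which is why dithering and truncating are effectively a single step.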
Actually, there is a technical argument against using noise-shaped dither for sources which may be mixed together again: the specially shaped noise from each source adds together and is then no longer the carefully crafted shape anymore; it becomes over-emphasized. Best practice is to do all the mixing at the higher bit length and dither/truncate as the last step. Next best is probably to use standard dither for each source if saving them separately at a lower bit length prior to mixing, then choose noise-shaped dither, if one wanted to, as the final step after they've been mixed if reducing the bit depth still further. But this is all gilding the lily and seems to me a solution without a problem for post-production of live music recordings. Noise-shaped dithers like UV22HR, SBM, etc. arguably have more usefulness in 24-bit-capable ADCs that do the bit reduction prior to recording a 16 bit file. The argument there is basically the same as the one for recording in 24 bits instead of 16: it provides a touch more leeway in setting recording levels safely between noise at the bottom and overs at the top, though not as much as recording at 24 bits to begin with.