does anyone know anything about how you can engineer your own 'spacing' of the various sounds on a recording using pc software?
not sure what i mean?
explanation: when you listen to a professionally produced recording, different sounds appear to come from different locations; e.g. the vocals come from the centre, the guitars are spaced further out, and so on. every different sound appears to come from a slightly different location.
is it possible to do this to a concert recorded live with only two microphones? obviously it wouldn't be as good as something mixed professionally, but since different instruments occupy different frequency bands (with overlap, of course), why shouldn't we be able to separate them into different 'virtual locations'?
it wouldn't be very hard, for example, to isolate the vocals by frequency and shift them to the centre of the sound image. (of course you could do this by splitting the recording into multiple tracks, mixing the left and right vocals into a 'close' track and then overlaying that onto the rest of the music, but it'd make sense to do it with a single dedicated function.)
my theory is that if you could assign different frequency ranges to different spatial locations, not sharply but with some overlap, then you could achieve something closer to the quality of professional recordings.
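to make the idea concrete, here's a rough sketch of what that frequency-to-position mapping could look like in code. this is just an illustration, not a recommendation of any particular tool: it band-splits a mono mix with butterworth filters and pans each band to its own stereo position using a constant-power pan law. the band edges, pan positions, and the `band_pan` function name are all made up for the example.

```python
# sketch: split a mix into frequency bands and pan each band to its
# own stereo position (constant-power panning). band edges and pan
# positions below are arbitrary, chosen just for illustration.
import numpy as np
from scipy.signal import butter, sosfilt

def band_pan(mono, sr, bands, positions):
    """bands: list of (low_hz, high_hz); positions: -1 (hard left) .. +1 (hard right)."""
    left = np.zeros_like(mono)
    right = np.zeros_like(mono)
    for (lo, hi), pos in zip(bands, positions):
        # 4th-order butterworth band-pass for this frequency range
        sos = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
        band = sosfilt(sos, mono)
        # constant-power pan: equal loudness as the source moves across the image
        theta = (pos + 1) * np.pi / 4
        left += np.cos(theta) * band
        right += np.sin(theta) * band
    return np.stack([left, right], axis=-1)

# e.g. bass pushed left, the vocal band centred, highs pushed right
sr = 44100
t = np.linspace(0, 1, sr, endpoint=False)
mix = np.sin(2*np.pi*110*t) + np.sin(2*np.pi*1000*t) + np.sin(2*np.pi*6000*t)
stereo = band_pan(mix, sr,
                  bands=[(60, 250), (250, 4000), (4000, 12000)],
                  positions=[-0.7, 0.0, 0.7])
```

the overlap you mention would come from the filter skirts: a 4th-order band-pass rolls off gradually, so energy near a band edge ends up partly in both neighbouring positions rather than jumping abruptly from one to the other.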
does anyone understand what i'm on about?
does anyone know of consumer (or easy-to-get) software that will do this?
thanks in advance