OK, my terminology was incorrect... they are called proxy files. A buddy of mine who has done work for NBC says he uses this process, and many pros do, because you need crazy-fast multiple processors to do real-time HD editing.
The process would be to trim the video, then create a rendered version in WMV, AVI, or whatever. I believe Vegas Pro allows this on the fly. You do the editing with those rendered-down proxy files; once you're done editing, you replace the proxy files with the original files, then render. This lets the editing go quicker, and you can render at night; when you wake up in the morning the rendering is done...
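For reference, generating a proxy like that on the command line is a one-liner with ffmpeg. The file names, the MPEG-4/AVI choice, and the 960x540 size here are just illustrative:

    # render a small, fast-to-decode proxy of the original clip,
    # copying the audio as-is so sync is preserved
    ffmpeg -i original.avi -vf scale=960:540 -c:v mpeg4 -qscale:v 6 -c:a copy proxy.avi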
This is basically my flow, of sorts, except that all of my editing is done on the Linux command line: Bash + ffmpeg. The only GUI of sorts is using MPlayer to watch the playable videos and monitoring the timer in the xterm to notate the needed edit point(s). I could use Cinelerra, LiVES, Kdenlive, and others, but I don't; I doubt my system meets the minimum specs. And since my process is scripted, I can change the X and Y resolution to anything and it'll get applied to the first-generation camcorder input. Normally I leave it at 720p, because my computers are not fast enough, my hard drives are not big enough, and most folks watching are only going to view the 360p version anyway. In reality, once you factor in the actual lines of resolution of the camcorder, it's not much more than a 720p camcorder anyway. But it's scripted, so I can launch it and go fishing, or sleep.

For edited video, I basically have to do it twice: once to watch it and determine the edits, and a final encode at any resolution. There are different scripts and parameters to make the first generation fast-ish to watch for content, and the final slow-ish for max quality.
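These aren't my actual scripts, but the fast/slow split maps onto ffmpeg options pretty directly. A stripped-down sketch, with hypothetical file names and x264 settings chosen just for illustration:

    #!/bin/bash
    # resolution comes in as arguments so one script handles any X/Y
    X=${1:-1280}; Y=${2:-720}
    # first-generation pass: fast to encode, good enough to watch for content
    ffmpeg -i capture.avi -vf scale=$X:$Y -c:v libx264 -preset ultrafast -crf 28 -c:a copy preview.mkv
    # final pass: slow encode, max quality
    ffmpeg -i capture.avi -vf scale=$X:$Y -c:v libx264 -preset veryslow -crf 18 -c:a copy final.mkv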
I've got my process set up this way so that I can extract the frames as images and do additional processing on them, with various image-manipulation programs and even custom green-screen code (a modified ppmchange). Generally I run the frames through batch-lab-colorboost for GIMP. There's just something about that which enhances the perceived resolution/sharpness of the images, which kind of makes your $300 camcorder look like a $1K+ camcorder. But it's slow as molasses, so I don't do that on everything, or even most things. Only one of my current YouTube videos has had that process done to it.
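The frame-extraction step itself is cheap; it's the per-frame processing that eats the time. Something along these lines, where the filter command is just a stand-in for ppmchange, batch-lab-colorboost, or whatever you're running:

    # dump every frame of the clip as a numbered PPM image
    mkdir -p frames processed
    ffmpeg -i clip.avi frames/%06d.ppm
    # run each frame through a per-frame filter (placeholder command)
    for f in frames/*.ppm; do
        some-image-filter "$f" > "processed/$(basename "$f")"
    done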
Mexican Eagle eats a Rabbit:
http://www.youtube.com/watch?v=ocuf-8peRqU (batch-lab-colorboost variant, changed for PPM files)
In the end it's ffmpeg that decodes and encodes the result, except for my DVD variant, which uses mjpegtools with mpeg2enc + mplex. Something to do with dvdauthor made me go that route, namely single-image menu items and Ogle quirks. But since I'm generally treating each frame of video as a picture/image, I can literally do anything to the images: use them as a skin on a POV-Ray scene; overlay, fade, blend, whatever an image-manipulation program can do. If you've got the time and HDD space, and the coding skills to add anything ultra-custom-ish. In theory it also lets me use my image without using my image: the alpha mask comes from the green screen, but my actual image can be mangled with GIMP's alien-map or predator plugins, or just the photocopy plugin to make it more cartoonish, without having to know how to draw. It's a bit too hands-on to be a viable editor or editing method, but it has its uses.
http://www.youtube.com/watch?v=ZDLNEZ_dOiU (POV-Ray scene with video frames as a skin)
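Going the other way, frames back into video with the original audio re-attached, is the same idea in reverse. A sketch, assuming 29.97 fps NTSC footage:

    # stitch the processed frames back into a video stream and
    # mux in the audio track from the original clip
    ffmpeg -framerate 30000/1001 -i processed/%06d.ppm -i clip.avi \
        -map 0:v -map 1:a -c:v libx264 -crf 18 -c:a copy output.mkv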
I pretty much always use external mics and a field recorder though. I'm more of an audio guy with the necessary evil known as video to supplement that. Although I like that I get to combine all of my skill sets into one medium. Even though writing code is a bit of a lost art form given that you can buy most of that functionality off the shelf in a lot of cases.
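Marrying the field-recorder track to the video is another ffmpeg one-liner, assuming the WAV has already been trimmed to sync with the picture:

    # drop the camcorder audio and mux in the field-recorder track instead
    ffmpeg -i video.avi -i fieldrecorder.wav -map 0:v -map 1:a \
        -c:v copy -c:a copy -shortest muxed.mkv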