The best way I can explain what is going on is this.
Normalization takes the highest peak of the audio and brings it up to about 1 dB below absolute zero (0 dBFS). So if we have an audio track whose peak is 10 dB below zero, the program will scan it, measure the overall level, and apply +9 dB to the overall mix to get it up to -1 dB. The RMS part is only used to scan the audio track and determine the max and min levels, so it knows 100% for sure how much gain it can add to the track before distortion.
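In case it helps, here is a minimal sketch of that scan-then-apply-uniform-gain idea in Python. This is not the algorithm from any particular program, just the basic arithmetic: find the peak, compute the dB gap to the -1 dBFS target, and multiply every sample by one fixed gain factor.

```python
import math

def peak_db(samples):
    # Peak level in dBFS, assuming floating-point samples in [-1.0, 1.0].
    peak = max(abs(s) for s in samples)
    return 20 * math.log10(peak)

def normalize(samples, target_db=-1.0):
    # Scan for the peak, then apply ONE uniform gain so the peak
    # lands exactly at target_db. No sample can clip, because the
    # gain was computed from the loudest sample in the whole file.
    gain_db = target_db - peak_db(samples)
    gain = 10 ** (gain_db / 20)
    return [s * gain for s in samples]

# Hypothetical track whose peak sits at about -10 dBFS:
track = [0.0, 0.1, -0.316227766, 0.2]
normalized = normalize(track)            # peak is now at -1 dBFS
```

The key point is that the gain is a single constant for the whole file; the scan pass only exists to figure out what that constant can safely be.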
The problem most people get into is when they try to get +30 dB of gain out of a track via normalization. That is a job for gain: gain can be added first, say +15 dB depending on the noise floor of the recording, and then normalization can be applied to the rest of the recording.
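Here is a quick back-of-the-envelope sketch of why the noise floor is the limit (the level numbers are made up for illustration): a uniform gain boost moves the signal and the noise floor by exactly the same amount, so the signal-to-noise ratio never improves, and a huge boost just makes the hiss loud.

```python
def snr_db(signal_db, noise_db):
    # Signal-to-noise ratio in dB: gap between signal level and noise floor.
    return signal_db - noise_db

signal_db, noise_db = -40.0, -70.0   # hypothetical quiet recording
gain_db = 30.0                       # the "+30 dB via normalization" case

boosted_signal = signal_db + gain_db   # -10 dBFS
boosted_noise = noise_db + gain_db     # -40 dBFS: the hiss is now very audible

# The SNR is identical before and after: uniform gain cannot fix it.
assert snr_db(boosted_signal, boosted_noise) == snr_db(signal_db, noise_db)
```

That is why a big rescue job is better done in stages, watching what the noise floor is doing, rather than by one huge normalize pass.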
The only problem with normalizing is overuse, and not all programs do it the same way or with the same quality. I do not know what algorithms are used per se, but I know there is a huge difference between, say, Nuendo and early versions of Sound Forge. The better the program's software, the better the normalization.
Real mastering studios do not use normalization very often on the tracks they work on. They use gain via really nice recording consoles and very good quality compression to pump things up, and we are not talking plugins here. They also use EQ and play around with the Fletcher-Munson curve until the audio screams. Well, hopefully screams. Sometimes even mastering engineers go too far, squashing the shit out of audio for the sake of a few more dB and maybe being better noticed on the radio. I think a less-is-more approach is the best way to go.
Hey Chris.
I'm still confused! Does the RMS method apply a uniform gain over the whole sample or does it vary it over time? OK, if it varies the gain, do you know what algorithm is used?
Thanks,
Richard