Taperssection.com

Gear / Technical Help => Post-Processing, Computer / Streaming / Internet Devices & Related Activity => Topic started by: lsd2525 on February 04, 2017, 11:39:46 AM

Title: Normalization question - best practices
Post by: lsd2525 on February 04, 2017, 11:39:46 AM
OK. Interested in how people deal with this. While taping, the first five seconds were way too high before I dialed back to a good level. Now I need to normalize. It's going to read 100% because of the hot levels during the first five seconds. Is there a way to just normalize the rest of the show without splitting the first five seconds into a separate wav? Back in the day that's what I would do and then put it back together with addawav (which I don't have anymore:)

So, what's your take?
Title: Re: Normalization question - best practices
Post by: lsd2525 on February 04, 2017, 12:36:45 PM
On an unrelated note, if anyone can suggest a "drunken jackass" filter for the guy right next to me who yelled "Wowie Zowie" into my mic all night, even after Dweezil told him they would be playing the setlist, that would be gravy
Title: Re: Normalization question - best practices
Post by: Fatah Ruark (aka MIKE B) on February 04, 2017, 12:40:38 PM
You could use the Volume Envelope to reduce the first 5 seconds and then normalize it.

I think most audio editing software can do that. I haven't had to do it in a while luckily.
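
For anyone who wants to do the same move outside a GUI editor, here is a minimal sketch of the "turn the hot intro down, then normalize" idea in Python with numpy and soundfile; the tools, file names, the 5-second length, and the -12 dB trim are my placeholders, not anything from the thread.

```python
import numpy as np
import soundfile as sf

data, rate = sf.read("show.wav")            # float array, shape (samples, channels)

hot = int(5 * rate)                         # the overly hot opening
data[:hot] *= 10 ** (-12.0 / 20)            # straight volume reduction, no compression

peak = np.max(np.abs(data))                 # now normalize everything together
data *= 10 ** (-1.0 / 20) / peak            # bring the peak up to -1 dBFS

sf.write("show_fixed.wav", data, rate, subtype="PCM_24")
```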
Title: Re: Normalization question - best practices
Post by: badronald on February 04, 2017, 01:35:36 PM
On an unrelated note, if anyone can suggest a "drunken jackass" filter for the guy right next to me who yelled "Wowie Zowie" into my mic all night, even after Dweezil told him they would be playing the setlist, that would be gravy

I'd duck in a 'PUNCH! THUD!' sound effect
Title: Re: Normalization question - best practices
Post by: Gordon on February 04, 2017, 02:00:31 PM
You could do what Mike suggested, or, in Wavelab for example, you can put the cursor where you want to start, right-click and select till end of file, and raise gain/normalize as needed.
Title: Re: Normalization question - best practices
Post by: nulldogmas on February 04, 2017, 03:45:17 PM
You could use the Volume Envelope to reduce the first 5 seconds and then normalize it.


Yep, that.
Title: Re: Normalization question - best practices
Post by: morst on February 04, 2017, 04:52:56 PM
You could use the Volume Envelope to reduce the first 5 seconds and then normalize it.

I think most audio editing software can do that. I haven't had to do it in a while luckily.

you didn't go past zero did you?

I like where these guys are going, but I'll play devil's advocate and say to raise the levels of everything AFTER the blowout as high as you can before you normalize.

On an unrelated note, if anyone can suggest a "drunken jackass" filter for the guy right next to me who yelled "Wowie Zowie" into my mic all night, even after Dweezil told him they would be playing the setlist, that would be gravy
If you have access to enough valium to hand them out, those and a couple rounds of beers usually quiet folks down.
Title: Re: Normalization question - best practices
Post by: ilduclo on February 04, 2017, 06:38:44 PM
Sometimes 5 seconds is a decent amount to fade in. I'd do the envelope thing first, from maybe 40% up to 100%, then fade in.  Upload that segment, maybe a minute, as a wav file and we can see more easily what might help.
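
A rough sketch of that combination (a ramp from roughly 40% up to full gain across the hot section, plus a short fade-in on top), again assuming numpy and soundfile, with the lengths as guesses to tune by ear:

```python
import numpy as np
import soundfile as sf

data, rate = sf.read("show.wav")

n_env = int(5 * rate)                           # the hot opening
env = np.linspace(0.4, 1.0, n_env)              # ~40% gain ramping up to 100%
data[:n_env] *= env[:, None] if data.ndim == 2 else env

n_fade = int(2 * rate)                          # short fade-in over the very start
fade = np.linspace(0.0, 1.0, n_fade)
data[:n_fade] *= fade[:, None] if data.ndim == 2 else fade

sf.write("show_faded.wav", data, rate, subtype="PCM_24")
```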
Title: Re: Normalization question - best practices
Post by: willndmb on February 05, 2017, 03:04:58 PM
I would cut the five seconds unless it was actually relevant. If so I would just up the rest to match the five secs level and go from there
Title: Re: Normalization question - best practices
Post by: nulldogmas on February 05, 2017, 07:04:41 PM
I would cut the five seconds unless it was actually relevant. If so I would just up the rest to match the five secs level and go from there

If the first five seconds was hot enough to be clipping, though, then raising the levels of the rest will cause it to clip, too.

There is zero reason not to reduce the levels of the first five seconds, unless you're really worried about losing a few bits of information from a brief sample that is, let's not forget, already recorded too hot. If it really bothers you, increase the bit depth of the whole file first, then reduce the levels of the first five seconds. But I seriously doubt you'll be able to tell the difference.
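
If you do want that extra safety margin, the cheap way is to work in floating point: read the 16-bit file as 32-bit float, do the gain moves there, and only convert back (to 24-bit, say) at the end. A sketch, with the file names and trim amount as placeholders:

```python
import soundfile as sf

data, rate = sf.read("show_16bit.wav", dtype="float32")    # promote to float first
data[: int(5 * rate)] *= 10 ** (-12.0 / 20)                # trim the hot intro in float

sf.write("show_master.wav", data, rate, subtype="PCM_24")  # export a 24-bit master
```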
Title: Re: Normalization question - best practices
Post by: justink on February 05, 2017, 07:07:01 PM
I'd select the entire wav, compress the entire thing down (really you're just compressing the first five seconds) -6 dB, and go from there.

Hard to decide the best path without seeing what you're working with.
Title: Re: Normalization question - best practices
Post by: nulldogmas on February 05, 2017, 08:33:40 PM
I'd select the entire wav, compress the entire thing down (really you're just compressing the first five seconds) -6 dB, and go from there.


Compression is what you *don't* want to start with here. Especially since that first five seconds is already going to be somewhat compressed if it's clipping.

Think about it this way: If you could go back in time with the knowledge that the first five seconds was recorded too loud, what would you do? Lower the gain on the first five seconds, right? The best way to simulate that is to reduce the volume on that section — no compression, just a straight volume reduction. It can't undo any clipping that occurred, but it's the best you're going to get.

Once you get there, work on equalization, compression, anything else you like. But I'd strongly recommend against doing any of that until you have a file with even-sounding levels across its whole duration.
Title: Re: Normalization question - best practices
Post by: voltronic on February 05, 2017, 09:25:57 PM
^ I agree with the advice from nulldogmas.  Getting the rest of the concert even-sounding as he said is your priority.

To that end: your "Wowie Zowie" problem may not be "unrelated".  If that guy is much louder than the music in your recording, it will prevent you from raising the level of the overall concert, and whatever amount you do raise it will make his outbursts all the more annoying.

The first step I do in any post work is to go through the entire concert and apply limiting to any places of loud audience noises, applause near mics, etc. so that the non-musical things are knocked down to the level of the music (or at least close to that).  I usually do this in Audacity using the Hard Limiter with the dB limit set by ear, and with the Residue Level set to 0.7 to soften the limiter.  I am very careful to do this such that the music itself is not being affected.  I usually need to experiment with the dB limit to get to this point without introducing clipping.
http://ttmanual.audacityteam.org/man/Hard_Limiter
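
For the curious, here is my rough reading of what that plugin does (an illustration, not the plugin's actual source): it clips anything above the dB limit and then blends some of the removed overshoot back in via the Residue Level, which is what softens it. In numpy terms:

```python
import numpy as np

def hard_limit(x, limit_db=-6.0, wet=1.0, residue=0.7):
    """Clip peaks above limit_db; mix `residue` of the clipped-off part back in."""
    ceiling = 10 ** (limit_db / 20)
    clipped = np.clip(x, -ceiling, ceiling)
    return wet * clipped + residue * (x - clipped)

# e.g. knock down an applause burst between t0 and t1 seconds:
# seg = slice(int(t0 * rate), int(t1 * rate))
# data[seg] = hard_limit(data[seg], limit_db=-9.0)
```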

Once the non-musical noises are knocked down, my next step is to select the entire recording, then use the Amplify effect to raise the max level to near 0 dBFS.  (I could use Normalize for this, which is what I'm really doing, but I like to keep track of exactly how much I'm adjusting the level, and the Normalize effect doesn't allow you to see that.)

In your case, the complication is that first 5 seconds, which will prevent the second step from happening.  If there's no music or anything significant there, I would definitely delete it.  If you need to keep it, I would do everything I've said above but do the gain raise with the entire concert selected except for those 5 seconds.  Then you could work on those 5 seconds separately, possibly doing a simple gain reduction to match the rest of the concert.
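
Put together, the Amplify-style step with an explicit gain readout, applied to everything after the hot opening, might look like this numpy/soundfile sketch (the -0.5 dBFS target and the 5-second offset are placeholders):

```python
import numpy as np
import soundfile as sf

data, rate = sf.read("show.wav")
start = int(5 * rate)                                  # leave the hot intro alone for now

peak = np.max(np.abs(data[start:]))
gain_db = 20 * np.log10(10 ** (-0.5 / 20) / peak)      # raise that peak to -0.5 dBFS
print(f"Applying {gain_db:+.2f} dB to everything after 0:05")

data[start:] *= 10 ** (gain_db / 20)
sf.write("show_raised.wav", data, rate, subtype="PCM_24")
```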
Title: Re: Normalization question - best practices
Post by: morst on February 05, 2017, 11:56:22 PM
Why not use the channel gain feature in Audacity? The slider on the left above the pan control, when the track is large enough to see it. I do use the amplify effect as a tool to tell me how much it suggests, but then I cancel it, and then I set the channel gain to a bit less than that amount. If you're going to be exporting anyhow, this takes less processing.

Once the non-musical noises are knocked down, my next step is to select the entire recording, then use the Amplify effect to raise the max level to near 0 dBFS.  (I could use Normalize for this, which is what I'm really doing, but I like to keep track of exactly how much I'm adjusting the level, and the Normalize effect doesn't allow you to see that.)
Title: Re: Normalization question - best practices
Post by: lsd2525 on February 06, 2017, 10:34:44 AM
Thanks for the advice. I might try to normalize the section after the clip - which is basically the entire show after the first 5 seconds. Been so long since I've tried to do anything like this that I didn't know it was an option. I thought normalizing was an "all or nothing" proposition on a file.

The "fade in" was another great suggestion. Actually, that might be the perfect solution. If I can fade it in to the portion where I reduced the levels, then I should be able to normalize the entire wav.

Volt, the Wowie Zowie guy didn't bump the levels at all. Just incredibly annoying. It's just between songs. If I knew what I was doing, I would copy and paste the left channel over the right every time he yells just to minimize the jackassery lol.

One other question: The right channel is a couple dB less than the left. There is a way in Audacity to normalize each channel independently of the other... right?

Thanks again guys

 
Title: Re: Normalization question - best practices
Post by: Sloan Simpson on February 06, 2017, 02:09:53 PM
If I have annoying audience stuff between songs I just cut that couple seconds completely (unless it's to sync w video). Usually you can make it sound seamless.
Title: Re: Normalization question - best practices
Post by: hoppedup on February 06, 2017, 02:14:18 PM
Thanks for the advice. I might try to normalize the section after the clip - which is basically the entire show after the first 5 seconds. Been so long since I've tried to do anything like this that I didn't know it was an option. I thought normalizing was an "all or nothing" proposition on a file.

That's what I'd do

One other question: The right channel is a couple dB less than the left. There is a way in Audacity to normalize each channel independently of the other... right?

Thanks again guys

Yup. In the drop down menu next to your file name there is an option to "SPLIT STEREO TRACK"

Depending on the version you are using, there may be a checkbox for "Normalize stereo channels independently" when you use the normalize feature.
Title: Re: Normalization question - best practices
Post by: noahbickart on February 06, 2017, 03:04:18 PM
I'd select the entire wav, compress the entire thing down (really you're just compressing the first five seconds) -6 dB, and go from there.


Compression is what you *don't* want to start with here. Especially since that first five seconds is already going to be somewhat compressed if it's clipping.

Think about it this way: If you could go back in time with the knowledge that the first five seconds was recorded too loud, what would you do? Lower the gain on the first five seconds, right? The best way to simulate that is to reduce the volume on that section — no compression, just a straight volume reduction. It can't undo any clipping that occurred, but it's the best you're going to get.

Once you get there, work on equalization, compression, anything else you like. But I'd strongly recommend against doing any of that until you have a file with even-sounding levels across its whole duration.

I don't get it, if you set the threshold above the music you don't want to affect, you have done no harm. You've only compressed (and "turned down") the first five seconds.

Use metering to see where the highest peak is after your volume change and set that as the threshold. Use a relatively high ratio (8?) and no make-up gain. Then normalize as usual.
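
A bare-bones illustration of that idea (threshold just above the music's loudest peak, 8:1 ratio, no make-up gain). This is a static, sample-by-sample gain computer with no attack/release smoothing, so it only shows the shape of the processing, not a usable compressor:

```python
import numpy as np

def compress(x, threshold_db=-6.0, ratio=8.0):
    level_db = 20 * np.log10(np.maximum(np.abs(x), 1e-12))
    over_db = np.maximum(level_db - threshold_db, 0.0)   # how far above the threshold
    gain_db = -over_db * (1.0 - 1.0 / ratio)             # 8:1 squeeze, no make-up gain
    return x * 10 ** (gain_db / 20)
```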

Title: Re: Normalization question - best practices
Post by: voltronic on February 06, 2017, 07:08:45 PM
Why not use the channel gain feature in Audacity? The slider on the left above the pan control, when the track is large enough to see it. I do use the amplify effect as a tool to tell me how much it suggests, but then I cancel it, and then I set the channel gain to a bit less than that amount. If you're going to be exporting anyhow, this takes less processing.

Once the non-musical noises are knocked down, my next step is to select the entire recording, then use the Amplify effect to raise the max level to near 0 dBFS.  (I could use Normalize for this, which is what I'm really doing, but I like to keep track of exactly how much I'm adjusting the level, and the Normalize effect doesn't allow you to see that.)

Honestly I never thought about doing it that way, which is strange because it's similar to the process I would use when doing this stuff in iZotope RX.  I'll try that way next time - thanks!
Title: Re: Normalization question - best practices
Post by: voltronic on February 06, 2017, 07:11:21 PM
If I have annoying audience stuff between songs I just cut that couple seconds completely (unless it's to sync w video). Usually you can make it sound seamless.

For my purposes, I'm usually dealing with loud applause near the mics immediately after the music, and that is the stuff I tend to apply compression or limiting to.  Other things get cut as you suggest.
Title: Re: Normalization question - best practices
Post by: nulldogmas on February 06, 2017, 08:17:23 PM

I don't get it, if you set the threshold above the music you don't want to affect, you have done no harm. You've only compressed (and "turned down") the first five seconds.


Compressing is not the same as turning down the volume. Compression changes the volume ratio of the loudest bits to the less loud bits. What you want is to turn down *everything* in those first five seconds, which is going to take volume reduction (different audio editors will call it different things, but none should call it compression).
Title: Re: Normalization question - best practices
Post by: lsd2525 on February 06, 2017, 08:42:00 PM
Yeah, those first 5 seconds were everything. I hit record, the music started, immediately hit the red, turned down quickly, and the rest of the show clocked in under -6 dB. If I can get the first 5 seconds under -6, then I can normalize the whole shebang. If I try to do it as is, it's going to say it's already at 100% because of the first 5 seconds. I don't really want to chop it because it is part of the music. I'm thinking the fade-in is the way to go. I'm currently uploading to Google Drive. If anyone wants access, PM me your email and I'll send an invite. I'm sure I could make it public but not sure how. If you like Zappa, check this out. On my playback system, this might be the best 007 I've pulled. Would love to hear <constructive> comments.

One other thing - the recording exceeded 2 hours so it split into two files. I'd like to normalize both at the same rate, and the split was in the middle of a song anyway. Can I append the 2nd wav to the first before I do the track splits?

Sorry for the sad ass questions. I haven't tried to do much post processing since the mini-disk days. To paraphrase Bob Dylan, things have changed.....
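
On the append question: if both halves share the same sample rate and channel count, joining them is just a concatenation. A minimal sketch with numpy and soundfile (this loads both files into memory; names are placeholders):

```python
import numpy as np
import soundfile as sf

a, rate_a = sf.read("set_part1.wav")
b, rate_b = sf.read("set_part2.wav")
assert rate_a == rate_b and a.shape[1:] == b.shape[1:], "files must match"

sf.write("set_full.wav", np.concatenate([a, b]), rate_a, subtype="PCM_24")
```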
Title: Re: Normalization question - best practices
Post by: noahbickart on February 06, 2017, 08:57:36 PM

I don't get it, if you set the threshold above the music you don't want to affect, you have done no harm. You've only compressed (and "turned down") the first five seconds.


Compressing is not the same as turning down the volume. Compression changes the volume ratio of the loudest bits to the less loud bits. What you want is to turn down *everything* in those first five seconds, which is going to take volume reduction (different audio editors will call it different things, but none should call it compression).

It brings the volume of everything above the threshold down. Which is the point.
Title: Re: Normalization question - best practices
Post by: nulldogmas on February 06, 2017, 11:17:34 PM

Compressing is not the same as turning down the volume. Compression changes the volume ratio of the loudest bits to the less loud bits. What you want is to turn down *everything* in those first five seconds, which is going to take volume reduction (different audio editors will call it different things, but none should call it compression).

It brings the volume of everything above the threshold down. Which is the point.

It doesn't bring the volume of everything down equally, though.
Title: Re: Normalization question - best practices
Post by: lsd2525 on February 07, 2017, 08:40:31 AM
https://drive.google.com/file/d/0B0kY9jfraoqxUDRuRk5zazNUMms/view?usp=sharing
https://drive.google.com/file/d/0B0kY9jfraoqxcmxoU0hnZFhIdlk/view?usp=sharing
Title: Re: Normalization question - best practices
Post by: TheMetalist on February 07, 2017, 11:43:28 AM
Nice recording, mate!

I made an attempt to help you. The first five seconds were very loud. Around 20 seconds in, it seems like you lowered the volume a bit more again; I tried to even out that part as well. If you think it's okay I'll send you the full lossless edit.

No normalization or compressor was used. Just a graphic fader.

Here's a lo fi sample of the first minutes:
https://we.tl/irP0xRQztL

/C
Title: Re: Normalization question - best practices
Post by: morst on February 07, 2017, 02:16:03 PM
One other question: The right channel is a couple dB less than the left. There is a way in Audacity to normalize each channel independently of the other... right?


Yup. In the drop down menu next to your file name there is an option to "SPLIT STEREO TRACK"

Depending on the version you are using, there may be a checkbox for "Normalize stereo channels independently" when you use the normalize feature.

AAH, but once you have split the stereo tracks, you can use their INDIVIDUAL channel gains (as long as the tracks take up enough vertical screen space that the control is visible!) to adjust them. Remember that the Normalize plug-in takes time to run and renders a new full-length file (it does it in little pieces, but the space used is the same), which takes up hard drive space in your Audacity project folder. Channel gain does not take the time or the disk space.

Quote
Honestly I never thought about doing it that way, which is strange because it's similar to the process I would use when doing this stuff in iZotope RX.  I'll try that way next time - thanks!

I saw your workflow and figured that you would want to know about this.

Why not use the channel gain feature in Audacity? The slider on the left above the pan control, when the track is large enough to see it. I do use the amplify effect as a tool to tell me how much it suggests, but then I cancel it, and then I set the channel gain to a bit less than that amount. If you're going to be exporting anyhow, this takes less processing.

Once the non-musical noises are knocked down, my next step is to select the entire recording, then use the Amplify effect to raise the max level to near 0 dBFS.  (I could use Normalize for this, which is what I'm really doing, but I like to keep track of exactly how much I'm adjusting the level, and the Normalize effect doesn't allow you to see that.)
Title: Re: Normalization question - best practices
Post by: Gutbucket on February 07, 2017, 05:25:10 PM
A fade is just a specific type of volume adjustment- one that changes over time.  First pull down the level of the overly loud portions at the start to match the rest of the file.  Then do whatever else you feel needs to be done- fades, normalization, EQ, compression, tracking, and whatever.

Keep in mind that if the peak levels of the left and right channels are different, normalizing them independently will affect the Left/Right stereo balance.  If doing it that way, at least give it a listen afterward to be sure the stereo balance is alright.  Instead, I recommend ignoring any imbalance in the numeric RMS or peak levels and just adjusting stereo balance by ear to whatever sounds appropriate, then normalizing the file in the usual way (as a channel-linked stereo file), which will raise peak levels to whatever you specify while retaining stereo balance.
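
The distinction in code form, assuming numpy and a stereo array shaped (samples, 2): linked normalization scales both channels by one factor and keeps the balance, while independent normalization scales each channel by its own peak and can shift it.

```python
import numpy as np

def normalize_linked(stereo, target_db=-1.0):
    """One gain for both channels; stereo balance is preserved."""
    return stereo * (10 ** (target_db / 20) / np.max(np.abs(stereo)))

def normalize_independent(stereo, target_db=-1.0):
    """Each channel scaled by its own peak; balance can shift audibly."""
    peaks = np.max(np.abs(stereo), axis=0)
    return stereo * (10 ** (target_db / 20) / peaks)
```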
Title: Re: Normalization question - best practices
Post by: nulldogmas on February 07, 2017, 07:44:14 PM
No normalization or compressor was used. Just a graphic fader.

Here's a lo fi sample of the first minutes:
https://we.tl/irP0xRQztL


Nicely done. And yes, nice recording!
Title: Re: Normalization question - best practices
Post by: morst on February 08, 2017, 12:20:27 AM
The more I read this site, the more I think that I want to try to carefully take Gutbucket's advice. This guy has a great perspective.

One thing I can add on the technical side right now, is that normalization affects the level according to peak values, but our ears determine channel balance via something more like AVERAGE level.

Because the average can be computed in a few different ways, GUTBUCKET has a great plan when he suggests to USE YOUR EARS to get it just right.
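
If you want numbers to sanity-check what your ears tell you, a quick per-channel readout of peak versus RMS shows why the two can disagree (numpy sketch, stereo array shaped (samples, 2)):

```python
import numpy as np

def channel_stats(stereo):
    peak_db = 20 * np.log10(np.max(np.abs(stereo), axis=0))
    rms_db = 20 * np.log10(np.sqrt(np.mean(stereo ** 2, axis=0)))
    for name, pk, rms in zip(("L", "R"), peak_db, rms_db):
        print(f"{name}: peak {pk:+.2f} dBFS, RMS {rms:+.2f} dBFS")
```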

A fade is just a specific type of volume adjustment- one that changes over time.  First pull down the level of the overly loud portions at the start to match the rest of the file.  Then do whatever else you feel needs to be done- fades, normalization, EQ, compression, tracking, and whatever.

Keep in mind that if the peak levels of the left and right channels are different, normalizing them independently will affect the Left/Right stereo balance.  If doing it that way, at least give it a listen afterward to be sure the stereo balance is alright.  Instead, I recommend ignoring any imbalance in the numeric RMS or peak levels and just adjusting stereo balance by ear to whatever sounds appropriate, then normalizing the file in the usual way (as a channel-linked stereo file), which will raise peak levels to whatever you specify while retaining stereo balance.
*(Bold/Italics mine)

*deaf-guy edit:
I have hearing loss that's not the same in both ears, so I flip my headphones around when I'm making decisions like this.

here is my process:

step 1: Check balance using headphones in the normal orientation, with left cup on left ear, and right cup on right, and adjust playback levels to best "center" the stereo image
step 2: REVERSE HEADPHONES - Left on right ear, Right on left ear
step 3: Is the stereo image close to the center? Or is it shifted to one side as a result of my hearing loss?
step 4: Adjust playback levels and make a mental note.
step 5: Reverse headphones back around to correct orientation.
step 6: Is the stereo image close to the center, or is it shifted as a result of over-correction for my own personal hearing loss?
step 7: adjust balance to "split the difference"
step 8: go back to step 2 and repeat steps 2-7 until you are satisfied that the only channel imbalance that you can hear is a result of your own personal hearing loss.

note that if your headphones are not symmetrical, this whole thing is not gonna work. Evidently you'll have to check your headphones before you start, with a mono signal, doing the same reverse-maneuver to be sure that your headphones are up to the task.

Damn science!!?!

 >:D
Title: Re: Normalization question - best practices
Post by: Gutbucket on February 08, 2017, 08:37:55 AM
Thanks for the kind words, morst.

Your flip-the-headphones-around trick is a good one!  I use that method frequently to check the material I'm working on and also to check myself.
Title: Re: Normalization question - best practices
Post by: Pittylabelle on February 24, 2017, 08:01:44 PM
The first step I do in any post work is to go through the entire concert and apply limiting to any places of loud audience noises, applause near mics, etc. so that the non-musical things are knocked down to the level of the music (or at least close to that).  I usually do this in Audacity using the Hard Limiter with the dB limit set by ear, and with the Residue Level set to 0.7 to soften the limiter.  I am very careful to do this such that the music itself is not being affected.  I usually need to experiment with the dB limit to get to this point without introducing clipping.

What is the equivalent method in "SoundForge"?
Title: Re: Normalization question - best practices
Post by: morst on February 25, 2017, 04:19:05 PM
What is the equivalent method in "SoundForge"?

You could use a plug-in hard limiter like the one found under Audio Units called AUPeakLimiter.  I use the one called Mastering Limiter that I got with the iZotope bundle; not sure if that comes with SoundForge for everyone or I just got a cool bundle.

They all have different controls but essentially the same function. Anything over the threshold gets smashed down so it doesn't go over 0.0 dBFS; the variables are often pre-gain and attack and release times. Short times will be best for fast peaks.
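
None of the following is SoundForge's or iZotope's actual algorithm, but a toy peak limiter shows how those controls interact: pre-gain pushes the signal into the ceiling, and attack/release set how fast the gain reduction grabs and lets go. Sample-by-sample Python, so slow, and purely for illustration:

```python
import numpy as np

def limit(x, rate, ceiling_db=-0.3, pre_gain_db=0.0, attack_ms=1.0, release_ms=50.0):
    x = x * 10 ** (pre_gain_db / 20)
    ceiling = 10 ** (ceiling_db / 20)
    a = np.exp(-1.0 / (rate * attack_ms / 1000.0))         # fast coefficient (gain dropping)
    r = np.exp(-1.0 / (rate * release_ms / 1000.0))        # slow coefficient (gain recovering)
    out = np.empty_like(x)
    gain = 1.0
    for i, frame in enumerate(x):
        peak = np.max(np.abs(frame))
        want = 1.0 if peak <= ceiling else ceiling / peak  # gain needed to stay under the ceiling
        coef = a if want < gain else r
        gain = coef * gain + (1.0 - coef) * want
        out[i] = frame * gain
    return out
```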
Title: Re: Normalization question - best practices
Post by: Gutbucket on February 27, 2017, 10:29:45 AM
Overly short attack times can make the sound dull and lifeless.  Play with increasing the attack time somewhat to find a more appropriate balance between clarity, brilliance, and openness on the one hand, and sufficient management of the initial peak transient dynamics on the other.  The same goes for the other parameters such as threshold, ratio, and release time.

In general, "hard limiting" is going to be introduce more sonic artifacts and has a greater potential to sound less good than "soft limiting".  Hard limiting and extremely short attack times is more aggressive, but do you really need to try and extract every last fraction of loudness at the expense of a natural and open sounding recording with more appropriately managed dymamics?   We're not producing commercial recordings which that fight for attention by being louder than everything else, while the music suffers for it.  For our stuff, headroom of a few dB at the top with nothing in it is not a problem.  The user can just turn up the volume another notch if it's not loud enough.  That allows for far less aggressive limiter settings which are much less of a problem to get sounding transparent. 

Normalization should be thought of as a separate and far less potentially problematic modification than limiting.  For most tapers, I'd suggest simple straight normalization to something conservative like -1 dBFS, without trying to apply limiting.

Don't let the cure be as damaging as what you are attempting to correct.
Title: Re: Normalization question - best practices
Post by: voltronic on February 27, 2017, 08:05:18 PM
^ Gutbucket, while I'll agree with all of that in general terms, I will say it doesn't work for all situations.  I'm always recording music with a relatively wide dynamic range, and when there is loud applause between numbers close to the mics, it interferes with raising the level of the overall concert to the point where the softer sections are at a listenable level, even with the output level cranked.  The applause peaks leave little room for overall level raising through normalization, which is why I almost always need to apply limiting first to knock those areas down, which then allows me to normalize to the loudest musical peak.  In other words, normalization on its own in such a situation does nothing.

Now, to be fair I don't really use a straight hard limiter, which does introduce artifacts as you say - I start with default "hard" limit settings and then soften by ear to where it's still doing its job aggressively enough without audible artifacts.  I also try to never apply limiting to the music, and if I do it's on a very narrow stray percussive peak or something like that.  Maybe that seems like a cumbersome way to do it, but it works for me.
Title: Re: Normalization question - best practices
Post by: Gutbucket on February 27, 2017, 09:40:13 PM
That's totally reasonable. Applause louder than the music is a problem, especially for wide-dynamic stuff with quiet passages, and it needs to be brought down in level somehow.  It's less critical to reduce the level of the applause as transparently as the music, but it's still really distracting if I notice the limiting working... not uncommon when it's used overly aggressively or not set well.

It just distracts too much from the willing suspension of disbelief to hear the limiting working. In which case I'd rather keep my finger on the volume knob.
Title: Re: Normalization question - best practices
Post by: daspyknows on February 28, 2017, 07:13:18 PM
You could use the Volume Envelope to reduce the first 5 seconds and then normalize it.

I think most audio editing software can do that. I haven't had to do it in a while luckily.

you didn't go past zero did you?

I like where these guys are going, but I'll play devil's advocate and say to raise the levels of everything AFTER the blowout as high as you can before you normalize.

On an unrelated note, if anyone can suggest a "drunken jackass" filter for the guy right next to me who yelled "Wowie Zowie" into my mic all night, even after Dweezil told him they would be playing the setlist, that would be gravy
If you have access to enough valium to hand them out, those and a couple rounds of beers usually quiet folks down.

I had "That Guy" next to me at Experience Hendrix.  I was thinking that North Korea nerve agent used to whack the half brother would work pretty quickly.
Title: Re: Normalization question - best practices
Post by: morst on March 01, 2017, 03:50:13 AM
Frog darts, bro. Frog darts.

I had "That Guy" next to me at Experience Hendrix.  I was thinking that North Korea nerve agent used to whack the half brother would work pretty quickly.

When I have to reduce audience applause, I try to fix that with the envelope tool and export a cleaned up version, and send THAT to the mastering limiter.

I would not want to apply a limiter to sustained musical peaks, only brief (percussive or plosive) ones.
Title: Re: Normalization question - best practices
Post by: voltronic on March 01, 2017, 06:44:07 PM
It just distracts too much from the willing suspension of disbelief to hear the limiting working. In which case I'd rather keep my finger on the volume knob.

Totally with you there.  I'd rather have the occasional clip in a recording if I set the level too hot as opposed to hearing limiter pumping.

Unless I'm running 4 channels, I now always record safety tracks for anything large-ensemble, especially with omnis.  Anything to avoid limiting at the time of recording, as you're never getting that dynamic range back.