Taperssection.com
Gear / Technical Help => Ask The Tapers => Topic started by: jeromejello on March 16, 2005, 05:11:40 PM
-
so i am tracking out this mmw show i caught on my jb3 the other day and i was thinking about raising the volume a tad (normalization, right?). my question is: should i normalize the full wav of the whole set, or can i normalize each chunk that i tracked out? is there a difference in quality, or does it cause any additional defects in the sound this way?
thanks.
-
You're gonna want to normalize the full file before splitting it...
is there a difference in quality or does it cause any additional defects in the sound this way.
Some would argue that any time you do digital processing to a file there would be artifacts or something, but for the most part you're okay with this. If you're only bumping it up 0.5 dB I wouldn't bother.
-
Always normalize before tracking. Otherwise, some tracks will be amplified more than others and you will get jump discontinuities between tracks.
-
If you're using Audacity, I would recommend "Amplify" if you're trying to boost levels.
-
thanks for the info.. i kinda thought it would be better before. i chose not to do anything to it. if you are interested in checking it out, here's the info:
mmw 3-12-05 house of blues, orlando, fl
hop on if you can.
not sure about my upload capacity, but i will leave the computer on until i get a couple of other seeders. feel free to add any comments - it's my first time taping/seeding.
http://bt.etree.org/details.php?id=12476
enjoy.
-
Always normalize before tracking. Otherwise, some tracks will be amplified more than others and you will get jump discontinuities between tracks.
qft
-
im not a big fan of normalizing. i prefer to adjust gain, or amplify as stated earlier.
as for tracking and normalization. if you normalize the entire set or show (if one set) after you track and PRIOR to splitting the tracks then you should be ok.
ymmv
-
im not a big fan of normalizing. i prefer to adjust gain, or amplify as stated earlier.
as for tracking and normalization. if you normalize the entire set or show (if one set) after you track and PRIOR to splitting the tracks then you should be ok.
ymmv
yes, it sounds like there may be a little bit a confusion in this thread.
There is gain, which will boost every single second by a constant number of dB.
Normalizing is the process of 'smoothing out' the sound. ie, it will raise the real quiet parts and soften the real loud parts.
-
im not a big fan of normalizing. i prefer to adjust gain, or amplify as stated earlier.
as for tracking and normalization. if you normalize the entire set or show (if one set) after you track and PRIOR to splitting the tracks then you should be ok.
ymmv
yes, it sounds like there may be a little bit a confusion in this thread.
There is gain, which will boost every single second by a constant number of dB.
Normalizing is the process of 'smoothing out' the sound. ie, it will raise the real quiet parts and soften the real loud parts.
No, normalizing is amplifying the whole thing by an amount that will cause the largest sample to barely hit either the top rail or the bottom rail. What you're talking about is compression.
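In this peak-normalization sense, the operation is just one constant gain applied to every sample, chosen so the largest absolute sample lands at the target level. A minimal NumPy sketch (hypothetical code, not any particular editor's implementation):

```python
import numpy as np

def peak_normalize(samples, target_dbfs=0.0):
    """Scale the whole signal by ONE constant gain so the largest
    absolute sample lands at target_dbfs (0 dBFS = full scale)."""
    peak = np.max(np.abs(samples))
    if peak == 0:
        return samples  # pure silence: nothing to scale
    target = 10 ** (target_dbfs / 20)  # e.g. -0.5 dB -> ~0.944
    return samples * (target / peak)

# A quiet signal whose loudest sample is 0.25 of full scale:
x = np.array([0.1, -0.25, 0.2, -0.05])
y = peak_normalize(x)
# every sample gets the same 4x gain, so relative dynamics are untouched
```

Because the gain factor is identical for every sample, the quiet-to-loud relationships inside the file are unchanged; only the overall level moves.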
-
I thought I was talking about this:
normalizing simply means to adjust the peak (top and bottom) volume of a selection to a known value.
Normalizing increases volume 'across the board' too? ???
Normalizing has two meanings:
Normalizing, as far as Sound Forge or other digital audio editors are concerned, simply means to adjust the peak volume of a selection to a known value. Generally the recommended maximum is -0.5 dB (I think that's 94.49% or something). Doing this is a no-brainer.
Normalizing a set of tunes to be burned to CD, however, means something slightly different. Here it implies that you're adjusting the average volume (ie, 'across the board') of those songs so that they will all sound about equal. Doing this is an art (whereas we can consider mastering a fine art).
-- Dragon
http://www.homerecording.com/normalizing.html
It sounds like you are talking about the latter case, no?
edit: yes, I think my original term of 'smoothing out' can be interpreted as compression but I think I'm talking about the first type of normalizing. I'm sorry for the confusion. I should have been clearer.
edit 2: hmmm. although here's another definition which seems more in line with what you are saying: Normalizing increases the level of the entire sound file so that the loudest part of the sound is at the maximum playback level before distortion; it then increases the rest of the sound proportionally.
-
Are we referring to the peak value or the RMS value..
It seems kinda crazy to me to normalize the entire file (set) as you're not getting accurate values.. Some songs are written to be softer than others.
say you had a two-track acoustic set as the opening, those tracks are gonna be much lower in RMS value than the amplified tracks.
I open the master in SF7
track: (to give regions to each track)
master: EQ, fades, normalizing, etc..
you can double click on the track/region and SF will highlight just that track.
then I can normalize just that track. (I use -16 dB on most recordings and apply dynamic compression if clipping occurs.) But depending on how it sounds it may go higher or lower.
resample: using the highest interpolation setting and the anti-alias filter.
extract the regions : (export separate tracks)
and wammo you have one set ready to flac
YMMV
Nick
-
Are we referring to the peak value or the RMS value..
If necessary - which is rare - I'll amplify the entire file set so the single most peak value approaches 0.
It seems kinda crazy to me to normalize the entire file (set) as you're not getting accurate values.. Some songs are written to be softer than others. say you had a two-track acoustic set as the opening, those tracks are gonna be much lower in RMS value than the amplified tracks.
On the contrary, normalizing the complete fileset is the only way to ensure accurate relative values. Amplifying on a track-by-track basis is inaccurate and destroys the original dynamic range of the recording.
-
Are we referring to the peak value or the RMS value..
If necessary - which is rare - I'll amplify the entire file set so the single most peak value approaches 0.
It seems kinda crazy to me to normalize the entire file (set) as you're not getting accurate values.. Some songs are written to be softer than others. say you had a two-track acoustic set as the opening, those tracks are gonna be much lower in RMS value than the amplified tracks.
On the contrary, normalizing the complete fileset is the only way to ensure accurate relative values. Amplifying on a track-by-track basis is inaccurate and destroys the original dynamic range of the recording.
bingo, this is the info i was looking for (as a confirmation of what i thought). i would rather not do anything to the wav if possible (and definitely nothing that i would seed - too many people get picky about it)
-
If necessary - which is rare - I'll amplify the entire file set so the single most peak value approaches 0
On the contrary, normalizing the complete fileset is the only way to ensure accurate relative values. Amplifying on a track-by-track basis is inaccurate and destroys the original dynamic range of the recording.
Why not just raise the amplification or volume of the entire recording? seems as it would do the same thing.
I think as long as you don't raise and lower each track by huge amounts the dynamic range will still be intact. (Such as: don't normalize one track to -20 and another to -5.) I usually don't have but a dB or two of difference in each track, which is about where it was to start with.
Seems to my ears that normalizing the entire "master wav" is "sorta" like selecting the normalize-all-files option in your burning program: you're gonna raise the loud parts, but the softer ("acoustic" if ya will) parts of the master are still prolly gonna be below a comfortable listening volume, and you're missing the fine detail of the recording (such as the plucking of the strings).
I use a general rule of thumb that you should never have to turn the volume knob on your stereo over half way, or you simply need more power - or in this case more dB. Seems like the same argument here.
Most of the AUD tapes I hear are VERY low and you need to crank your stereo to get a decent volume level.
After watching a few tapers in the last year or so at shows I have taped, I find that most folks seem to be afraid to run their pre a bit hot. For instance, at the last show I was taping, the guy next to me had his UA-5 set so low that the clip light never came on the entire show.
Which would lead me to believe their recordings will suffer due to loss of the original dynamic range at the show that night.
For instance, one of the most recent tapes I pulled came in at -26 dB. That's crazy low.
That means you need to crank your stereo to compensate for the low recording volume.
Still confused
Nick
-
Why not just raise the Amplification or Volume of the entire recording
Sorry for the confusion, that's what I meant by normalizing the entire fileset - perform the operation against the master recording all at once, not individual tracks or files.
I think as long as you don't raise and lower each track by huge amounts the dynamic range will still be intact. (Such as: don't normalize one track to -20 and another to -5.) I usually don't have but a dB or two of difference in each track, which is about where it was to start with.
I think your comments are a little mixed up: in order to NOT raise or lower each track by huge amounts, we must utilize different RMS values for each track in order to maintain relative dynamic range, i.e. do exactly as you suggest we should not: normalize one so its RMS is -20dB and another to -5dB.
Let's take an example: We have two tracks, [1] with an RMS of -21dB, [2] with RMS of -9dB. That's an RMS difference of 12dB across the two files. Now, let's normalize each track independently to the same RMS value, -8dB. That entails raising the RMS on [1] 13dB, and raising the RMS on [2] only 1dB. So now, both tracks have an RMS of -8dB. The RMS difference across the two files is now 0dB. Dynamic range: gone!
In this case, in order to preserve relative dynamic range - but still increase amplitude - we would want to normalize each track to very different RMS values, say increase [1] so its RMS value becomes -16dB, and [2] so its RMS value becomes -8dB. This brings the amplitude of track [1] up to a better level for listening (without having to adjust volume during playback). The dynamic range is reduced, but not removed. However, I believe there's a better way to "amp up" the recording but still maintain relative dynamic range...
I use a general rule of thumb that you should never have to turn your volume knob on your stereo over Half way or you simply need more power or in this case more db .. Seems like the same argument here.
The way to accomplish this, IMO, is to apply compression to the entire master WAV and then raise the dB across the entire master WAV. That way, you maintain the relative dynamic range, if not the actual dynamic range proper.
-
Brian's right about this. Normalizing amplifies everything by the same gain factor and sets the loudest sample of the original recording to 0 dB signal level (the maximum possible). If you want to bring up the quiet parts so that they are not so much lower in volume than the loud parts, then you use compression to intentionally reduce the dynamic range in the recording.
In my opinion, this is the procedure to follow if you are going to use compression:
First convert it to a floating point representation with the highest numerical resolution possible (usually 32-bit floating point), then normalize it, then compress it and re-normalize it. You will want to normalize it first because you want the loud parts to already be close to clipping. That way, when you compress it, the loud parts will not be amplified as much as the quiet parts. If you compress first and nothing is initially close enough to clipping, everything will be amplified equally (which is the same as just applying some amplification). By normalizing first, the loud parts are close enough to clipping that they will not be amplified as much as the quiet parts. After compressing, the loud parts will actually be above 0 dB and you will need to re-normalize the recording so that when you turn it back to a fixed point representation (like 16 or 24 bit PCM), it will not be clipped.
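As a rough sketch of that order of operations - with a toy static compressor standing in for a real one (no attack/release modeling; the threshold and ratio are made-up values) - the float pipeline might look like:

```python
import numpy as np

def normalize(x):
    """Peak-normalize to full scale (0 dBFS)."""
    return x / np.max(np.abs(x))

def compress(x, threshold=0.5, ratio=4.0):
    """Toy static compressor: above the threshold, sample magnitudes
    grow at 1/ratio of the input rate (illustrative only)."""
    mag = np.abs(x)
    out = mag.copy()
    over = mag > threshold
    out[over] = threshold + (mag[over] - threshold) / ratio
    return np.sign(x) * out

# 32-bit float working copy of a 16-bit recording:
pcm16 = np.array([3000, -12000, 24000, -6000], dtype=np.int16)
x = pcm16.astype(np.float32) / 32768.0

y = normalize(x)  # loud parts now sit near clipping
y = compress(y)   # so they get less gain than the quiet parts
y = normalize(y)  # bring the result back under full scale
pcm_out = np.clip(np.round(y * 32767), -32768, 32767).astype(np.int16)
```

After the round trip the loud-to-quiet ratio has shrunk (dynamic range intentionally reduced), and the final re-normalize guarantees nothing clips when converting back to 16-bit PCM.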
-
thanks for the clarification Guys..
Nick
-
the problem comes because different software programs call different processes "normalizing." (marc and i had a long discussion about this once)...
imo - changing the gain is fine, but normalizing and/or compression are not...
changing the gain for the whole recording is simply increasing the volume, so the loudest spot is close to 0 dB. this makes the loudest part as loud as possible without clipping, but leaves the dynamic range intact. replaygain does the same thing during playback (specifically album gain) if you do not want to alter your actual music files.
in my limited experience, if you have correctly adjusted your levels, it will not be necessary to change the gain...
normalization, "volume," etc. are all no-nos because they alter dynamic range by amplifying certain parts more than others. this isn't as big of a deal with acoustic tracks in the middle of the set - where it really screws stuff up is within a track, where there are quiet parts and loud parts.
-
Normalization does not alter the dynamic range of the .wav you are working on. However, if applied track by track, it does alter the dynamic range of the overall show since some tracks will be amplified more than others. If normalization is applied to each track separately, each track is amplified to the point where it barely hits 0 dB. Since the amount of amplification required to do so varies from track to track, the quiet tracks are amplified more than the loud tracks and you have reduced the dynamic range of the show. However, if you normalize the whole show, all parts are amplified equally and the dynamic range remains intact. This is equivalent to increasing the gain by exactly the amount that results in an audio file that barely hits 0 dB at least at one point. The loud tracks are still the loudest and the quiet tracks are still the quietest.
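The per-track vs. whole-show difference can be shown with two hypothetical "tracks" from one recording (made-up sample values):

```python
import numpy as np

quiet = np.array([0.05, -0.2, 0.1])  # acoustic opener, peak 0.2
loud = np.array([0.3, -0.8, 0.5])    # electric set, peak 0.8

# Per-track normalization: each track gets its own gain,
# so both now peak at 1.0 and the 12 dB gap between them is gone.
per_track = [quiet / 0.2, loud / 0.8]

# Whole-show normalization: ONE gain (1.25x) for everything,
# so the quiet set stays 12 dB below the loud set.
show = np.concatenate([quiet, loud])
whole = show / np.max(np.abs(show))
```

With the single gain, the loud set barely hits full scale and the acoustic set keeps its original relative level, which is the behavior being described for normalizing the master before tracking.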
-
This is equivalent to increasing the gain by exactly the amount that results in an audio file that barely hits 0 dB at least at one point.
in some software programs, yes it is. in some software programs it is NOT the same thing, but more like compression (smoothing out the dynamic range, and then globally boosting the signal).
my point is that you should be familiar with the EXACT processes your software is doing when processing the audio files..
-
What program does it like you're talking about?
-
What program does it like you're talking about?
if i remember correctly, wavelab does it like this, and so did the last version of sound forge i tried (version 5, maybe)
-
this thread is exhibit A for why I cringe every time I see a .txt file that mentions that the taper performed some sort of DSP.... many tapers have no idea what they are doing to the files but they do it anyway ::)
-
In wavelab, it only does what you are talking about if you are batch processing a bunch of files (which may all be individually amplified by varying amounts), and no version of sound forge that I've ever used does it the way you are talking about. (I've used the normalize function in version 5.)
Really, the problem is when you normalize multiple tracks, they probably are not all amplified by the same amount. If they were all part of the same recording, then the quiet tracks are amplified more than the loud tracks.
You definitely don't want to use the normalize function of a burner program because it will individually normalize the tracks. For instance, Nero gives you the option to normalize during the burning process. The only time that is useful is if the tracks are individual recordings. If a single recording is tracked out, then you don't want to separately normalize the tracks. You want to normalize the whole recording, then track it out.
-
this thread is exhibit A for why I cringe every time I see a .txt file that mentions that the taper performed some sort of DSP.... many tapers have no idea what they are doing to the files but they do it anyway ::)
agreed.
i don't mind adjusting the gain, but other than that, the resample/dither step, and tracking, i do no processing to my recordings...
-
To add some basic Q's to this very interesting thread (I'm just starting to try my hand at editing some of my stuff - acoustic, but with WIDE dynamic range - so this is really useful info!)
First convert it to a floating point representation with the highest numerical resolution possible (usually 32-bit floating point)
Can you explain this for complete beginners? What is "floating point representation", why do we need it and what does it do? Tx. :)
then normalize it, then compress it and re-normalize it. You will want to normalize it first because you want the loud parts to already be close to clipping. That way, when you compress it, the loud parts will not be amplified as much as the quiet parts. If you compress first and nothing is initially close enough to clipping, everything will be amplified equally (which is the same as just applying some amplification). By normalizing first, the loud parts are close enough to clipping that they will not be amplified as much as the quiet parts. After compressing, the loud parts will actually be above 0 dB and you will need to re-normalize the recording so that when you turn it back to a fixed point representation (like 16 or 24 bit PCM), it will not be clipped.
Interesting and very helpful. My question here is - where are the compression features in Soundforge (which is what i'm using right now - I have Wavelab as well, but haven't had a chance to try it yet)? What sort of settings does one use? So far most of my editing has been done simply on a "that sounds better to my ears" basis, but I fear that I'm actually making a lot of elementary mistakes by having no understanding at all of the basic "technique" of how it works. All comments helpful! Since it's opera, my biggest problem is that the dynamic range is very wide, and since nobody is monitoring levels while I sing there are usually compromises at one end or the other. While I don't like the sound that too much compression produces, a small amount will probably result in recordings which are closer to what we hear on commercial classical singing recordings, which might not be such a bad thing when I'm making demos etc...
Anyway, fascinating reading. Thanks for any further explanations!
-
Wav files are stored in a format known as linear PCM. Essentially, they use a signed integer representation of the samples. For instance, 16 bit audio has a maximum value of 32767 and a minimum value of -32768. Floating point representations are stored not as simple integers, but as a sign, an exponent, and a mantissa: some of the bits are used to store the exponent and the rest are used to store the mantissa. You can scale floating point numbers over a much larger range without a significant loss in precision than you can integers.
Example:
An integer representation of one 16 bit audio sample might be 11643. In a floating point representation, I could write that as:
1.1643 x 10^4
If I were to drop the level by 40 dB (that's division by 100), the integer would become 116, thereby losing the .43 fractional part of the number. On the other hand, the floating point representation would not lose that fractional part:
1.1643 x 10^2
If you wanted to increase the gain by 40 dB, then the integer representation would clip at 32767 (you really want 1164300, but there are not enough bits to represent it in 16 bits), but floating point would be 1.1643 x 10^6.
So, you don't see a reduction in dynamic range associated with gain adjustment.
Now, I'm not being careful with exactly how floating point numbers are represented by the bits used for each sample, but you get the idea...
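The round trip can be tried directly. A small sketch of the same 11643 example, using NumPy's fixed-width types to stand in for 16-bit PCM and 32-bit float samples:

```python
import numpy as np

sample = 11643

# Integer path: drop 40 dB (divide by 100), then raise 40 dB (x100).
down_int = np.int16(sample // 100)  # 116 -- the .43 is gone for good
back_int = int(down_int) * 100      # 11600, not 11643

# Float path: the fractional part survives the same round trip.
down_f = np.float32(sample) / 100.0  # 116.43
back_f = down_f * 100.0              # ~11643 again
```

This is why editors convert to 32-bit float before chaining gain changes: each integer truncation throws away low-level detail permanently, while float keeps it through the intermediate steps.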
For way more than you probably want to know about floating point numbers, here is a link that explains it better than I can:
http://stevehollasch.com/cgindex/coding/ieeefloat.html (http://stevehollasch.com/cgindex/coding/ieeefloat.html)
(Thanks to Steve Hollasch for the information at this link!)
-
My question here is - where are the compression features in Soundforge
wave hammer
-
BWAH!
So, are you saying.... there are none in SF, or that the available ones suck? Enlightenment for those of us of the n00b persuasion... (I'm lol, though)
Actually, is there any appropriate place roun' these here parts to start a "mixing/editing tips and tricks for newbies" thread? I'd be really interested in learning more from you folks who know a thing or two :) . (Or maybe there's some stuff elsewhere online to read that people can recommend?) I realise it's all highly subjective and something you just have to learn by doing, but I suspect it's more than simply opening the eq and hoping for the best.... ;)
-
Two things- in case this thread isn't already dense and confusing enough. Here's my take on a couple of things I don't think were addressed.
One- using any kind of compression (not data compression like MP3 or FLAC but sound compression) will alter the sound of the recording. This is what FM stations use, and why when you look at an FM wav file it seems to be all the same level. I don't recommend doing this to a recording. If you do use it, consider saving an uncompressed version in case later on in life you decide you don't like compression. Compressors do have their use in live music production, and that is for the soundguy to add as he sees fit.
Two- If you have more than one set or file, normalize each set or file by the same amount, assuming you used the same record level for each file. Otherwise you may get the outcome that when you pop in the CD for set 2, or worse, when you hit the first track of set 2 on a CD, the volume will change significantly. It helps if your processing software tells you how much it will adjust the levels before doing it- check each file and figure out which one will have the LEAST boost, and use that amount for all of the files.
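That "least boost" rule can be sketched like this (hypothetical peak values for two set files; `headroom_db` is an illustrative helper):

```python
import numpy as np

def headroom_db(x):
    """dB of boost available before the peak hits full scale."""
    return -20 * np.log10(np.max(np.abs(x)))

# Two sets from the same master, recorded at the same level:
set1 = np.array([0.1, -0.4, 0.3])   # peak 0.4 -> ~7.96 dB headroom
set2 = np.array([0.2, -0.5, 0.25])  # peak 0.5 -> ~6.02 dB headroom

# Use the SMALLEST headroom for BOTH files so their relative levels
# (and the set-1-to-set-2 transition) are preserved:
gain_db = min(headroom_db(set1), headroom_db(set2))
gain = 10 ** (gain_db / 20)  # 2x here, limited by set2's 0.5 peak
out1, out2 = set1 * gain, set2 * gain
```

Only the file with the least headroom ends up at full scale; the other stays proportionally lower, so there's no volume jump between discs or sets.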
It is the soundguy's job to bring up the volume on the quieter songs and make any changes in volume between sets, IMHO, not the post processor's.
Here's a thought, too. While you can't add information by increasing the volume in post production, perhaps the playback algorithm (20 bit processing on a 16 bit wave, for example) will benefit from using larger numbers? It seems like it will be able to interpolate more precisely if there are more values available between samples. Or (less likely to me) will it remain the same or (least likely) get worse?