Audio Normalization: Should You Normalize Your Tracks?
Audio normalization is a concept that comes up fairly often for new producers.
It’s a music production process that’s been around since the earliest days of digital audio, but is it still relevant?
It’s not always easy to tell whether to apply normalization or to avoid it completely—and getting it wrong can cause issues in your mix.
But what does audio normalization do? And how does it work? In this article I’ll break down what audio normalization is, when to use it and when to steer clear.
Let’s get started.
What is audio normalization?
Audio normalization is a process that increases the level of a recording by a constant amount so that it reaches a target—or norm. Normalization applies the same level increase to the entire duration of an audio file.
Normalization is typically used to scale the level of a track or file to just within its available maximum.
If that sounds complicated, all it means is that audio normalization turns up the volume of a file as far as it can go based on the loudest point. For example, if a file’s loudest peak sits at -6 dBFS and the target is 0 dBFS, normalization adds 6 dB of gain to the entire file.
Types of audio normalization
There are two types of audio normalization:
Peak normalization looks at the highest peak in the audio file and adjusts all of the audio based on that peak. You can use this to make the volume as loud as possible, but it doesn’t consider what the final track will sound like to the human ear.
Loudness normalization takes into account the frequency content of the audio, adjusting the volume based on the perceived loudness. It creates the most natural and consistent results.
A common method of loudness normalization is RMS volume normalization. Instead of looking at a single peak, this measures the average (root mean square) level of the audio file and adjusts the volume based on that average.
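If you’re curious what this looks like under the hood, here’s a minimal sketch of both approaches in Python, assuming the audio has been loaded as a NumPy array of floating-point samples between -1.0 and 1.0. The function names and default targets are just illustrations, not a standard API:

```python
import numpy as np

def peak_normalize(samples: np.ndarray, target_db: float = 0.0) -> np.ndarray:
    """Scale the whole file by one constant so its highest peak hits target_db (dBFS).
    0.0 dBFS is full scale; see the warning about 0 dB targets later in the article."""
    peak = np.max(np.abs(samples))
    if peak == 0:
        return samples  # silent file, nothing to scale
    return samples * (10 ** (target_db / 20) / peak)

def rms_normalize(samples: np.ndarray, target_db: float = -18.0) -> np.ndarray:
    """Scale the whole file so its average (RMS) level hits target_db (dBFS).
    Note: boosting to an RMS target can push individual peaks past full scale."""
    rms = np.sqrt(np.mean(samples ** 2))
    if rms == 0:
        return samples
    return samples * (10 ** (target_db / 20) / rms)
```

Notice that both versions multiply every sample by the same constant—the only difference is which measurement decides what that constant is.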
When to use audio normalization
Audio normalization is a bit old-fashioned by modern standards. With so many less invasive ways to increase a track’s gain, why would you need to know how to normalize audio?
In the early days of digital audio many components had limited performance when it came to dynamic range and signal-to-noise ratio.
Normalizing audio helped get the best results from primitive AD/DA converters. It’s still a common feature on hardware samplers, where it helps even out the volume of the different samples in memory.
It’s handy in this situation because the dynamic range and signal-to-noise ratio remain the same as they were before.
You can use your sampler’s normalize function where needed as you build patches and presets.
Why not to use audio normalization
Normalization isn’t commonly used in many music production situations outside of sampling.
It might seem like a convenient way to bring tracks up to a good volume, but there are several reasons why other methods are a better choice.
1. Normalization is often destructive
Digital audio workstations, or DAWs, revolutionized music production with their non-destructive workflow.
What does that mean? Think of a strip of reel-to-reel tape—to perform an edit you need to physically slice it with a razor!
After that you can’t just go back to the original. But in your DAW you could simply drag the corners of the region out to restore the file.
Unfortunately there are some operations in the digital domain that are still technically destructive.
Any time you create a new audio file, you commit to the changes you make. Audio normalization sometimes requires you to create a new version of the file with the gain change applied.
If you do this and discard the original, you’re stuck with the normalized version!
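To make this concrete, here’s a hypothetical example using the soundfile library and the peak_normalize sketch from earlier (the file name is made up):

```python
import soundfile as sf

data, sample_rate = sf.read("vocal_take.wav")

# Writing the normalized audio back to the same file commits the change:
# the original samples are gone unless you kept a backup copy.
sf.write("vocal_take.wav", peak_normalize(data), sample_rate)
```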
Before you reach for the normalize function, make sure there isn’t another method better suited to the job.
2. Other methods are better
Modern DAWs have many different options for changing the level of audio signals.
Since normalization is a constant gain change, it works the same way as many other types of level adjustments.
There are few situations where automation, clip gain or a utility plugin can’t fix your volume issues.
These operations are non-destructive and more flexible than normalizing the volume of entire tracks.
3. Normalization can create inter-sample peaks
Many new producers are looking for the easiest way to make their songs loud. It’s a common mistake to think that audio normalization is the tool for that job.
When it comes to raising the level of an entire track, normalizing is among the worst options. In fact, normalizing an entire track to 0 dB is a recipe for disaster.
The normalize function finds the highest peak in the entire waveform and raises it to the target. With this peak touching the 0 dB maximum, things get unpredictable.
When digital audio gets converted to analog to play through your speakers, the filters that reconstruct the signal smooth out the curve between individual samples in the file.
Sometimes the arc between two points close to the ceiling can exceed the maximum! The result is clipping from inter-sample peaks. It comes out as distracting harshness and distortion in your music.
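One way to catch this is to oversample the audio before measuring it, which roughly mimics what the reconstruction filter does. Here’s a sketch using SciPy, under the same assumptions as the earlier examples:

```python
import numpy as np
from scipy.signal import resample_poly

def estimate_true_peak_db(samples: np.ndarray, oversample: int = 4) -> float:
    """Upsample to approximate the reconstructed analog curve, then measure its peak."""
    upsampled = resample_poly(samples, up=oversample, down=1)
    return 20 * np.log10(np.max(np.abs(upsampled)))
```

A file whose sample peaks sit exactly at 0 dBFS can easily show a true peak above 0 dB with this measurement, which is why mastering engineers leave a small ceiling below full scale.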
How to normalize audio
Most DAWs include tools for normalizing volume. But there’s a better choice: gain staging.
Gain staging is the practice of properly controlling the levels inside your DAW. It means checking the volume of each element you record and making sure not to exceed a healthy level throughout your mix.
The general rule is to keep your tracks’ peaks around -9 to -10 dBFS and the body of your waveform at around -18 dBFS.
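If you want to check a raw audio file against those rough targets yourself, here’s a quick sketch—again assuming float samples between -1.0 and 1.0, and treating the numbers as guidelines rather than hard limits:

```python
import numpy as np

def gain_staging_report(samples: np.ndarray) -> str:
    """Report a track's sample peak and RMS ('body') level in dBFS."""
    peak_db = 20 * np.log10(np.max(np.abs(samples)))
    rms_db = 20 * np.log10(np.sqrt(np.mean(samples ** 2)))
    return (f"peak: {peak_db:.1f} dBFS (aim for roughly -9 to -10)\n"
            f"body (RMS): {rms_db:.1f} dBFS (aim for roughly -18)")
```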
Gain staging is most important at your DAW session’s master bus. If you’ve left good headroom throughout your mix, you’ll have plenty of space at the master fader to ensure nothing will clip.
With all this extra room left over, turning your tracks up using the fader or any of the other methods I’ve mentioned should offer plenty of level control without resorting to audio normalization.
Mastering your mix
If you follow these guidelines for gain staging you might be surprised to hear how quiet your finished bounce seems in comparison to tracks on your streaming platform of choice.
It’s because finished tracks need mastering to get the right levels for consumer playback. Mastering brings up the overall loudness of a finished mix to exactly the right volume—no inter-sample peaks, no wasted headroom.
Unlike normalization, mastering turns up the volume dynamically so that even quiet passages can be heard clearly.
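As a toy illustration of that difference (this is nowhere near a real mastering limiter, which smooths its gain changes over time), compare a constant gain change with gain plus a hard ceiling:

```python
import numpy as np

def toy_loudness_boost(samples: np.ndarray, makeup_db: float = 6.0,
                       ceiling_db: float = -1.0) -> np.ndarray:
    """Add gain, then clamp any peaks that would exceed the ceiling.
    Unlike normalization, the effective gain is no longer constant:
    loud peaks get pulled down while quiet passages keep the full boost."""
    gained = samples * 10 ** (makeup_db / 20)
    ceiling = 10 ** (ceiling_db / 20)
    return np.clip(gained, -ceiling, ceiling)
```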
It takes careful processing and lots of experience to get a good master.
The easiest way to do it right is to hire a professional or try AI-powered mastering online.
Here’s what an unmastered track sounds like compared to a LANDR Master.
Get your levels right
Normalizing sound has its place in music production, but you need to be careful where you use it.
Headroom, gain staging and signal level all influence each other. Understanding that relationship is how you get the most out of your mix and master.
If you’ve made it through this article you’ll have a great starting point for managing your levels and making the right decisions about audio normalization. Want to learn even more? Join LANDR today to take your music to the next level.