What is audio normalization?


I have a distinct memory of driving down Route 222, listening to an early episode of the SciFi Diner Podcast. I could barely hear it over the engine of my Jetta and the din of highway noise around me. When I would flip to The Instance, a podcast by Scott Johnson, I could hear the audio clearly. I knew there was a problem. Probably more than one in those early days. I knew my audio recording levels weren’t high enough. What I didn’t know at the time was that I could have fixed it by normalizing the audio.

When you normalize audio, typically in post-production, you change its overall volume by a fixed amount to reach a target level. Unlike compression, which changes the volume over time and by varying amounts, normalization changes only the file’s overall volume. Track dynamics remain unaffected.
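To make that distinction concrete, here is a minimal sketch of the fixed-gain idea. It assumes the audio is already loaded as a NumPy array of float samples in the -1.0 to 1.0 range; the function name is just illustrative, not from any particular editing tool:

```python
import numpy as np

def apply_gain(samples: np.ndarray, gain_db: float) -> np.ndarray:
    # Normalization boils down to this: one constant factor applied to
    # every sample. Because the gain never varies over time, the gap
    # between the loudest and quietest moments (the dynamics) survives.
    return samples * 10 ** (gain_db / 20)

# A compressor, by contrast, would compute a different gain for each
# moment of the file depending on how loud it is right then.
```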

One reason to normalize an audio file is that it was recorded too quietly. As in the story above, an audio file that is too quiet can cause problems for people on mowers or commuters. Another reason you might want to normalize your audio is to get matching volumes across multiple tracks. For example, one of my co-hosts for the Dune Saga Podcast records his own track on his end when we use Google Hangouts and then shares it with us via Dropbox. If our levels are not in sync, we can normalize them to get matching volumes, as the sketch below illustrates.
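A rough sketch of the multi-track case, again assuming NumPy float samples; the two stand-in tracks and the -3 dBFS target are placeholders, not values from my actual workflow:

```python
import numpy as np

def normalize_to_peak(samples: np.ndarray, target_db: float = -3.0) -> np.ndarray:
    # Scale the whole track by one fixed gain so its loudest sample
    # lands at the target level (in dBFS, where 0 is digital maximum).
    peak = np.max(np.abs(samples))
    if peak == 0:
        return samples  # an all-silent track has nothing to scale
    return samples * (10 ** (target_db / 20) / peak)

# Stand-ins for two hosts' tracks recorded at very different levels.
host_a = np.random.uniform(-0.9, 0.9, 48000)
host_b = np.random.uniform(-0.2, 0.2, 48000)
matched = [normalize_to_peak(t) for t in (host_a, host_b)]
```

Because both tracks are pushed to the same target, they come out at matching volumes without anyone re-recording.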

So normalization just changes the volume level; it’s not a replacement for compression. It can’t bring your highs down and your lows up.

There are different ways to measure volume, and the one you pick dictates how you normalize. Peak volume detection considers only the highest peaks of the waveform. This is optimal if you are trying to get the maximum volume possible. Another way is RMS volume detection, which considers the average volume of the file; when normalization uses this method, it works from the average rather than the peak. The human ear perceives loudness this way, so this method tends to give more natural results. Both measurements are sketched below.
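A minimal sketch of the two measurements, assuming a non-silent NumPy float track; the function names are mine, not from any audio library:

```python
import numpy as np

def peak_db(samples: np.ndarray) -> float:
    # Peak detection: only the single loudest sample counts.
    return 20 * np.log10(np.max(np.abs(samples)))

def rms_db(samples: np.ndarray) -> float:
    # RMS detection: the average energy of the whole file, which is
    # closer to how loud the ear actually perceives it.
    return 20 * np.log10(np.sqrt(np.mean(samples ** 2)))

def normalize(samples: np.ndarray, target_db: float, measure) -> np.ndarray:
    # Either way, the fix is the same single fixed gain; only the
    # measurement of "how loud is this file now" differs.
    return samples * 10 ** ((target_db - measure(samples)) / 20)
```

Note that normalizing to an RMS target can push individual peaks past 0 dBFS and clip them, which is why RMS-based tools usually pair the gain with a limiter.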