Remastering Tips – Magic Man Part 3

Now we’re going to get into more dangerous waters. Bringing up the subject of dynamics compression at a dinner party of audio engineers is like bringing up Mac vs. PC with computer geeks, or XBox vs. Playstation with console gamers.

As I said before, the argument stems from over-use. In my opinion, compression is always needed on a mix, unless it’s incredibly well balanced to begin with, contains a good amount of compression on the individual track recordings, or came from all-digital source instruments. Even then, you’ll probably want to compress somewhat, unless you know your target audience is going to be sitting in a dead silent room, on a couch in front of an expensive stereo system with a nice glass of scotch in hand, quietly focused entirely on listening to the music. And who (besides me) does that?

Ok, so now I’ll take a deep breath, and try to describe what a compressor does. I find that all the technical explanation in the world doesn’t help a lot with using compression, but at least if you know the idea, it can help you figure out which settings to mess with if you’re not liking what you hear. Compression basically squeezes out the volume differences in a recording. Think of it like a robot hand on the volume knob that can quickly turn it down when the sound gets too loud, and back up again when it drops. I’ve always liked this analogy, so I’m going to stick with it.

There are two main settings on every compressor: threshold and ratio. The threshold is the volume, in dB, at which the robot hand should start paying attention. Everything below that level remains unaffected. The ratio tells the robot how hard to twist the volume knob in response to signals above the threshold. As implied, it’s a ratio of input to output, so if your ratio is, say, 2:1, the compressor turns things down enough that the output only exceeds the threshold half as much as the input did. Expressed another way, if a signal goes 2 dB over the threshold, the volume gets turned down to bring it to only 1 dB over the threshold. The higher you set this ratio, the more “squashed” your output will be. In fact, another type of dynamics control called a limiter is really just a compressor with an infinity:1 ratio: no matter how high the input signal gets, it’s always turned down enough to not exceed the threshold.
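For the code-minded, the threshold/ratio math can be sketched in a few lines of Python. This is just an illustration of the static gain curve (the function name and the numbers are made up for the example, not from any real plugin):

```python
import numpy as np

def compress_db(level_db, threshold_db=-20.0, ratio=2.0):
    """Static compression curve: input level (dB) -> output level (dB).

    Below the threshold the signal passes through untouched; above it,
    every `ratio` dB of input yields only 1 dB of output.
    """
    over = level_db - threshold_db
    return np.where(over > 0, threshold_db + over / ratio, level_db)

# A signal 2 dB over a -20 dB threshold at 2:1 comes out only 1 dB over it:
print(compress_db(-18.0))             # -19.0
# A limiter is just a compressor with an (effectively) infinite ratio:
print(compress_db(-10.0, ratio=1e9))  # ~-20.0, pinned at the threshold
```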

Other features of a compressor…

Attack and release times: Briefly, these control how quickly the volume knob gets turned in response to volume changes.

Peak vs. RMS sensing: This determines whether the robot reacts to individual spikes in the waveform, or to a root-mean-square average of the signal that is more indicative of “power”. The latter is useful if your track has a lot of spikes in it, and you find the compressor overreacting to all the attack transients. You’ll know it when you hear it, trust me. I usually use RMS sensing when remastering, because RMS more closely aligns with human perception of volume, so the result is more natural-sounding, and something that’s already been mastered doesn’t usually have errant transient peaks that need fixing.

Make-up gain: What we’re talking about here with compressors is technically referred to as “downward compression”: we’re only ever correcting the volume downwards in response to higher input signals. (Upward compression is a different thing, done by an expander, and that’s a whole separate discussion.) Since we’re turning things down all the time, make-up gain is like a second volume knob that stays fixed at a certain increased amount as overall compensation for the volume reduction the compressor is doing.

Look-ahead: Our robot quickly reacts to the input signal as it comes, making all these volume adjustments for us. But it can only deal with signals once they’ve arrived. Real-world hardware compressors must work like this, because they can’t see into the future. There are tricks in hardware that can overcome this, which are a little like live network television broadcast censors: they can bleep out all the ‘fucks’ as they come in, but the entire broadcast ends up being delayed a few seconds to allow it. In digital editing of an already recorded track, though, we know the future, so we can let the software compressor in on it. Then it can react right down at the sample level, instead of letting transient peaks through because it couldn’t react fast enough. ’nuff said.
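If it helps to see the peak-vs-RMS distinction concretely, here’s a tiny Python sketch (the signal is a made-up example: a quiet track with one sharp transient):

```python
import numpy as np

def peak_level(x):
    """Highest instantaneous sample magnitude."""
    return np.max(np.abs(x))

def rms_level(x):
    """Root-mean-square level: closer to perceived 'power'."""
    return np.sqrt(np.mean(x ** 2))

# Mostly quiet signal with a single spike in the middle.
sig = np.full(1000, 0.05)
sig[500] = 0.9

print(peak_level(sig))  # 0.9 — a peak detector reacts hard to the one spike
print(rms_level(sig))   # ~0.058 — RMS barely notices it
```

A peak-sensing compressor would clamp down on the whole passage because of that one transient; an RMS-sensing one sees a level barely above the background and leaves things alone.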

Whew. Ok. So let’s do this. Here again is the waveform we’re working with:

Capture 8

At this point, I will state a compressor preference. Sony’s Wave Hammer is like magic for me, and I’ve been using it since somewhere around 2001. In fact, it is pretty much the only reason I keep Sound Forge installed at all times. It has some super-secret-sauce features beyond what a regular compressor has, and a healthy set of presets for dealing with individual instrument tracks, and mastering. I like to fire the thing up, and start with the “Smooth Compression” preset. Here’s what it looks like in action:

Capture 9

This is a capture of it while previewing. I just wanted to point out that the red bar on the right side shows you how much the compressor is actually clamping down on the input signal. Most software compressors that let you preview include a display like this to show you in realtime what they’re actually doing.

So this is a good starting point, but now there’s a bit of tweaking to do. This involves a lot of listening, making slight adjustments, listening again, etc. As I said before, I like RMS sensing better for this kind of thing (Scan Mode: RMS in this case), and I found that the threshold and ratio were a little too aggressive, so I ended up adjusting them. I also checked “Use longer look-ahead”, although I honestly didn’t hear much of a difference.

The “secret sauce” I referred to seems to be a combination of ‘Auto gain compensate’ and ‘Smooth saturation’. You’ll notice that Output gain (which would be the make-up gain I talked about earlier) is set to zero. But still, some amount of volume increase happens to the quieter parts of the track. By the way, this has nothing to do with the Volume Maximizer tab that I’m not showing here. In this preset, it’s all zeroed, and you can bypass it entirely with no effect whatsoever. Anyway, whatever the authors of this tool have done, I’m loving what I hear, so here are the settings I chose before hitting ok:

Capture 10

Here’s what the resulting waveform looked like:

Capture 11

You can see that it doesn’t look like a huge difference, and in fact it really isn’t. Again with the subtle adjustments philosophy. But every section of the track is slightly higher overall, and fatter. It sounds like it too. Here’s the output, if you want to give it a critical listen: ramp(normalized)(slight hammer).flac

Sorry, no sparkle yet. That’ll be next time…


Part 4 here:

Remastering Tips – Magic Man Part 2

Alright, so now we have a solid clip to work with. Let’s pull it up again and take a look:

Capture 3

At this point, what I always do when I’m either remastering something old, transferred from a non-digital source, or simply mastering something I made, is normalize it. This is so basic that many editors don’t consider it an effect at all, so you might find it under your Edit menu, or in the case of Sound Forge, under the Process menu. (Audacity puts it under the Effect menu, and Adobe Audition puts it under the Amplitude and Compression section of the Effects menu)

Normalization is a dirt-simple process: it scans the whole waveform, finds the highest peak, figures out a multiplier that will take that peak to the maximum value that can be represented by your waveform’s bit depth, and multiplies every sample by that amount. Too fancy sounding? In real terms, it cranks the master volume knob up to the maximum it can go without causing any distortion. No matter what you’re doing during a mastering process, you want to start from this point, because you won’t be mixing tracks together so you won’t need any headroom, and you want to push everything to the best workable volume before you start manipulating anything.
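In code, normalization really is this simple. A minimal Python sketch, assuming floating-point samples where 1.0 is digital full scale:

```python
import numpy as np

def normalize(samples, target=1.0):
    """Scale the waveform so its loudest peak hits the target level."""
    peak = np.max(np.abs(samples))
    if peak == 0:
        return samples  # pure silence: nothing to scale
    return samples * (target / peak)

audio = np.array([0.1, -0.25, 0.5, -0.4])
out = normalize(audio)
print(np.max(np.abs(out)))  # 1.0 — the 0.5 peak was boosted 2x, everything with it
```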

Now that I’ve said all this, I have to point out that in this example, starting with normalization does nothing for us. In general, this is true whenever you’re working with already produced digital sources. You will almost never come across a commercially available recording that isn’t already normalized. If you look again at the waveform, you’ll see that near the end of it, there are individual peaks that pretty much max out the waveform’s boundaries, so there’s no room to boost the volume on this without those peaks getting clipped.

This leads us directly to one of the main problems I have with this recording, and a brief interlude…

Brief Interlude – Mastering/Remastering Strategy

Whether you’re mastering new material, or remastering existing stuff, you have to have a plan. It isn’t enough to just say “I want it to sound better”. You have to be able to determine the issues and limitations of the recording you’re working with, and figure out what you want to achieve. If you really don’t know, it can help to compare it to an existing recording that sounds like what you want, and figure out the differences. Things like “it’s generally louder”, “it has punchier bass”, or “it’s brighter and more detailed”.

In this case, I knew exactly what my problems were with the original. The first was that the dynamic range was too great. The difference between the quietest and loudest portions of the song is so dramatic that you need to adjust the volume over the course of the song to hear it properly, or else you either get annoyed that you can’t make out the beginning, or you blow your eardrums out near the end. As an aside, I understand the artistic direction they were going for here. It’s a “hidden” bonus track, so they wanted it to sneak up on you, and they wanted to build it up dramatically to overwhelm the listener. Sure, that’s great when you first discover the track, but I’ve been listening to it on repeat for months, so the novelty of that wore off right away, and I just want to listen to it without fiddling, while still preserving some of that effect.

The second issue I have is that it lacks clarity. Mostly in the high end. There is some content there to work with, thankfully, but overall the mix in the first half of the track is so dull sounding that I’d like to make the vocals and metal of the drum kit clearer. That’s entirely subjective. I just wanted it that way, because I generally like my recordings to sound crisp, clear, and detailed.

Those are the two things I was looking for, and I’ll give a piece of universal advice here. The order in which you do things is actually important. As a rule of thumb, which I’ve learned for myself and heard confirmed by other engineers, you always want to do any volume or dynamics processing BEFORE you take on EQing issues. This is because odd things can happen when you’re changing the volume of a signal, especially using dynamics compression, that will change the character of the sound. If you EQ before that, you may find that you’ve overdone something and have to go back, or that you’ve somehow defeated what you were trying to accomplish and need to do more EQing after. The fewer steps in your process, the better, right?

So this is why the first thing I’m tackling is the volume issue. Here we go…

Volume Shaping With Clip Envelopes

I’ll admit it, I almost always throw a compressor on things right away. Yes, if you google “loudness war” you will find a ton of digital ink spilled about it. I agree with everything they say. But like anything, it is a case of “too much of a good thing” (and for non-artistic motivations) that has ruined so many recordings in the last few years. Compressors/limiters with gain compensation can achieve incredible things though, when used properly, and I’ll be talking about that next time.

The reason I’m not talking about it THIS time, is because I have to confess that I spent at least 45 minutes with this track throwing every dynamics compression tool at it I could think of, and nothing sounded good (yet). I’ll spare you the details, but I quickly realized that I had to do something much more basic and global to the track before I could get to that degree of volume tuning.

So the real problematic portion of this track is from around 1:48 to about 2:37. You can see what they’ve done just by looking at the waveform. The volume steadily increases, in an almost exactly linear fashion. It’s a fairly simple matter to “undo” this, and see if it helps.

For this, the easiest way is to pull the waveform into a multitrack view. Don’t ask me why most editors don’t let you do this easily in a single wave edit view, but it’s generally true. My preferred editor for this is Adobe Audition, but most editors work pretty much the same way. The feature we want is almost universally called “clip envelopes”. To see and/or edit them, you may have to find the settings in your pulldown menus that enable them. Audition puts all those under the View submenu, like this:

Capture 4

In Audition, you have some additional fancy features for working with envelopes (under Clip->Clip Envelopes submenu) that let you create smooth splines. For this, I really don’t need that. Straight lines work fine for me!

Once you’ve enabled these things, you’ll have the ability to graphically manipulate the volume (and pan, if you want to) of the track over time. Every tool I’ve ever used like this puts the volume line at the top, and the pan in the middle. In Audition, the green line at the top is the volume envelope we want to work with. To start, it simply looks like this:

Capture 5

The tiny white boxes at the upper left and right edges of the picture here are control points that you can click and drag to move this line around. As you’d expect, dragging down will lower the volume at the control point, and you’ll end up with a diagonal line connecting the two points. Fairly intuitive, once you start doing it. As an added bonus, when you hover over or drag a control point, a pop-up will show you the value, in dB, for that point. At +0 dB, you’re at full volume, and dragging down will bring you into the negatives.

Aside: dB, or decibels, are weird man! So is human hearing and our perception of “loudness”. Without getting too far into it: an increase or decrease of 3 dB is technically a doubling (or halving) of power, and 6 dB is a doubling (or halving) of amplitude, but it takes something like a 10 dB change before most listeners perceive the volume as having doubled (or halved).
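Those rules of thumb are easy to sanity-check in Python:

```python
def db_to_power_ratio(db):
    """dB is 10*log10 of a power ratio."""
    return 10 ** (db / 10)

def db_to_amplitude_ratio(db):
    """For amplitude (voltage, sample values) it's 20*log10, so 20 in reverse."""
    return 10 ** (db / 20)

print(db_to_power_ratio(3))      # ~2.0 — +3 dB doubles the power
print(db_to_amplitude_ratio(6))  # ~2.0 — +6 dB doubles the amplitude
print(db_to_amplitude_ratio(-5)) # ~0.56 — the -5 dB dip used on this envelope
```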

Where was I? Oh yeah, clip envelopes, control points…right. So you can create any control points you want just by clicking anywhere on the envelope line that doesn’t already have one. Drag ’em around, and make any kinds of volume adjustments you want. I ramped down the volume between 1:48 and 2:37, and then fairly quickly brought it back up to full immediately after the last explosive drum hit. This involved a fair bit of zooming, panning, scrolling around, and auditioning the changes to make sure it sounded right, so it can take longer than you’d think. I settled on a drop of 5dB at the lowest point, because it ended up sounding reasonable, and flattened that section of the track almost perfectly. Here’s a zoom-in of what my envelope looked like:
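To make the geometry concrete, here’s a minimal Python sketch of what that envelope does under the hood. The actual edit was done graphically in Audition; the sample rate, section boundaries, and the constant test signal below are all made-up stand-ins:

```python
import numpy as np

sr = 44100                    # assumed CD-quality sample rate
audio = np.ones(sr * 60)      # stand-in for one minute of full-scale audio
start, end = 10 * sr, 50 * sr # hypothetical section to flatten

# Gain envelope in dB: flat at 0 dB, ramping linearly down to -5 dB across
# the section, then snapping straight back to 0 dB (full volume) after it.
gain_db = np.zeros(len(audio))
gain_db[start:end] = np.linspace(0.0, -5.0, end - start)

# Apply the envelope in the linear domain.
audio *= 10 ** (gain_db / 20)
print(audio[end - 1])  # ~0.562 — a 1.0 sample pulled down by ~5 dB
print(audio[end])      # 1.0 — back to full volume right after the last hit
```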

Capture 6

When you’re working in multi-track, you typically have to “mix down” the results to get an output waveform. In Audition, it’s under file->Export->Audio Mix Down. This will save out the envelope changes to a new file. Pulling that file back into a regular wave editor, here’s what I got:

Capture 7

Pretty sweet. It still sounds good to my ears, and I bought myself nearly 6dB of headroom to work with. Hell, I’m going to normalize this right away! As I described above, that will maximize the volume…but now, it will effectively double the volume of the entire first section of the track, and bring the very loudest parts back up to roughly where they were before. That’s a simple matter of finding your normalize function, picking 100%, applying it, and saving out the result. Here’s what mine looked like:

Capture 8

That’s a lot of progress, IMHO. I’m not nearly done, but this is a good point to take a critical listen: ramp(normalized).flac

…and about critical listening, some more advice. These kinds of changes can be very subtle. In fact, that’s often what you’re going for with remastering – making changes that improve things in a way that most people can’t quite put their finger on. So listen carefully, and try not to look at what all the waveforms, envelopes, knobs, and sliders are saying you should hear. Don’t even look at the screen! And try not to think too much about what changes you’ve made. Also remember that music listening is an emotional experience. The track you started with must have inspired you in some way that made you want to work so hard on it (unless you’re getting paid), right? If what you’re hearing makes you smile (or want to bawl your eyes out, or jump around the room like a madman) as much, or hopefully more than what you started with, you’re going in the right direction.

In the case of this track, you might notice that even though the entire last section looks like it’s all at the same volume, it still seems like it’s building and getting more powerful. There are other tricks the band has pulled to make it sound that way, and messing with the volume doesn’t seem to have diminished this. At this point, I’m still loving the track as much as I ever did, and now I can hear it all just a bit better, so I call this a win so far.

Next time, more dynamics tweaking with compressors, and maybe some sparkle.


Part 3 here:

Remastering Tips – Magic Man hidden bonus track example


The hidden bonus track at the end of Magic Man’s 2014 release Before the Waves is a reworking of “South Dakota” which originally appeared on Real Life Color, released in 2010.

The new version is beautiful, but difficult to listen to because of its extreme dynamic range, and slightly muddy EQing. Because I’ve been listening to this track endlessly for months now, I decided to do something to make it sound as great as it deserves to sound, so I could listen to it myself without the mixing issues affecting my enjoyment of it.

If you want to hear it, albeit stripped of some of its glory via the anti-miracle of lossy mp3 compression, you can check it out at

Edit: Here’s a FLAC, so you can hear it properly, and decide if I did a good job or not – Man – South Dakota (umdesch4 remaster).flac

Since this actually turned out to be a lot of work, I decided it might be worthwhile to write a tutorial about what I did.

Part 1 – Prepping the track

This is admittedly dull stuff, but even here, there are some things to talk about that I feel worth sharing.

First and most obvious, you have to choose an audio editor in which you’re comfortable with the basic operations of scrolling around a waveform, zooming in and out, and standard cut/copy/paste operations.

I generally use two tools to do the bulk of my work: Sony’s Sound Forge Audio Studio, and Adobe Audition. In general, I find Sound Forge easier to use for extremely basic stuff like this, without too much interface clutter. Anyway, whatever you’re comfortable with should be good enough for this kind of thing.

So, here’s what I started with. You can see that there’s the “regular” part of the track (track 12, It All Starts Here), followed by a big chunk of silence, ending with the track I want to work with:

Capture 1

Rough cropping of this is fairly straightforward, but this particular example presents a bit of a challenge (not unlike many other things I’ve dealt with in the past). Where are the actual start and end points of the bonus track? It’s tricky because there’s a fade in, and the overall volume of the intro is so extremely low that it’s hard to tell exactly. You can argue that it is simply a matter of zooming in to where you think the beginning section of the track is, cranking your monitor volume, and listening for it. That may not be good enough. Especially when dealing with 24 bit samples, it may turn out that the noise floor of the whole output chain to your ears is higher than the actual signal. This wouldn’t matter much now, but if what you intend to do later involves some heavy boosting (normalization, dynamics processing, whatever), it may turn out that the final result magically rises above this noise floor, and now you get to hear exactly how you’ve missed the start of the track. Whoops!

In this case, it turned out that I could hear the actual start (more on that in a minute), but just in case, you can do a visual inspection too. The simple trick is to zoom in on the area where you think it is, and then zoom vertically (ie. amplitude-wise) as far as your editor will allow you to go. In Sound Forge it ends up looking like this:

Capture 2

In Sound Forge, at least, the +/- buttons on the far left side are what you spam to achieve this view. The +/- buttons on the far right are for zooming in and out in time. (Sorry if that’s obvious, but hey…)

So now you can clearly see that there’s definitely signal at around 5:38. Listening to this at a fairly high volume, it seems like the musical fade in starts much closer to 5:40. Indeed it does, but there’s something interesting going on during those first 2 seconds. The band decided to start the recording a little early, and if you seriously crank it, you can hear a 60-cycle hum coming from their guitar amp. It’s subtle, but it is (IMHO) a powerful subliminal cue that sets up the whole character of the kind of recording you’re about to hear.

What you want to do with this is crop your recording, keeping roughly a half-second before this point. The reason for this is that, especially with advanced dynamics processors, there is often an option for “look-ahead” processing, and you want to give it at least a few extra handfuls of sample-space to work with. Also, caution here is a good thing, as you can always crop more after you’ve done everything else.

For the end point, you can use the same techniques. The difference is that you want to leave even more room at the end. It may be that you feel the need to introduce some slight mastering reverb to the source, and you will need room for the reverb tail. Also (although not as important here), one component of aural enhancers/exciters involves time delay of specific frequency components, so if you’re going to get extreme with some of these exotic effects, you’ll need the room. If the original recording doesn’t have room for these tails, you may want to cursor to the end of the selection and use whatever “insert silence” utility your editor has to drop an extra second or two in there.

So yeah, that’s the easy part, but getting it wrong can lead to headaches later on, so take a few extra minutes doing it right. Measure twice, cut once, right? Once that’s done, save out the cropped results to a new file, and you’re good to start the real work.

Here’s a FLAC file with my results. It’s a handy reference to the original, so you can judge for yourself how well I did at the various stages in the next sections: Man – South Dakota (unprocessed).flac

Part 2 coming soon!  (here: )



Hi, I’m umdesch4, and I’ve been a hobbyist audio engineer for over 35 years, ever since the age of 4 when I figured out how to thread an open reel tape machine, and hit the red button. This was much to the chagrin of my parents, when they came into the room and discovered I had managed to erase a significant chunk of one of their good reels. Oh well…

Through the years, I’ve done all kinds of crazy experiments: making tape loop collages from dissected cassettes, chaining together tape decks to painstakingly achieve some tape delay effects, and doing very rudimentary overdub recording. Once I got my hands on a computer (the first serious one with musical possibilities being the Commodore 64), I incorporated that into my toolkit. I wrote SID chip compositions, later MODs on the Commodore Amiga, and began messing around with digital sampling. Also being a musician, I got deep into MIDI sequencing, at various points writing my own bits of software to accomplish various things.

I never let the real-life aspect of things slide either. Around that era, in the late 80s, I was also a DJ, and went the extra mile to make sure everything sounded good, and there were racks of lights with chase and strobe patterns linked up to the music I was playing.

Since those days, I’ve taken audio engineering courses, so I know my way around mic placement for a drum kit, studio multitrack recording, various automation systems, and I’ve helped bands record demos, done some ADR and foley work, even multi-track recorded a string quartet in an apartment with double mics on each instrument and an array of room mics too! I’ve done live sound for bands at local festivals, and done a lot of my own field recording. For every situation where the average person would be taking video or pictures with their mobile gear, you’ll probably find me with a “prosumer” digital audio recorder taking high resolution surround sound audio field recordings.

Oh, did I mention, I’ve also done a lot of surround sound recording, composing, and mixing? I started by developing my own “poor man’s surround” where I reverse-engineered Dolby Pro-Logic Surround, and injected my own filtered sounds into a stereo mix to produce the same effect. Then I figured out how to do true surround sound, and master it to DVD-A, which is still my preferred format.

Anyway, it seems like everything that I learn in my life…physics, math, electronics, programming…I try to relate back to how it can be applied to audio, so that’s very much a part of the kind of person I am.

Of course, I spend the bulk of my time these days doing various little audio tasks for people, and myself, entirely in software on a PC. That’s mostly what the next few posts in this blog are going to be about.

I hope some of the things I’ll be talking about are informative and interesting.