Category Archives for "Engineering"
I’m always surprised when I change something as simple as a cable and the sound lights up. That’s because there’s much more to cables than you’d ever think (check out my interview with Larry Smith from Wireworld Cable Technology for a great explanation of the differences between cables). The one thing that most guitar players aren’t used to, though, is being able to directly change the tone of their instrument via the cable, and that’s just what the Undertone Audio Vari-Cap cable does.
The UTA Vari-Cap cable allows the guitar player to vary the amount of capacitance of the cable to produce a much wider variation in sound than you would ever expect. Basically, you’re tuning the inductance of the instrument’s pickups against the capacitance of the cable to form a resonant filter, with often surprising results. The cable has a small box at one end with a 15 position switch that adjusts the capacitance from 150 pF to 1,650 pF in 100 pF steps.
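Since the effect comes from the pickup’s inductance resonating with the cable’s capacitance, you can ballpark where the resonant peak lands with the standard LC formula. This is just a sketch: the 3 H pickup inductance is my own illustrative humbucker-ballpark assumption, not a UTA spec, and the function name is made up.

```python
import math

def resonant_peak_hz(pickup_henries, cable_farads):
    """Resonant frequency of the pickup inductance / cable capacitance
    circuit: f = 1 / (2 * pi * sqrt(L * C))."""
    return 1.0 / (2 * math.pi * math.sqrt(pickup_henries * cable_farads))

# Hypothetical 3 H pickup, swept across the Vari-Cap's stated range
for c_pf in (150, 650, 1650):
    f = resonant_peak_hz(3.0, c_pf * 1e-12)
    print(f"{c_pf} pF -> peak near {f:,.0f} Hz")
```

For this assumed pickup, sweeping from 150 pF to 1,650 pF pulls the peak from roughly 7.5 kHz down to about 2.3 kHz, which is why the extremes of the switch sound so different.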
For many guitarists and engineers, it’s easier to just adjust the tone control on the instrument or amp, but the Vari-Cap does give you a different way to adjust the EQ to produce results that you won’t necessarily be able to duplicate with EQ alone. Although the changes are subtle from selection to selection, it’s pretty dramatic between position 1 and 15.
There’s not a lot of info about the Vari-Cap cable itself on the UTA website. It does say that the connectors are made by Neutrik, but it doesn’t say how long the cable is (it looks like around 10 feet). The price is $99.95, which seems pretty expensive for a guitar cable, but there are many high-end cables that cost twice as much and more, yet won’t give you the same amount of control.
Check out the UTA website for more information and some good explanatory videos, although the one below tells you pretty much all you need to know.
Let’s face it, recording budgets are tight these days and we can’t always send our final mixes to a true mastering engineer. With so many of the tools that mastering engineers use now available to every mixer, it’s possible to do a pretty good self-mastering job. If that’s your situation, it’s best to follow these 7 steps, excerpted from the latest edition of my Music Producer’s Handbook.
1. Don’t master on the same speakers you mix on. If you do, you won’t be able to make up for the deficiencies of the speaker.
2. Listen to other songs that you like before you even touch an EQ parameter. The more songs you listen to, the better. You need a reference point to compare your work with, and listening to other songs will prevent you from over-EQ’ing. EQ’ing is usually the stage when engineers who are mastering their own mixes get in trouble. There’s a tendency to overcompensate with the EQ, adding huge amounts (usually of bottom end) that wreck the frequency balance of the song completely.
3. A little goes a long way. If you feel that you need to add more than 2 or 3 dB, you’re better off remixing! That’s what the pros do. It’s not uncommon at all for a pro mastering engineer to call up a mixer, tell him where he’s off, and ask him to do it again.
4. Be careful not to over-compress or over-limit your song. This can lead to hypercompression. Instead of making a song louder, hypercompression sucks all the dynamics out of it, making it lifeless and fatiguing to listen to.
5. Constantly compare your mastering job to other songs that you like the sound of. Doing this is one of the best ways to help you hear whether and how you’re getting off track.
6. Concentrate on making all the songs sound the same in relative level and tone. This is one of the key operations in mastering a collection of songs like an album. The idea is to get them to all sound as though they’re at the same volume. It’s pretty common for mixes to sound different from song to song even if they’re done by the same engineer with the same gear. It’s your job to make the listener think that the songs were all done on the same day in the same way. They’ve got to sound as close to each other in volume as you can get them, or at least close enough so as not to stand out.
7. Finish the songs. Edit out count-offs and glitches, fix fades, and create spreads for CDs and vinyl records.
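For step 6, one crude way to check relative level between songs is to compare their RMS values and compute a matching gain. This is only a sketch under my own naming; real mastering work uses perceptual loudness measures like LUFS, which weigh frequencies the way the ear does.

```python
import math

def rms_dbfs(samples):
    """RMS level of a block of float samples (full scale = 1.0), in dBFS."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

def gain_to_match(song_db, target_db):
    """Linear gain factor that moves a song from song_db to target_db."""
    return 10 ** ((target_db - song_db) / 20)

# Example: a song measuring -16 dBFS RMS, target of -12 dBFS
gain = gain_to_match(-16.0, -12.0)  # about a 1.58x boost
```

Numbers like these are only a starting point; your ears (and the reference songs from step 2) make the final call.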
If you see a self-mastering job on the horizon, you’ll find that your results will be far closer to that of a mastering engineer if you follow these tips.
You can read more from The Music Producer’s Handbook and my other books on the excerpt section of bobbyowsinski.com.
There are some studios that have that magic sound for tracking drums, and the famed Sound City in Van Nuys, California was one. Every great sounding tracking room that I’ve ever been in has been a product of luck rather than design, and Sound City (which has since closed) was no exception. The combination of a smooth reverb decay, a tailored frequency response, and finding just the right spot in the room makes all the difference.
Here’s an excerpt from Dave Grohl’s excellent Sound City film that talks specifically about the drum sound of the room. It features luminaries like Lindsey Buckingham, Mick Fleetwood, Rick Rubin, Jim Keltner, and Keith Ohlsson, among others.
We can talk about microphone placement, mics and preamps all day, but in the end, it still all starts with the basics – a great drummer, good sounding drums, and a great room.
If you haven’t seen the Sound City movie, you’re really missing out. It’s one of the best music films going.
One of the best ways to make all the elements fit in a mix is by frequency juggling. That’s where you make sure that no two instruments are boosted at the same frequency so they never fight for attention in the mix. Here are 3 steps from the 3rd edition of my Mixing Engineer’s Handbook to make frequency juggling work for you, as well as a couple of excellent quotes from Jon Gass and Ed Seay, some of the very best mixers ever.
Most veteran engineers know that soloing an instrument and equalizing it without hearing the other instruments will probably start making you chase your tail as you make each instrument bigger and brighter sounding. When that happens, you’ll find that in no time the instrument you’re EQing begins to conflict with other instruments or vocals frequency-wise. That’s why it’s important to listen to other instruments while you’re EQing. By juggling frequencies, they’ll fit together better so that each instrument has its own predominant frequency range. Here’s how it’s done.
1. Start with the rhythm section (bass and drums). The bass should be clear and distinct when played against the drums, especially the kick and snare.
You should be able to hear each instrument distinctly. If not, do the following:
2. Add the next most predominant element, usually the vocal, and proceed as above.
3. Add the rest of the elements into the mix one by one. As you add each instrument, check it against the previous elements as above.
The idea is to hear each instrument clearly, and the best way for that to happen is for each instrument to live in its own frequency band.
TIP: You most likely will have to EQ in a circle where you start with one instrument, tweak another that’s clashing with it, return to the original one to tweak it, and then go back again over and over until you achieve the desired separation.
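Frequency juggling ultimately comes down to complementary EQ moves: where one instrument gets a boost, the one clashing with it gets a cut. Here’s a sketch of a standard peaking filter (the well-known RBJ audio-EQ-cookbook form) that could implement that; the function names and settings are my own illustrations, not from the book.

```python
import cmath, math

def peaking_eq(fs, f0, gain_db, q):
    """Biquad peaking-EQ coefficients (RBJ audio-EQ-cookbook form)."""
    a = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * a, -2 * math.cos(w0), 1 - alpha * a]
    den = [1 + alpha / a, -2 * math.cos(w0), 1 - alpha / a]
    a0 = den[0]
    return [x / a0 for x in b], [x / a0 for x in den]

def gain_at(b, a, fs, f):
    """Magnitude response of the biquad at frequency f."""
    z = cmath.exp(1j * 2 * math.pi * f / fs)
    num = b[0] + b[1] / z + b[2] / z ** 2
    den = a[0] + a[1] / z + a[2] / z ** 2
    return abs(num / den)

# Juggle: give the vocal +3 dB at 3 kHz, and carve -3 dB out of the
# guitar at the same spot so the two stop fighting for that range
vocal_b, vocal_a = peaking_eq(44100, 3000, +3.0, 1.4)
guitar_b, guitar_a = peaking_eq(44100, 3000, -3.0, 1.4)
```

The complementary boost/cut pair is exactly the kind of move Ed Seay describes below: nobody gets 3k by default, somebody is assigned it.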
Jon Gass: I really start searching out the frequencies that are clashing or rubbing against each other, but I really try to keep the whole picture in there most of the time as opposed to really isolating things too much. If there are two or three instruments that are clashing, that’s probably where I get more into the solo if I need to kind of hear the whole natural sound of the instrument.
Ed Seay: Frequency juggling is important. You don’t EQ everything in the same place. You don’t EQ 3k on the vocal and the guitar and the bass and the synth and the piano, because then you have such a buildup there that you have a frequency war going on. Sometimes you can say, “Well, the piano doesn’t need 3k, so let’s go lower, or let’s go higher,” or “This vocal will pop through if we shine the light not in his nose, but maybe towards his forehead.” In so doing, you can make things audible and everybody can get some camera time.
You can read more from The Mixing Engineer’s Handbook and my other books on the excerpt section of bobbyowsinski.com.
There have been a lot of hits from the past that you continue to hear on the radio, but a perennial favorite is “(Don’t Fear) The Reaper” from Blue Oyster Cult. The song comes from the band’s 1976 album Agents of Fortune, where it hit #12 on the Billboard charts, and it’s been around ever since. You’ve probably heard the song hundreds of times, but you’re probably not aware of some of the very interesting things going on inside the mix that you don’t readily hear. Pull up some headphones and take a listen to the following.
1. The clean guitar playing the lead riff, which comes in the second time through the riff (which you don’t hear here).
2. This song is famous for its cowbell (thank you Saturday Night Live), but the percussion instrument that really stands out is the guiro.
3. The organ shadows the vocals. Here you can hear the organ leaning to the left, and the low harmony leaning to the right.
4. There’s what sounds like a clavinet playing whole notes in the B section.
5. In the bridge you can hear the doubles of the clean arpeggiated guitar and distorted guitar riff.
6. At the end of the bridge there’s a synth that doubles the feedback guitar (which you can’t hear here).
7. On the outro there’s a new keyboard shadowing the main chord pattern.
8. If you listen to the end, you’ll hear the ending that didn’t make the final mix on the record.
All in all, it’s a very cool set of isolated backing tracks that are normally buried in the mix, and it will have you listening to the track differently the next time you hear it.
When signal processing is timed to the pulse of the track, everything in the mix sounds a lot smoother. This applies to compressors, delays, modulators, and especially reverb.
One of the questions I get a lot is, “How do you time your reverb to the track?”
Like other aspects of mixing, the use of reverb is frequently either overlooked or misunderstood. Reverb is added to a track to create width and depth, but also to dress up an otherwise boring sound. The real secret is how much to use and how to adjust its various parameters.
Before we get into adding and adjusting the reverb in your mix, let’s first look at some of the reasons to add it.
When you get right down to it, there are four reasons to add reverb.
1. To make the recorded track sound like it’s in a specific acoustic environment. Many times a track is recorded in an acoustic space that doesn’t fit the song or the final vision of the mix. You may record in a small dead room but want it to sound like it was in a large studio, a small reflective drum room, or a live and reflective church. Reverb will take you to each of those environments and many more.
2. To add some personality and excitement to a recorded sound. Picture reverb as makeup on a model. She may look rather plain or even only mildly attractive until the makeup makes her gorgeous by covering her blemishes, highlighting her eyes, and accentuating her lips and cheekbones. Reverb does the same thing with your tracks in many cases. It can make the blemishes less noticeable, change the texture of the sound itself, and highlight it in a new way.
3. To make a track sound bigger or wider than it really is. Anything that’s recorded in stereo automatically sounds bigger and wider than something recorded in mono, because the natural ambience of the recording environment is captured. In order to keep the track count and data storage requirements down, most instrument or vocal recordings are done in mono. As a result, the space has to be added artificially by reverb. Usually, reverb that has a short decay time (less than one second) will make a track sound bigger.
4. To move a track further back in the mix. While panning takes you from left to right in the stereo spectrum, reverb will take you from front to rear. An easy way to understand how this works is to picture a band on stage. If you want the singer to sound like he’s in front of the drum kit, you would add some reverb to the kit. If you wanted the horn section to sound like it was placed behind the kit, you’d add more reverb to the horns. If you wanted the singer to sound like he’s in between the drums and the horns, you’d leave the drums dry and add a touch of reverb to the vocal, but less than the horns.
If we were going to get more sophisticated with this kind of layering, we’d use different reverbs for each of the instruments and tailor the parameters to best fit the sound we’re going after.
Timing A Reverb To The Track
One of the secrets of hit-making engineers is that they time the reverb to the track. That means timing both the pre-delay and the decay so it breathes with the pulse of the track. Here’s how it’s done.
Exercise Pod – Timing Reverb Decay
Before you begin any of the exercises in this chapter, be sure to have two reverbs with the sends and returns already set up. Set one reverb to “Room” (we’ll call it Reverb #1) and the other to “Hall” (Reverb #2). Refer to your DAW or console manual on how to do this.
E8.1: Solo the snare drum and the reverb returns (or put them into Solo Safe – refer to your DAW or console manual on how to do this). Be sure that the Dry/Wet control is set to 100% wet, and the return levels are set at about -10.
A) Raise the level of the send to the Room reverb until the reverb can be clearly heard. Does the snare sound distant? Does it sound bigger than before?
B) Adjust the Decay parameter until the reverb dies out before the next snare hit of the song. Does the snare sound clearer?
C) Mute the send to the Room reverb and raise the level to the Hall reverb. Does the snare sound distant? Does it sound bigger than before? Does it sound bigger than the Room reverb?
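The decay adjustment in the exercise above can also be ballparked from the tempo before you fine-tune by ear. One common rule of thumb (my sketch, not a formula from the excerpt) is to set the pre-delay to a short note value like a 64th note and let pre-delay plus decay fill a musical length such as two beats:

```python
def timed_reverb_ms(bpm, decay_beats=2.0, predelay_beats=1 / 16):
    """Pre-delay and decay times (ms) derived from the song's tempo.
    A 64th note is 1/16 of a beat in 4/4, hence the default."""
    beat_ms = 60000.0 / bpm            # length of one beat in ms
    predelay = beat_ms * predelay_beats
    decay = beat_ms * decay_beats - predelay  # decay fills the rest
    return predelay, decay

# At 120 BPM: about 31 ms of pre-delay and 969 ms of decay
pre, dec = timed_reverb_ms(120)
```

Values like these just get the reverb breathing with the pulse; whether it actually dies out before the next snare hit is still a judgment call for your ears.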
You can read more from The Audio Mixing Bootcamp and my other books on the excerpt section of bobbyowsinski.com.
Whenever an engineer has trouble dialing in the EQ on a track, chances are it’s because of one or more of the 6 often-overlooked trouble frequencies.
These are areas where too much or too little can cause your track to either stick out like a sore thumb, or disappear into the mix completely. Let’s take a look.
Sometimes just tweaking a few of these 6 frequency ranges can take a mix from dull to exciting, or muddy to clear, so keep these “trouble frequencies” in mind during your next mix.
You can read more from The Mixing Engineer’s Handbook and my other books on the excerpt section of bobbyowsinski.com.
Kevin Killen is a great engineer with a host of big time credits (U2, Elvis Costello and Peter Gabriel, for instance) and he’s been much in demand as a mixer for a long time. When I wrote the first edition of The Mixing Engineer’s Handbook, Kevin was one of the mixers I most wanted to interview, and that interview is one of the best in the book.
Here’s a great video where Kevin explains how he mixes in the box, and how he applies his processing mostly to subgroups rather than individual tracks, as well as the way he adds effects.
Usually I post an isolated track on Friday, but this is something that’s pretty close. In this video, engineer John Cuniberti uses a single stereo mic, in this case an AEA R88 stereo ribbon, to record the band San Geronimo – no overdubs, no additional mics.
For those of you who don’t know, John was the guy who came up with the idea of reamping, a technique and box that’s used every day in studios around the world.
The recording just goes to show how good it can actually sound when the band members are placed at the right distances, know how to control their volume, and all deliver a good performance at the same time. Of course, having a great acoustic environment really helps as well (25th Street Recording in Oakland).