Monthly Archives: December 2016
When it comes to your drum sound, sometimes the smallest details can make a big difference, especially when you consider that there are usually multiple mics involved. Changing one thing can sometimes make a difference on its own, but often it's many small adjustments that have a cumulative effect on the overall sound. Here are 7 tips culled from The Recording Engineer's Handbook 3rd edition that, individually or together, can improve your recorded drum sound.
1. Microphones aimed at the center of the drum will provide the most attack. For more body or ring, aim them more toward the rim.
2. The best way to hear exactly what a drum sounds like during a mic check is to have the drummer hit it about once per second, leaving enough time between hits to hear how long the ring lasts.
3. Try to keep any mics underneath the drums at a 90-degree angle to the mic on top to keep the acoustic phase shift to a minimum.
4. Most mics placed underneath the drums will be out of phase with the top mics. Switch the polarity on your preamp, console or DAW and choose the position that has the most bottom end.
5. Try to keep all mics as parallel as possible to keep the acoustic phase shift to a minimum.
6. The main thing about mic placement on the drums is to place the mics in such a way that the drummer never has to be concerned about hitting them.
7. The ambient sound of the room is a big part of the drum sound. Don’t overlook using room mics where possible.
The above tips can generally apply to just about any drum miking setup, but remember to listen carefully after each adjustment to note the difference, if any, that occurs, then make sure it fits with the track.
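Tip 4's polarity flip can be illustrated numerically. The sketch below (Python/NumPy, using a pure 200 Hz sine as a stand-in for the drum's fundamental — a simplifying assumption, not a model of a real kit) shows how an inverted bottom mic cancels the fundamental until its polarity is switched:

```python
import numpy as np

# Hypothetical illustration of tip 4: the bottom mic sees the drum head
# move the opposite way from the top mic, so its signal arrives inverted.
sr = 48000
t = np.arange(sr) / sr
fundamental = 200.0  # assumed drum fundamental, in Hz

top = np.sin(2 * np.pi * fundamental * t)
bottom = -top  # bottom mic captures the same tone with inverted polarity

def rms(x):
    return np.sqrt(np.mean(x ** 2))

as_recorded = rms(top + bottom)        # fundamental cancels completely
polarity_flipped = rms(top + -bottom)  # the flip restores the bottom end
print(as_recorded < polarity_flipped)  # True
```

In practice the cancellation is never this total, since the two mics hear different parts of the drum, which is why the tip says to try both positions and keep whichever has the most bottom end.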
You can read more from The Recording Engineer’s Handbook and my other books on the excerpt section of bobbyowsinski.com.
There are stars, then there are superstars, then there’s Justin Bieber, who seems to shine brighter than the rest. On this week’s podcast, Justin’s studio partner Josh Gudwin will discuss what it’s like recording, mixing and even producing one of the biggest names in the entertainment universe.
On the intro we’ll take a look at how Instagram might be the best social platform to find music fans, if you have what they want. I’ll also discuss the latest in the DAW world, as so many of us look for an alternative to Pro Tools.
Simon Phillips is one of the best drummers in the world and has the resume to prove it. From Jeff Beck to The Who to Judas Priest to his time as a member of Toto and a prolific session drummer, Simon is respected not only for his playing, but for his drum and recording acumen as well.
Here’s a video from Simon’s studio in Los Angeles where he discusses his thoughts on drum miking and equalization. I especially liked the explanation of how he treated and miked his bass drums, and that comes at around 17 minutes into the video.
It should be noted that Simon has a fairly large kit with lots of toms and multiple snare drums, but the information he shares is pretty basic and works with a kit of any size.
[Photographer: Mark Regemann, Germany (German user Jorainbo2001)]
We’re starting to see more and more next-generation DAW plugins where outside-the-box thinking results in an easier user interface, which ultimately leads to new or more useful sounds. Nowhere is that more evident than with the new FabFilter Pro-R reverb plugin, which takes many of the more difficult reverb concepts and controls and presents them in a new, easier to understand way.
FabFilter Pro-R has the same familiar interface as its other wonderful plugins, showing a real-time waveform display, but this time with a decay time and EQ curve superimposed over the top. While there are many unique features in the plugin, one of the most striking and useful is the previously mentioned decay curve, which the user can simply grab and shape as necessary. This lets you set different decay times for different frequencies, which, while not totally unique, is presented here in a way that makes it far easier to achieve the final result you’re looking for.
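To make the idea of frequency-dependent decay concrete, here's a crude Python sketch of the concept the decay curve exposes — not FabFilter's actual algorithm; the 500 Hz crossover, one-pole filter and RT60 values are all arbitrary assumptions:

```python
import numpy as np

# Build a noise-based reverb tail, split it into a low and a high band,
# then give each band its own RT60 (the time it takes to fall 60 dB).
sr = 44100
t = np.arange(int(sr * 2.0)) / sr  # 2-second tail
noise = np.random.default_rng(0).standard_normal(t.size)

def one_pole_lowpass(x, cutoff_hz):
    # Simple recursive smoother used here as a crude band split.
    a = np.exp(-2 * np.pi * cutoff_hz / sr)
    y = np.empty_like(x)
    acc = 0.0
    for i, s in enumerate(x):
        acc = (1 - a) * s + a * acc
        y[i] = acc
    return y

low = one_pole_lowpass(noise, 500.0)  # content below roughly 500 Hz
high = noise - low                    # everything above

def with_rt60(band, rt60):
    return band * 10 ** (-3.0 * t / rt60)  # envelope hits -60 dB at t = rt60

tail = with_rt60(low, 1.8) + with_rt60(high, 0.6)  # long lows, short highs
```

Late in the tail only the long-decaying low band remains audible, which is exactly the kind of per-frequency shaping that grabbing and dragging the decay curve gives you.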
Another unique feature is the continuously variable Space control that lets the user fade between dozens of different room models and automatically chooses a matching decay time. Once again this is possible with most other reverb plugins, but the fact that as you dial up the Space control it automatically switches from algorithm to algorithm is not only pretty cool, but one of those “Why didn’t anyone think of that before?” features.
There’s also a Distance control that adjusts how close the source sits in relation to the reverb, so you can bring things closer or push them further away as needed. Of course, on most reverbs you can do this by adjusting the early reflections, but this is so much easier. A Character control also changes the sound from a clean, transparent decay to one that’s over-modulated for a chorus-like effect. This is one clever plug!
The FabFilter Pro-R reverb plugin costs $199 (EUR 169 or GBP 149), and supports both Windows and Mac OS X in VST and VST 3, Audio Units, AAX, RTAS, and AudioSuite plug-in formats. Check out the website for more details, or this excellent video that pretty much explains everything.
I usually post isolated tracks on Fridays and for the most part, the majority of them are classic songs that are somewhat old. The reason for that is that those tracks are more available, but every now and then I find something that’s current, like today’s One Direction isolated vocal track of their hit “You And I,” which was co-written and produced by Julian Bunetta and John Ryan. Here’s what to listen for.
1. Unlike most songs today, which are somewhat dry, the vocals on “You And I” are heavily effected. There’s a basic long, very lush, delayed reverb that’s augmented by a 1/2 note (and sometimes 1/4 note) delay that trails its repeats to the left.
2. There’s a lot of compression on these vocals, and on the isolated track it sometimes really stands out. That said, you’d never hear it in the full mix, and that’s what counts in the end.
3. Listen to the beginning of the choruses at 1:07 and 1:54 on the left (especially at 1:54 and a little beyond). There’s some throat clearing that was left in. This was something you heard a lot back in the old tape days, but hardly ever any more in the world of DAWs. There’s also a lot of lip noise during the second verse from 2:32 on. I’m surprised this wasn’t cleaned up. Likewise, there are some glitches around 4:46 and again around 6:30. I can’t tell if these are just digital artifacts from the upload or if they’re on the recording itself. There’s even a bit of noise from the studio talkback left in.
4. There are some very abrupt cut-offs on some of these vocal tracks, which makes me think that the editing wasn’t as good as it could have been. Usually you put a slight fade at the end of an edit to eliminate that.
5. At 6 minutes and 54 seconds, this is a really long song at a time when shorter tracks prevail. However, like other big hitmakers of the past, One Direction can break the current rules and even establish some new trends thanks to its huge fan base.
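The edit fade mentioned in point 4 is easy to sketch. Assuming Python/NumPy and an arbitrary 5 ms fade length (real editors vary this by ear), a short ramp to silence at the end of a clip removes the click of an abrupt cut:

```python
import numpy as np

sr = 44100

def fade_out(clip, fade_ms=5.0):
    # Ramp the last few milliseconds of the clip linearly down to silence
    # so the edit doesn't end in an audible click.
    n = int(sr * fade_ms / 1000)
    faded = clip.copy()
    faded[-n:] *= np.linspace(1.0, 0.0, n)
    return faded

clip = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)  # 1-second test tone
smooth = fade_out(clip)  # same clip, but ending gently at zero
```

A crossfade between two clips works the same way: fade one out while fading the other in over the same short window.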
If you’ve studied audio technology at all, then you know that loudspeakers/headphones and microphones operate on essentially the same principle, just in reverse of one another. Where the diaphragm of a microphone responds to moving air molecules to turn sound into an electronic signal, the loudspeaker turns an electronic signal into moving air molecules via the motion of its diaphragm. We’ve used loudspeakers as mics in the past, most recently with the popular subkick on kick and bass, but now Israeli researchers have found a way to turn headphones and earbuds into secret microphones that record the surrounding conversations.
Researchers at Israel’s Ben Gurion University have created a proof-of-concept exploit called “Speake(a)r,” which found that headphones were nearly as good as a microphone at picking up audio in a room. The hack works by retasking the output of the RealTek audio codec chip found in many desktop computers into an input. Apparently this is fairly easy to do, but hackers just haven’t discovered it yet. The worst part is that it doesn’t even require a new driver, since the embedded chip has no security built into it and is easily reprogrammed.
Keep in mind that this is just a proof of concept, so there’s no need to worry about your conversations or your audio tracks being compromised yet, but it does bring up a big question about the security of the everyday computer peripherals we all use. Probably the last thing we ever think about is the cyber-security of our audio gear, but perhaps it’s time to be concerned.
What’s worse is the fact that audio professionals usually use higher-quality headphones than the average earbud listener, which means the capture quality is better as well, although I’m not exactly sure the frequency response would be that good with closed-back headphones fitted tightly to the head. Then again, it’s pretty rare that matters of national security are discussed in a recording studio (unless you’re with Jeff “Skunk” Baxter). Still, it’s time to be aware that some of our everyday studio gear can be turned into secret microphones.