Monthly Archives: November 2016
There are some guitars that are forever linked to certain musicians. Eric Clapton’s “Blackie” and “Brownie” Strats, Brian May’s home-built one-off, Neil Young’s “Old Black” Les Paul, and B.B. King’s “Lucille” ES-355 are just a few that come to mind. But there is another that fits nicely into this category and deserves equal attention because of its backstory, and that’s Bruce Springsteen’s one-of-a-kind Telecaster.
Like “Blackie,” Bruce’s Telecaster is a hybrid of parts collected from at least two other guitars. It’s a 1950s Telecaster body with what looks to be a ’57 Esquire neck, originally purchased from Phil Petillo’s Neptune, NJ guitar shop for $185. That’s only part of the story, though.
The Telecaster body was originally jury-rigged with four pickups wired to extra output jacks so that each could plug into a separate channel of a recording console. The thinking behind this wasn’t so much about the sound, but so that the session player who originally owned it could collect four times union scale by recording four slightly different versions of the same guitar part. As a result of the modification, much of the body underneath the pickguard was routed out for the extra electronics.
Petillo removed the extra pickups and returned the guitar to its original Telecaster configuration before he sold it to Springsteen, but a big side effect of the routing was that the Tele was now extremely light, giving it a sound and feel unlike any other (see the picture on the left).
Bruce wasn’t one to sit still with one version of the instrument, however, and over the years had it significantly modified, all personally done by Petillo. He added his patented triangular Precision Frets, a six-saddle titanium bridge, and custom hot-wound, waterproofed pickups and electronics so they could better survive a sweat-soaked four-hour show.
Bruce played the guitar in virtually every live show until around 2005, when the wear and tear of the road finally took its toll and the guitar was retired. He now plays clones of his original Tele on tour, but still uses his favorite when he records.
Now for the really cool part. It’s been estimated that the guitar is worth anywhere between $1 million and $5 million, depending upon the collector who managed to get his hands on it. For now, that’s not going to happen, since the Tele has been Bruce Springsteen’s partner for more than 40 years, and that partnership shows no sign of ending.
If you’ve ever had to deal with a noisy track, you know how time-consuming clean-up can be. Yes, you can gate it, but that can sound unnatural. There are also many excellent noise-reduction plugins available, but they don’t always do the job without adding some unpleasant artifacts. You find yourself playing with different combinations of tools, hoping you can eventually dial in something acceptable, which can take a lot longer than you’d expect. That’s why the new Audionamix Speech Volume Control (SVC) is so cool: it takes an entirely different approach to noise reduction.
Most noise control plugins try to eliminate or concentrate on the background noise, but SVC works by increasing the level of the vocal or speech so that the noise is no longer a problem. Keep in mind that this is a tool intended for post engineers who work with dialog, but the plugin looks like it can have many uses in music as well (I can’t wait to try it on noisy distorted electric guitars, for example).
SVC first has to acquire the audio in order to work with it. This is done using the Acquire button, followed by the Separate button, which lets the algorithm do its thing (the track can be mono or stereo, by the way). At that point there are a variety of controls, but the main ones are the Speech volume slider and the Background volume slider, which allow the user to find a combination of noise to track that sounds natural. There’s also a Pitch slider, which works in conjunction with a number of presets for different types of voices and allows you to fine-tune the algorithm to the voice.
There are also High Quality, HF Boost and Reverb options (the Reverb selector maintains the wet/dry balance as you adjust the Speech volume control), as well as an Automatic Voice Activity Detection that helps to precisely target the speech content within your audio file.
The Audionamix Speech Volume Control plugin costs $249 to buy, or $19.95 for a two-week rental. It’s available in AAX, VST and AU formats for both Mac and Windows. Check out the video below, or the website for more info.
If you’re a hard-core Beatles fan then you’ll love this Beatles Bloopers video. If you’re not, you’ll still enjoy some of the humor involved when various members of the band screw things up on songs that you’ve heard hundreds of times. Here are a few other things to listen for though.
1. John Lennon’s voice is truly impressive. I don’t think I ever gave him credit for the range that he had back in the day.
2. The Abbey Road reverb is truly lovely. You’ll hear gobs of it here, and it played a large part in the sound of the band (and others of that era who recorded there too).
3. It’s cool to hear inside a few of the songs and how some of the parts were played, even if only for a second. The interplay between the guitars is a little more obvious in places than in the final mixes.

[Photo: Beeld en Geluidwiki – Gallery: The Beatles]
You might have noticed that in the last few years, the levels of television shows, commercials, and channels are pretty even, with no big jumps in volume. That’s because viewers complained for years about the dramatic increase in level whenever a commercial aired, since it was so compressed compared to the program being watched. Congress set out to do something about this, and in 2012 broadcasters adopted a method to normalize those volume jumps that the European Broadcast Union had put in place the year before, based on a measurement called Loudness Units relative to Full Scale, or LUFS.
LUFS (essentially the same measurement as LKFS, the term used in the US broadcast standard) is a way to measure the perceived loudness of a program by analyzing both the transient peaks and the steady-state program level over time using a specially created algorithm. It’s different from a normal meter in that it doesn’t represent signal level; it measures how loud we perceive an audio program to be. For a broadcaster this is actually pretty serious, since a station that violates the mandated level of -23 LUFS could possibly lose its broadcast license.
Even though LUFS was intended primarily for broadcast audio delivery, it has taken on new meaning in music production as well, as you’ll see in the video below. Thanks to the fact that streaming services like Spotify and Apple Music now normalize songs so the level is the same from tune to tune, there’s no real benefit to compressing a song to within an inch of its life any more. In fact, less volume and more dynamic range are actually your friends.
Using a LUFS meter allows you to optimize your music mixes for a variety of platforms to be sure that they’re always in the sweet spot for dynamic range.
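To give a rough feel for what a LUFS meter is doing under the hood, here’s a deliberately simplified sketch. A real meter per ITU-R BS.1770 also applies K-weighting filters and level gating, both omitted here, so this is an illustration of the mean-square-over-time idea, not a compliant implementation.

```python
import math

def integrated_loudness(samples):
    """Rough integrated-loudness estimate in dB for a mono float signal
    (samples in the range -1.0 to 1.0). Omits the K-weighting and gating
    stages that a real BS.1770 meter applies before this calculation."""
    mean_square = sum(s * s for s in samples) / len(samples)
    # The -0.691 dB offset comes from the BS.1770 loudness formula
    return -0.691 + 10 * math.log10(mean_square)

# A full-scale square wave has a mean square of 1.0, so it reads about -0.7;
# the same signal at one-tenth the amplitude reads 20 dB lower.
loud = [1.0 if i % 2 else -1.0 for i in range(48000)]
quiet = [0.1 * s for s in loud]
print(round(integrated_loudness(loud), 1))   # → -0.7
print(round(integrated_loudness(quiet), 1))  # → -20.7
```

The key point the sketch makes is that loudness is an average over the whole program, not a peak reading, which is why heavy limiting raises LUFS without raising peaks.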
Check out this video from MasteringTheMix that uses its LEVELS plugin to illustrate how this all works. Keep in mind that there are other LUFS meters on the market as well, from TC Electronic, Waves, and other developers.
Direct boxes are something that we use every day in recording, yet take for granted because of their simplicity. Here’s an excerpt from my Recording Engineer’s Handbook that looks at the ins and outs of this useful recording tool.
“Direct Injection (DI or “going direct”) of a signal means that a microphone is bypassed, and the instrument (always electric or electrified) is plugged directly into the console or recording device. This was originally done to cut down on the number of mics (and therefore the leakage) used in a tracking session with a lot of instruments playing simultaneously. However, a DI is now used because it either makes the instrument sound better (like in the case of electric keyboards) or is just easier and faster.
Why can’t you just plug your guitar or keyboard directly into the mic preamp without a direct box? Most preamps now have a separate input dedicated to instruments, but there was a time when that wasn’t the case, and plugging an electric guitar (for instance) into an XLR mic input would cause an impedance mismatch that changed the frequency response of the instrument (although it wouldn’t hurt anything), usually causing the high frequencies to drop off and making the instrument sound dull.
Advantages of Direct Injection
There are a number of reasons to use direct injection when recording: it cuts down on leakage, it can make certain instruments sound better, and it’s often just easier and faster than miking an amp.
Direct Box Types
There are two basic types of direct boxes: active (which can provide gain to the audio signal and therefore contains electronics requiring either battery or AC power) and passive (which provides no gain and doesn’t require power). Which is better? Once again, there are good and poor examples of each. Generally speaking, the more you pay, the higher the quality.
An active DI sometimes has enough gain to be able to actually replace the mic amp and connect directly to your DAW.
An excellent passive DI can be built around the fine Jensen transformer specially designed for the task (see www.jensen-transformers.com for do-it-yourself instructions), but you can buy essentially the same thing in Radial Engineering’s JDI direct box (see the figure on the left). Also, most modern mic preamps now come with a separate DI input on a 1/4” guitar jack.
Direct Box Setup
Not much setup is required to use a direct box. For the most part, you just plug the instrument in and play. About the only things you might have to set are the gain on an active box (usually just a switch that provides a 10 dB boost or so) and the ground switch. Most DIs have a ground switch to reduce hum in the event of a ground loop between the instrument and the DI; set it to the quietest position.”
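The impedance-mismatch point in the excerpt can be illustrated with a quick back-of-the-envelope calculation. The model below treats the pickup and the input as a simple resistive voltage divider; the impedance values are typical ballpark assumptions, not measurements of any particular gear, and real pickups are inductive, so the actual loss grows with frequency — which is exactly why the highs dull out first.

```python
import math

def level_loss_db(source_ohms, input_ohms):
    """Level drop in dB when a source impedance drives an input impedance,
    modeled as a simple resistive voltage divider."""
    ratio = input_ohms / (source_ohms + input_ohms)
    return 20 * math.log10(ratio)

# A passive guitar pickup looks like very roughly a 10 kOhm source (assumption).
print(round(level_loss_db(10_000, 1_000_000), 2))  # 1 MOhm DI input: barely any loss
print(round(level_loss_db(10_000, 1_500), 2))      # 1.5 kOhm mic input: a big drop
```

The 1 MΩ instrument input loses a fraction of a dB, while the low-impedance mic input knocks well over 15 dB off the signal — which is the mismatch a DI box (or a dedicated instrument input) is there to prevent.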
You can read more from The Recording Engineer’s Handbook and my other books on the excerpt section of bobbyowsinski.com.
Most mastering engineers start in recording before they transition into mastering, but Gene Grimaldi took a different route, beginning his career at Sony’s New Jersey CD pressing plant instead.
But Los Angeles called, and Gene’s mastering journey began at the venerable Future Disc, from which he eventually worked his way up to chief engineer at Oasis Mastering. Along the way he’s lent his talents to big hit albums by Lady Gaga, Ellie Goulding, Nicki Minaj, Ne-Yo and many, many more.
In the interview we cover everything from working production in a mastering studio (a big and expensive part of the job in the pre and early-digital days), to working exclusively in-the-box, and doing it without using limiters.
In the intro we’ll look at a hot upcoming controversy – Warner Music and Avenged Sevenfold going to court over California’s arcane “7 Year Rule.” I’ll also talk about evaluating monitor speakers and what to listen for.
If you’ve ever had something physically wrong with you yet the doctor wasn’t able to diagnose exactly what it is, you know how frustrating and time-consuming that can be. Soon there may be a new tool in the doctor’s bag that could make that situation a thing of the past – music. A new technology that transforms proteins into musical melodies might one day be used to help doctors diagnose disease more easily.
The researchers, from the University of Tampere in Finland, Eastern Washington University, and the Francis Crick Institute in London, noted that music and genes both contain repetition and have a finite number of options – four bases in genes and twelve notes in music. If proteins are converted into music, the resulting melody could help tell a doctor just how physically well or unwell you are. The idea is that the ears might be able to detect more than the eyes can.
The technique the researchers came up with is called “sonification,” and the study was built around three main questions: What does the data sound like? Are there any benefits? And can we hear particular anomalies in the data?
The melodies were created using a combination of Dr. Jonathan Middleton’s composing skills and algorithms, and when they were played to people, most were able to recognize the melodies and then link them to visuals such as graphs and tables, showing that hearing proteins was easier than expected.
Keep in mind that proteins are usually studied under a microscope, so this was a completely new way of looking at them, although work has been going on in this area since the 90s.
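To make the idea of sonification concrete, here’s a minimal sketch. The study doesn’t publish its mapping here, so the base-to-pitch table below is purely a hypothetical assumption for illustration: each of the four DNA bases gets one note, so repetition in the sequence becomes repetition in the melody.

```python
# Hypothetical mapping for illustration only -- not the study's actual scheme.
BASE_TO_NOTE = {"A": "C4", "C": "E4", "G": "G4", "T": "B4"}

def sonify(sequence):
    """Turn a DNA string into a list of note names, skipping unknown symbols."""
    return [BASE_TO_NOTE[b] for b in sequence.upper() if b in BASE_TO_NOTE]

print(sonify("GATTACA"))  # → ['G4', 'C4', 'B4', 'B4', 'C4', 'E4', 'C4']
```

Even in a toy version like this, a repeated motif in the sequence comes out as a repeated phrase in the melody, which is the kind of pattern the researchers hope an ear can pick up faster than an eye scanning a chart.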
My big question is: will dissonance mean that a disease is present? Could this open up a new area of employment for musicians who study the sounds of the body? Either way, it’s a new and exciting use for music and audio.
Read the full paper of the study in Heliyon, and have a listen to what proteins sound like below.