The biggest thing that every musician and engineer lives in fear of is losing their hearing. To make matters worse, it happens naturally to all of us as we grow older, although it's gradual enough that we can unconsciously compensate. That said, all it takes is a loud concert, a sudden blast of feedback, or cymbals in your ears on stage, and your ears will ring for days and some of your hearing may never return to the way it was. The good news is that there's been a new breakthrough by research scientists at Harvard and MIT that might make permanent hearing loss a thing of the past.
We actually have two sets of auditory cells in our ears that are long and thin like hairs. Hair cells grow in bundles in the inner ear and are mostly concerned with balance. In the cochlea, the hearing organ deep in the ear canal, there are two kinds of specialized hair cells: outer hair cells that amplify sound, and inner hair cells that convert sound into electrical signals sent to the brain. The problem is that each cochlea (one in each ear) has only about 16,000 hair cells, and once they're damaged, they don't regenerate – as in, gone for good!
That's what happens with most mammals, but fish, birds, lizards and amphibians can regenerate cochlear hair cells that die, sometimes within just a few days. Coming back to mammals, mice and other small mammals also have this regenerative ability when they are newly born, and that's where the researchers came in. The team managed to grow up to 11,500 hair cells from the stem cells of one baby mouse ear, which could bring hope to people of all ages already hard of hearing or getting that way.
These laboratory-grown hair cells aren't perfect, however, even though they appear to have many of the characteristics of actual inner and outer hair cells. They might not end up being fully functional, so the most immediate use for this new technique might be to create a large set of the cells for testing drugs and identifying compounds that can heal damaged hair cells or regrow them and restore hearing.
Either way, that’s good news for just about everyone who depends on their hearing for a living. Now turn that music down!
2,500 years ago the Greeks composed songs where the voice was accompanied by the lyre, reed-pipes, and various percussion instruments. Much was already known about these instruments thanks to descriptions and paintings from archaeological remains, which Greek scholars used to establish the timbres and range of pitches the instruments produced. The songs, however, were lost to time until a few dozen ancient documents from around 450 BC inscribed with a vocal notation were found. These consisted of alphabetic letters and signs placed above the vowels of the Greek words, which scholars recognized as an ancient form of sheet music. So what did this ancient Greek music sound like?
You can now hear David Creese from the University of Newcastle playing “an ancient Greek song taken from stone inscriptions and played on an eight-string ‘canon’ (a zither-like instrument) with movable bridges.” The tune is credited to Seikilos and is supposedly 100% accurate.
Believe it or not, the most difficult part of transcribing the ancient tunes wasn't the melody, it was the rhythm. The secret was in the pattern of the words and phrases (which makes sense). There's more about how they dissected the tuning and melody in this article from BBC World News.
I don’t see this or any other ancient Greek music hitting the charts any time soon, but it’s interesting to get a glimpse into the music of ancient history.
All of us know that animals respond to music (my cats Roger and Chloe love to hang with me when I mix), but are there certain genres that they like better? A new study by the University of Glasgow in conjunction with the Scottish Society for the Prevention of Cruelty to Animals found that shelter dogs definitely have some musical preferences. The dogs wore heart rate monitors and their cortisol levels (known as the "stress hormone") were checked in order to find out how relaxed or agitated they were when different types of music were played.
The first thing that the study found is that shelter dogs in general have lower levels of stress when music is played. This is important because dogs find shelters very uncomfortable and are likely to react in ways that make them less likely to be adopted. Thankfully, music seems to relax them a bit.
Ah, but what kind of music? It turns out that the dogs responded more to soft rock and reggae than any other genre. Believe it or not, Motown got the most paws down, and heavy metal actually induced body shaking. Audio books, oddly enough, were also something that the pooches liked.
The tests were so successful that the Scottish SPCA decided to pump music into its shelters permanently.
The university is now conducting similar studies on cats, but the kitties aren’t being as cooperative as the pooches (no surprise there, as cat lovers know). Cats are averse to wearing heart monitors, so only the cortisol levels are being measured. There have been studies before, and music made especially for felines (like this recent album) created as a result, but in my experience the kitties seem to like music that’s soft and rhythmic, and aren’t too happy with anything loud and aggressive. Just like many humans, I’ve found.
Johann Sebastian Bach is generally considered to be one of the great classical composers, with compositions that exhibit a technical mastery of harmony and counterpoint. One of the things he excelled at was writing short polyphonic hymns known as chorale cantatas (he wrote over 300), which are short four-voice pieces rich in harmony. As it turns out, computer scientists find these pieces very attractive because of their algorithm-like structure. The problem is that even though you can teach a computer to compose using a similar algorithm, the results have never been particularly convincing. Until now.
Thanks to the work of Gaetan Hadjeres and Francois Pachet at the Sony Computer Science Laboratories in Paris, using the artificial intelligence of a machine they call DeepBach, they're able to produce very convincing chorale cantatas that even some pros think were composed by Bach himself.
Essentially, they trained DeepBach's neural network by teaching it all 352 of Bach's chorales, then transposing them to other keys for a data set of over 2,500 chorales. The machine then does its thing, and before you know it, it's composed a cantata that contains so much of the Bach style that even many trained listeners believe it came from the great composer himself.
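The transposition trick described above is a standard data-augmentation move, and it's easy to see how one melody multiplies into many. Here's a minimal sketch, assuming melodies are simply lists of MIDI note numbers (this is an illustration of the general idea, not DeepBach's actual data format):

```python
# Sketch of transposition-based data augmentation for melodies.
# Melodies are represented as lists of MIDI note numbers (60 = middle C);
# this representation is an assumption for illustration, not DeepBach's own.

def transpose(melody, semitones):
    """Shift every note in a melody up or down by a number of semitones."""
    return [note + semitones for note in melody]

def augment(chorales, shifts=range(-6, 6)):
    """Expand a dataset by transposing each chorale into nearby keys."""
    return [transpose(chorale, s) for chorale in chorales for s in shifts]

# A single four-note soprano fragment grows into 12 transposed copies.
fragment = [60, 62, 64, 65]          # C, D, E, F
dataset = augment([fragment])
print(len(dataset))                  # 12 keys from one fragment
print(dataset[6])                    # the shift of 0 is the original notes
```

Scaling the same idea up, 352 chorales times a handful of transpositions quickly yields the 2,500-plus training examples mentioned above.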
How much so? A study was launched with 1,600 people (400 were professional musicians or music students) who were asked to compare two different harmonies of the same melody, then determine which of the two harmonies sounded more like Bach. When given the music from DeepBach, about half thought it was the real thing. Keep in mind that when given an authentic Bach piece to listen to, only 75% thought it came from Bach.
This is actually a very interesting step forward not so much from a composition standpoint, but more about music analysis. Bach cantatas follow a very precise structure that most other music doesn’t adhere to, but as a producer, I look forward to the day when I can get a readout as to the inner workings of a hit so I can learn from it. Hopefully DeepBach is a step towards that.
Listen to what DeepBach came up with.
To all my friends and readers, I wish you a happy holiday. I truly appreciate your support and I am humbled and honored that you take the time out of your busy day to read my musings.
It’s a good day to take some time off, eat a nice meal, have a cocktail and watch some football, but don’t forget that it’s also a great day to see some live music as well!
If you’ve ever had something physically wrong with you yet the doctor wasn’t able to diagnose exactly what it is, you know how frustrating and time-consuming that can be. Soon there may be a new tool in the doctor’s bag that could make that situation a thing of the past – music. A new technology that transforms proteins into musical melodies might one day be used to help doctors diagnose disease more easily.
The researchers, from the University of Tampere in Finland, Eastern Washington University and the Francis Crick Institute in London, discovered that music and genes both contain repetition and have a finite number of options – four base pairs in genes and twelve notes in music. If the proteins are changed into music, the melody could actually help tell a doctor just how physically well or unwell you are. The idea is that the ears might be able to detect more than the eyes.
The technique that the researchers came up with is called "sonification," and the study was based around three main questions: What will the data sound like? Are there any benefits? And can we hear particular anomalies in the data?
The melodies were created using a combination of Dr. Jonathan Middleton's composing skills and algorithms, and when they were played to people, most were able to recognize the melodies and link them to visuals such as graphs and tables, showing that hearing proteins was easier than expected.
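To make the idea of sonification concrete, here's a toy sketch that maps each amino acid in a protein sequence to a pitch. The mapping below is entirely invented for illustration; the study relied on Dr. Middleton's composing algorithms, not this scheme:

```python
# Toy protein "sonification": map each amino acid to a MIDI pitch.
# The one-letter codes and the scale mapping are assumptions made up
# for this sketch -- not the scheme used in the actual study.

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard one-letter codes
C_MAJOR = [60, 62, 64, 65, 67, 69, 71]  # one octave of C major in MIDI

def sonify(sequence):
    """Turn a protein sequence into a list of MIDI pitches."""
    pitches = []
    for residue in sequence:
        index = AMINO_ACIDS.index(residue)
        octave, degree = divmod(index, len(C_MAJOR))
        pitches.append(C_MAJOR[degree] + 12 * octave)
    return pitches

# The first few residues of a protein, rendered as pitches.
print(sonify("GIVEQ"))
```

Because repeats in the sequence become repeats in the melody, anomalies in the data turn into audible changes of contour, which is exactly the kind of pattern ears are good at catching.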
Keep in mind that proteins are usually studied under a microscope, so this was a completely new way of looking at them, although work has been going on in this area since the 90s.
My big question is, will dissonance mean that a disease is present? Will this open up a new area of employment for musicians, who just study the sounds of the body? Either way, it’s a new and exciting use for music and audio.
Read the full paper of the study in Heliyon. Have a listen to what proteins sound like below.
Just about any music involves repetition, and we’ve proved by our listening habits over hundreds of years that we like it that way. It’s not just the songs, symphonies or operas that are so often built on patterns that repeat (like drumbeats, rhythms, melodies, or harmonic cycles), it’s also the fact that we love to listen to the same recording again and again. We don’t get bored by a part or a hook that we’ve heard before; our enjoyment may actually increase.
While so many techniques (like tension and release) are common among many art forms, repetition is something that's singular to music. In her book On Repeat: How Music Plays the Mind, Elizabeth Hellmuth Margulis outlines how, if you were to repeat a word or phrase common in the hook of any pop song, after a while it begins to become just a collection of sounds and loses its meaning, something that songwriters unknowingly seem to take advantage of more and more. This is called "semantic satiation" – that moment when a phrase is overloaded through so much repetition that it slips out of the meaning-processing part of our brains.
But patterns of repetition aren’t just songwriter techniques. They’re invitations for listeners to participate. As Margulis puts it: “Repetitiveness actually gives rise to the kind of listening that we think of as musical. It carves out a familiar, rewarding path in our minds, allowing us at once to anticipate and participate in each phrase as we listen. That experience of being played by the music is what creates a sense of shared subjectivity with the sound, and – when we unplug our earbuds, anyway – with each other, a transcendent connection that lasts at least as long as a favorite song.” That could be the reason why we return again and again to listen to a song we love. We like the way that it plays us, rather than the way we play it.
In fact, a USC study found that songs that have the most repetition are the ones with the highest chart positions and are more likely to be hits. If you really want some good examples of this, just check out VH1’s 15 Most Repetitive Songs Of All Time and look at the number of times the hook is repeated for each. For all you songwriters, do you think you can break 100 repetitions in one song?
Repetition is part of our musical experience and it’s something to be embraced. It’s been with us a long time already, and looks like it will be with us for a long time in the future.
Thanks to Ryan Kairalla for having me on his Break The Business podcast. We talked primarily about succeeding as an independent artist, but strayed off into other subjects as well. On the podcast, Ryan also covered the legacy of boy band maker and fraudster Lou Pearlman. A very interesting listen!
Here's a study that every gigging musician should know about. It appears that music can profoundly influence how we taste things, and most significantly, it's beer that it holds the most influence over. A study published in Frontiers in Psychology by Dr. Felipe Reinoso Carvalho from the Vrije Universiteit Brussel and KU Leuven, in collaboration with the Brussels Beer Project and the U.K. rock band the Editors, found that different sounds and sound levels can enhance or detract from our sense of taste.
What's most depressing is the fact that high decibel level music causes our taste perception to decrease, which isn't exactly good for club and venue bar business.
The Brussels Beer Project produced “a porter-style ale with a medium body and an Earl Grey infusion that produced citrus notes, contrasting with the malty, chocolate flavors from the mix of grains used in production.” There were over 200 drinkers in the study, and they tasted the beer under three different conditions.
“The first group acted as a control and drank beer along with the bottle without a label, and did not listen to a specific song. The second group tasted the beer after seeing the bottle with the label, while the third group drank the beer presented with the label while listening to “Oceans of Light” off the band’s latest album. Prior to drinking, the group was asked to rate how tasty they believed the beer to be, and after, rate how much they enjoyed the drink.”
It turns out that the label on the bottle didn’t change the study group’s taste perception as much as the music did, as the third group indicated that they enjoyed the beer more than the other two groups.
A 2011 study also found that music could enhance the joy of drinking wine as well. The perceived character of the wine closely mirrored the type of music being listened to at the time.
That proves it. Choose your music with your food wisely!
I know that it's common to feel like you've just heard the loudest sound in the world when leaving your local club after a metal band does its thing, or maybe to compare that to the launch of a Saturn V rocket, but neither of those is even close. According to a fascinating article on FiveThirtyEight, the loudest sound ever measured in the world occurred on the morning of August 27th, 1883, when the volcanic island of Krakatoa blew itself to bits. It was reported that the sound was heard over 2,800 miles away, and there's some evidence that the sound actually traversed the globe multiple times.
While a Saturn V has been measured at 204dB during launch, Krakatoa measured 174dB from 100 miles away, still loud enough to pop your eardrums!
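Those distances and levels can be related with a back-of-the-envelope calculation: under free-field spherical spreading, sound pressure level drops about 6 dB for every doubling of distance. Here's a rough sketch of that rule applied to the 174 dB figure measured 100 miles out (it ignores atmospheric absorption and terrain, so treat the numbers as illustrative only):

```python
import math

def spl_at(distance, ref_level, ref_distance):
    """Estimate sound pressure level at `distance` given a level measured
    at `ref_distance`, using free-field spherical spreading (a 6 dB loss
    per doubled distance). Atmospheric absorption is ignored, so this is
    only a rough sketch, not an exact model of the Krakatoa blast."""
    return ref_level - 20 * math.log10(distance / ref_distance)

# Krakatoa: 174 dB measured 100 miles away.
print(round(spl_at(2800, 174, 100)))   # rough level 2,800 miles out
print(round(spl_at(1, 174, 100)))      # naive extrapolation to 1 mile
```

Even by this crude estimate, the level 2,800 miles away still lands around the threshold of pain, which squares with reports of the blast being heard at that distance.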
What actually travelled around the world were the infrasonic sound waves (below our hearing range) from the blast. They were spotted in a way similar to how nuclear tests are monitored and seismologists check for earthquakes – with microbarometers and low-frequency microphones. In fact, a more recent case centered around the Chelyabinsk meteor that exploded over southern Russia on February 15th, 2013. The nearest monitoring station was 435 miles away, yet it measured an infrasound level that was still 90dB.
So can infrasonic waves kill you? It turns out that the Air Force has done tests and found that humans exposed to an infrasonic level of 110dB experience changes in their blood pressure and respiratory rates to the point where they get dizzy and lose their balance. A 1965 test found that the test subjects began to feel their chests moving without their control at around 151dB, and at that point, their lungs were being artificially inflated and deflated.
While certainly not on the level of the full frequency range explosion of Krakatoa, yes, super low frequencies can be a killer as well. Just remember that a new active volcano has now emerged where Krakatoa once stood. Best to keep 3,000 miles away.