From Whale Songs to the Beatles: Computer Analysis of Musical Styles

The Beatles are seen performing, date unknown. From left to right: Paul McCartney, George Harrison, John Lennon, and Ringo Starr on drums. (AP Photo)

Dr. Eric Schulman once contended that "the purpose of science is to get paid for doing fun stuff if you're not a good enough programmer to write computer games for a living." I disagree. Science can be a good reason to do fun stuff even for those with decent programming skills.

I have always had a strong interest in music and art, but unfortunately I am not creative or talented enough to make them myself. Being well aware of my limitations, I took the typical career path of those who really like something but are not very good at it: I went into criticism. But instead of becoming a critic myself, I developed an algorithm that turns the computer into an art and music critic, able to analyze visual art such as the paintings of Pollock and Van Gogh, or music such as that of the Beatles.

As in so many other research projects, the result was quite different from the initial objective, which was to analyze the vocal communication of whales. Many species of whales communicate by producing vocal messages that travel long distances underwater, but the nature of these calls is not well understood. To analyze large databases of whale sounds, graduate student Carol Yerby and I developed a computational method that "learns" and classifies the different sounds whales make. The calls were annotated manually by thousands of volunteers through a project called WhaleFM and then analyzed by our algorithm to map the different calls. The algorithm works by extracting from each sound sample a comprehensive set of numerical descriptors of the audio, and then applying pattern recognition and statistical methods to those descriptors to measure the similarity between each pair of sound samples.
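To make the idea concrete, here is a minimal sketch of that pipeline. This is my own illustration, not the code we actually used: the file names are hypothetical, and the handful of MFCC statistics below stands in for the much richer descriptor set of the real method.

    import numpy as np
    import librosa  # one of several libraries that can compute audio descriptors
    from scipy.spatial.distance import pdist, squareform

    def describe(path):
        # Load the recording and summarize it with a small set of audio statistics
        # (the mean and spread of 13 MFCCs); a stand-in for the full descriptor set.
        samples, rate = librosa.load(path, sr=None)
        mfcc = librosa.feature.mfcc(y=samples, sr=rate, n_mfcc=13)
        return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

    # Hypothetical files; each vector describes one whale call.
    paths = ["call_01.wav", "call_02.wav", "call_03.wav"]
    features = np.array([describe(p) for p in paths])

    # Pairwise distances: small values mean two calls sound alike.
    distance_matrix = squareform(pdist(features, metric="euclidean"))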

When we asked the computer to visualize the similarities between the sounds of the different whales, we noticed something interesting. The dialects of whales living in the same geographic location were more similar to each other than to those of whales living elsewhere. The results were consistent for both killer whales and pilot whales. So the algorithm showed that whales, just like humans, have different accents depending on where they live.
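As a rough illustration of that kind of similarity map (with made-up distances and location labels, not the WhaleFM data), one can project the pairwise distances into two dimensions and color each call by where it was recorded:

    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.manifold import MDS

    # Made-up pairwise distances between five calls; calls recorded in the
    # same place are given smaller distances to each other.
    distances = np.array([
        [0.0, 0.2, 0.9, 1.0, 0.3],
        [0.2, 0.0, 0.8, 0.9, 0.2],
        [0.9, 0.8, 0.0, 0.3, 0.7],
        [1.0, 0.9, 0.3, 0.0, 0.8],
        [0.3, 0.2, 0.7, 0.8, 0.0],
    ])
    locations = ["Iceland", "Iceland", "Norway", "Norway", "Iceland"]

    # Multidimensional scaling turns the distance table into 2-D coordinates.
    coords = MDS(n_components=2, dissimilarity="precomputed",
                 random_state=0).fit_transform(distances)
    for place in sorted(set(locations)):
        idx = [i for i, loc in enumerate(locations) if loc == place]
        plt.scatter(coords[idx, 0], coords[idx, 1], label=place)
    plt.legend()
    plt.show()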

The vocal communication of whales sounds like songs the whales sing to each other. If our algorithm could analyze songs made by whales, we wondered, how well could it analyze songs made by humans? The intersection of computing and the humanities is one of my research interests, so it just made sense to give it a try.

Graduate student Joe George and I started to explore how the audio-analysis algorithms could be used to analyze music. We applied the algorithm to the studio albums of several well-known popular music bands, and naturally we started with the Beatles. Surprisingly, the computer sorted the Beatles albums into their chronological order, although it had no information about them other than the audio data. Just by "listening" to the music and analyzing all the albums, the algorithm was able to determine which album was released before which. It even identified that the songs on Let It Be were recorded before those on Abbey Road, although Let It Be was released later.
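The general idea can be sketched as follows. This is not the published method, and the feature values below are random placeholders: average the descriptors of each album's tracks, then line the albums up along their strongest axis of variation. If a band's style drifts steadily over the years, that axis tends to coincide with release order.

    import numpy as np
    from sklearn.decomposition import PCA

    # Hypothetical per-track descriptor vectors for three albums, standing in
    # for the output of a feature extractor like the earlier sketch.
    album_tracks = {
        "Please Please Me": np.random.rand(14, 26),
        "Rubber Soul": np.random.rand(14, 26),
        "Abbey Road": np.random.rand(17, 26),
    }
    names = list(album_tracks)
    album_vectors = np.array([t.mean(axis=0) for t in album_tracks.values()])

    # Project each album onto the single strongest direction of change and
    # read the ordering off that axis.
    axis = PCA(n_components=1).fit_transform(album_vectors).ravel()
    for i in np.argsort(axis):
        print(names[i])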

In a similar way we tested several other bands, such as ABBA, Queen, and U2, and the algorithm was able to deduce the chronological order of their albums automatically. In the case of U2, it detected only a mild change in the band's musical style during the late '80s and early '90s. For Queen, the algorithm sorted the albums in almost perfect chronological order, but it also automatically separated Hot Space and subsequent albums from the earlier ones, which agrees with the band's shift from its '70s musical style to its '80s sound.

One band we were not able to analyze was Led Zeppelin. The algorithm could not produce anything sensible from that band's albums; as far as it was concerned, each album was a world unto itself, separate from the others. For more recent music we attempted to analyze the studio albums of Taylor Swift, but the computer simply clustered the albums together, without identifying musical differences between them. We used that experiment as a negative control, but our attempt to study modern music ended rather prematurely, after my student refused to analyze the albums of Justin Bieber.

The main purpose of the algorithm was to provide a way of studying music quantitatively, but such algorithms might eventually have practical uses, such as music discovery. In the era of big data, computers will help by searching huge music databases and identifying music we are likely to enjoy but would not otherwise have known about. When these algorithms become sufficiently smart, they will complete the transformation of music-consumption culture, giving all musicians an equal opportunity to reach their target audience without having to sign with a mighty music label or make their way onto radio station playlists.
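A toy example of what that kind of discovery could look like, assuming the same sort of audio descriptors as in the sketches above (everything here is hypothetical, not an existing service):

    import numpy as np

    def recommend(liked_vectors, catalog_vectors, catalog_titles, top_n=5):
        # Represent the listener's taste as the centroid of the songs they
        # already like, then rank the catalog by distance to that centroid.
        taste = np.mean(liked_vectors, axis=0)
        distances = np.linalg.norm(catalog_vectors - taste, axis=1)
        return [catalog_titles[i] for i in np.argsort(distances)[:top_n]]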
