Sound: The Music of the Universe

Since both our ears and eyes guide us in everyday life, shouldn't both senses also be useful for studying data sets?


What is the Music of the Spheres? Is music a key to understanding our universe? This talk (Honor Harger's "A history of the universe in sound") underlines an emerging and exciting sensibility in the sciences and the sonic arts: the idea that sound can effectively illustrate aspects of phenomena under study. (We also see examples of this in Janna Levin's 2011 TED Talk "The Sound the Universe Makes" and the work of Robert Alexander.) Since both our ears and eyes guide us in everyday life, shouldn't both senses also be useful for studying data sets?

Visualization is a cornerstone of research and a key to understanding information. It has a long history, and we use it effortlessly. For example, how many people start their day by glancing at the graphs in a newspaper's financial section? Likewise, when we watch science programs such as NOVA, part of the fun is getting our eyes wowed by high-tech graphics that make highly abstract concepts understandable.

Information can also be presented to the ears. Sometimes it is by necessity. The software program xSonify was created to allow a blind astronomer to study data sets. Interestingly, her sighted colleagues also use it, as they find that some patterns are more readily detected through listening than through looking. With our common emphasis on the visual, we tend to overlook the role that the ears play in our experience of environments, both real ones and multimedia simulations. As Honor's talk shows, our intuitive understanding of information is strengthened when both images and sounds are associated with it. Adding sound makes it more "alive".

Sonifications are a priority in a number of outreach and educational projects connected to the Berkeley Center for Cosmological Physics. One is Rhythms of the Universe, an upcoming multimedia presentation created by George Smoot and percussionist and former Grateful Dead drummer Mickey Hart. Another is a planetarium show on dark matter, now entering production with personnel at Berkeley as well as at CERN in Geneva.

The creation of multimedia information displays brings up a number of design questions. Some approaches are more literal, others more symbolic. Most of us are familiar with astronomical photographs, although we rarely see a raw photo: filters are commonly used to highlight certain color regions. When electromagnetic waves are studied, they are typically analyzed and rendered in spectral form, so that individual frequency components can be examined. Other types of displays are more abstract. Those who study helioseismology (sun quakes) face the challenge of working with and describing the complex vibrations of our sun, which constantly quivers like a ball of jello. They commonly employ a diagram that shows which frequencies carry high energy depending on how many horizontal vibrational nodes are present. This image is very informative, and it is completely symbolic -- nothing on the sun actually looks like this.

[Figure: solar oscillation power as a function of frequency and spherical harmonic degree. Source: soi.stanford.edu]

The distinction between literal and symbolic is somewhat akin to how one views a location on Mapquest. There is a choice of satellite view, a literal representation, or map view, a symbolic representation. Both have equal integrity. The one we choose simply depends on what type of information we need.

If we're rendering with sound, we can also choose between literal and symbolic approaches. In her talk, Harger shares intriguing examples of listening through the "lens" of radio waves. This approach makes intuitive sense, given our everyday experience of radio broadcasting. Like those of Janna Levin and Robert Alexander, these are literal renderings of radio wave activity. The sonic results tend to resemble various flavors of filtered noise, with occasional patterns and anomalies indicative of identifiable phenomena.

Symbolic renderings create other perspectives. Literal renderings are not always compatible with the capabilities of our auditory system. When data points are treated as audio samples and played back at audio rates (typically 44,100 values per second), quick changes are lost to us, as we can't hear fluctuations discretely at the millisecond level. If, instead, we treat the data points symbolically, for example as pitches, we are better able to "magnify" what we are listening to. The contours of a visual graph become a melody, and we can stretch its range and adjust its tempo and duration to suit our needs.
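To make the mapping concrete, here is a minimal sketch in Python. The function name, pitch range, and note length are our own illustrative choices, not anything prescribed by the tools mentioned above: each data point is scaled to a pitch between two bounds and rendered as a short sine tone, so the contour of the data plays back as a melody.

```python
import numpy as np
import wave

RATE = 44100  # audio sample rate, in values per second

def sonify_as_melody(data, low_hz=220.0, high_hz=880.0,
                     note_sec=0.15, filename="melody.wav"):
    """Map each data point to a pitch between low_hz and high_hz,
    then render the sequence as short sine tones in a WAV file."""
    d = np.asarray(data, dtype=float)
    span = max(float(d.max() - d.min()), 1e-12)   # guard against flat data
    norm = (d - d.min()) / span                   # scale data into 0..1
    freqs = low_hz * (high_hz / low_hz) ** norm   # logarithmic pitch mapping
    t = np.arange(int(RATE * note_sec)) / RATE
    env = np.hanning(t.size)                      # fade each note in and out
    audio = np.concatenate([env * np.sin(2 * np.pi * f * t) for f in freqs])
    pcm = (audio * 0.8 * 32767).astype(np.int16)  # 16-bit PCM samples
    with wave.open(filename, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(RATE)
        w.writeframes(pcm.tobytes())

# Example: a rising-then-falling data contour becomes a rising/falling melody.
sonify_as_melody(np.sin(np.linspace(0, np.pi, 40)))
```

A logarithmic pitch mapping is used because we judge pitch by frequency ratios, so equal steps in the data come out as equal musical intervals.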

Thus, an illustrative sonification of the helioseismology graph might sound like this.

A more analytical example is a resynthesized pulsar signal, similar to the one played in this talk. Pulsar data sets describe amplitude changes of various electromagnetic frequencies. By transposing these light frequencies to proportionally related sound frequencies and applying the amplitude changes of the data to them, we get a sound that is similar rhythmically to the one heard in this talk, but that also reveals the harmonies among its spectral components, a signature "chord" that is present in the data but is not apparent in the literal rendering.
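As a rough illustration of that resynthesis idea -- with made-up component frequencies and envelopes, since the actual pulsar data isn't given here -- the sketch below transposes a few electromagnetic components into the audible range by one common factor and applies an amplitude envelope to each:

```python
import numpy as np

RATE = 44100  # audio sample rate, in values per second

def resynthesize(freqs_hz, envelopes, scale, duration):
    """Transpose electromagnetic frequency components into the audible
    range by one common factor and drive each with its own amplitude
    envelope. Dividing all frequencies by the same `scale` preserves
    their ratios -- the source of the signature chord."""
    t = np.arange(int(RATE * duration)) / RATE
    audio = np.zeros_like(t)
    for f, env in zip(freqs_hz, envelopes):
        amp = np.interp(t, np.linspace(0.0, duration, env.size), env)
        audio += amp * np.sin(2 * np.pi * (f / scale) * t)
    return audio / max(np.abs(audio).max(), 1e-9)  # normalize to -1..1

# Hypothetical data: three components that pulse on and off together,
# mimicking the rhythm of a rotating pulsar.
pulse = np.tile(np.r_[np.ones(5), np.zeros(15)], 10)
audio = resynthesize([4.0e8, 6.0e8, 1.0e9],        # EM frequencies in Hz
                     [pulse, 0.7 * pulse, 0.4 * pulse],
                     scale=1.0e6, duration=2.0)    # lands at 400/600/1000 Hz
```

Because every component is divided by the same factor, the frequency ratios -- and hence the "chord" -- survive the transposition intact.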

Finally, the underlying rumble of everything, cosmic microwave background radiation, is fascinating to hear as a literal radio wave. In this case (as with Janna Levin), the sound is heard in literal form, transposed up by a large number of octaves. However, the radiation is typically studied in a spectral format. By transforming the spectrum into a sonification in which intensity values are remapped as pitches, we get a melody that unfolds in the same pattern the spectral plot shows when read from left to right.
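The octave transposition mentioned above follows a simple recipe: double (or halve) the frequency until it lands in the audible band, which preserves pitch relationships exactly. A small sketch, with conventional 20 Hz to 20 kHz bounds assumed for the audible band:

```python
def transpose_to_audible(f_hz, lo=20.0, hi=20000.0):
    """Shift a frequency by whole octaves (factors of 2) until it falls
    inside the audible band, returning the result and the octave count."""
    assert f_hz > 0, "frequency must be positive"
    octaves = 0
    while f_hz < lo:      # too low: double it, one octave up at a time
        f_hz *= 2.0
        octaves += 1
    while f_hz > hi:      # too high: halve it, one octave down at a time
        f_hz /= 2.0
        octaves -= 1
    return f_hz, octaves

# A 0.0001 Hz oscillation becomes audible after 18 upward octaves:
print(transpose_to_audible(1e-4))  # -> (26.2144, 18)
```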

While symbolic sonifications have not yet yielded any major new insights, the practice is fairly new. Given scientists' increasing reliance on multimedia technologies to present and study their work, we have a strong hunch that it is only a matter of time before ear-tickling sonifications become as standard a component of research as eye-popping graphics. As Marcia Bartusiak points out in Einstein's Unfinished Symphony: Listening to the Sounds of Space-Time (Joseph Henry Press, 2000), new forms of information gathering have historically led to unexpected discoveries. Radio astronomy turned up pulsars and quasars. X-ray astronomy led to the discovery of black holes. If sonification is as common to the next generation of researchers as financial graphs are to us today, there is no telling what types of discoveries this approach may ultimately bring.

The idea of Music of the Spheres dates back to the ancient Greek philosopher Pythagoras. Today, he is probably most familiar to us because of the right triangle theorem we learned about in middle school. But for Pythagoras, principles of triangles were just a single note in the grand musical work that was the cosmos, wherein ratios, harmony, and the heavenly bodies were united voices of the cosmic song. Today, we may think of Music of the Spheres as being more fanciful than scientific. Yet musical language appears frequently in astronomers' descriptions of space. In a recent post on exoseismology, the exploration of seismic activity in distant stars, the stars themselves are compared to musical instruments, with characteristic sound wave frequencies that give scientists information about their composition and size, as well as the presence of planets. The more we learn about the universe, the more we seem to rely on musical terminology to describe it. If the universe is, at some level, music, then it seems only natural that we should study it with the tools of musical thinking.
