Music and the Brain: Neuroscientific and Musical Perspectives

Tuesday, February 10, 2009

Paul Lennard and Steve Everett

Paul Lennard (Neuroscience & Behavioral Biology) and Steve Everett (Music) opened the lunch meeting about music and the brain with four questions: (1) What is music? (2) Is music a language? (3) Does music have an adaptive value? and (4) What is culture’s impact on the meaning of music?

Lennard shared a graphical model of how the basilar membrane of the cochlea responds to Bach, yet emphasized that much remains unknown about how the brain processes sounds: for instance, the way the brain processes pitch. Pitch is the perceived highness or lowness of a sound. When different instruments play the A above middle C (with a fundamental frequency of 440 Hz), each has a unique, recognizable quality based on its fundamental and accompanying harmonics. In the case of the oboe playing the A above middle C, the instrument actually produces mostly the harmonics (880 Hz, 1320 Hz, etc.) and only very little of the fundamental frequency. The brain fills in, or reconstructs, the missing fundamental.
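
The missing fundamental is easy to demonstrate for oneself. Here is a minimal Python sketch (my own illustration, not anything presented at the meeting; it assumes NumPy and SciPy are installed) that writes a one-second tone containing only the harmonics of A440, with no energy at 440 Hz itself:

```python
import numpy as np
from scipy.io import wavfile

RATE = 44100                               # samples per second
t = np.linspace(0, 1.0, RATE, endpoint=False)

# Harmonics of A440 (2f, 3f, 4f) -- but no 440 Hz component at all.
harmonics = [880, 1320, 1760]
signal = sum(np.sin(2 * np.pi * f * t) for f in harmonics)
signal = signal / np.max(np.abs(signal))   # normalize to [-1, 1]

# Listeners typically report the pitch of this tone as A440,
# even though the spectrum contains no energy at 440 Hz.
wavfile.write("missing_fundamental.wav", RATE, (signal * 32767).astype(np.int16))
```

Played back, most listeners still hear the pitch of the absent 440 Hz fundamental rather than the lowest component actually present at 880 Hz.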

Everett then asked, “What is music?” and shared with us four minutes of Francis Dhomont’s Frankenstein Symphony. Dhomont cut apart and stitched back together musical elements, much like Mary Shelley’s Dr. Frankenstein did with body parts. Is this music? Often the elements manipulated are not played by musicians but are recordings of various sounds, assembled to create acousmatic music, for which any natural sound is kosher.

For Walt Reed (ILA), Frankenstein sounded like “industrial noise, not music”; music, he argued, needs form and intent. Bob McCauley (CMBC) and Robert DeHaan (Division of Educational Studies) joined the debate: “Must the agency be human?” “Maybe not being random is enough, regardless of the agency?”

Everett responded by citing John Cage: the experience of sound does not depend on the intent of the composer but on the openness of the perceiver. Music can therefore be bird chirps, the sounds of city traffic, or the sounds of a waterfall, as well as the sounds of a string quartet. John Snarey (Candler School of Theology) agreed; music, he said, was a construction of his own ears.

Todd Preuss (Yerkes) raised the question of whether birds perceive bird songs as music, which deepened the question about the perceiver. Cory Inman (Psychology Department) suggested that music evokes emotions with valence for the perceiver, and that for him, as for Reed, Frankenstein sounded as emotional as a soundtrack.

Does it have to do with culture? The organization of sounds, said Everett, is culturally determined. Exposure to new sounds and new organizations can extend what counts as music for an individual; this was his own experience as a first-time listener to Javanese music. The point extends to other cultures, which he introduced in a series of rhetorical questions: “Is Tibetan multiphonic chanting evocative to any listener, or only to those who understand its symbolism?” “Is the timbre of the violin, so much loved in the West, more beautiful than the sound of the Japanese Noh instruments?” “Is beauty the ability to evoke the sublime? For the Japanese ear, the bamboo flute, the shakuhachi, is intended to sound like ‘the wind blowing through grass.’”

Everett answered: there is an optimal ratio between the familiar and the novel. Too novel is not evocative; too familiar is boring. Mozart put many surprises into a context of familiar music. The Austrian Johann Hummel did not; he was greatly liked by his contemporaries, but today we hardly remember him. His contemporary Beethoven, with his many novelties, had to wait for later times to be fully appreciated. Reed, relying on Kant, suggested a different distinction: Beethoven pushed music and the concept of beauty toward the romantic, Hummel toward the classic.

We should replace “beauty” with “meaning,” suggested Lennard, taking us back to one of the opening questions: Is music a language? Darwin considered music a protolanguage preceding human language. Brain imaging studies show that the more a person is trained in music, the more lateralized the processing of music in his or her brain becomes, typically favoring the left hemisphere, as with language. Closer studies of individual voxels (a voxel is the three-dimensional brain-image analogue of the two-dimensional pixel on a computer screen) show that the same voxels are activated in processing words and pitch, though the level of activation varies. Studies of people with aphasia (linguistic impairments) add to the convergence of music and language by showing that aphasics are also impaired in processing music.

Speech and music, then, are mixed together and processed by the same apparatus. How do we cognitively separate them? Laura Namy (Psychology Department) observed that while the apparatus is the same, different systems are involved: the aesthetic, the cultural, and the limbic. Lennard dissented on the last point, informing us that the amygdala of the limbic system is not strongly activated during listening to music. A great deal of brain activation goes on, but mainly in regions linked with culture, such as the temporal and prefrontal lobes.

Namy maintained her skepticism, and Lennard moved to her area of expertise. Children between 7 and 9 months start losing sensitivity to syllables that are not part of their native language. Similarly, children between 7 and 11 months undergo a filtering process for rhythms: while Western European children develop a preference for 1:2 duration ratios over 3:2, the reverse is observed in Balkan children.
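
To make the rhythm contrast concrete, here is a small Python sketch (again my own illustration, with durations chosen arbitrarily; the specific study and stimuli were not named at the meeting) that renders click tracks whose inter-onset intervals stand in a 1:2 versus a 2:3 ratio:

```python
import numpy as np
from scipy.io import wavfile

RATE = 44100  # samples per second

def click_track(durations_ms, repeats=8, filename="rhythm.wav"):
    """Render a repeating rhythm; each duration is the gap between clicks, in ms."""
    click = np.sin(2 * np.pi * 1000 * np.linspace(0, 0.02, int(0.02 * RATE)))
    slots = []
    for _ in range(repeats):
        for d in durations_ms:
            slot = np.zeros(int(RATE * d / 1000.0))
            slot[:click.size] = click       # a short 1 kHz click at each onset
            slots.append(slot)
    data = np.concatenate(slots)
    wavfile.write(filename, RATE, (data * 32767).astype(np.int16))

# A 1:2 short-long pattern, typical of Western simple meters...
click_track([250, 500], filename="ratio_1_2.wav")
# ...versus a 2:3 pattern, as in uneven Balkan meters.
click_track([250, 375], filename="ratio_2_3.wav")
```

The even 1:2 pattern is the kind Western-raised infants come to favor, while uneven ratios like 2:3 are characteristic of the Balkan meters that Balkan infants retain a sensitivity to.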

What if one never heard music? asked Inman. Namy shared the story of a former student of hers who, after receiving a cochlear implant optimized for processing speech, lost her former ability to listen to music, suggesting that the two are processed differently. Is there a critical age for the cochlear implant? For exposure to music? Is there an age beyond which sound is no longer music? Maybe, DeHaan reflected, the critical age can be used to define music, by understanding what is lost beyond it.

Does music have any adaptive value? With this, Lennard returned to one of the opening questions, reminding us that in The Descent of Man, Darwin spoke of music as mysterious but speculated that it played a role in sexual selection. Namy hypothesized about the role of music in social bonding, such as between mother and infant, and Jim Rilling (Anthropology Department) added the example of social bonding through music prior to going to war. The British anthropologist Robin Dunbar was cited: music plays a role in the rituals of grooming.

What about rhythm? Maybe the bimanual drumming of primates precedes music, proposed Lennard. Responding to a question by Richard Patterson (Philosophy Department), Everett pointed out that to listen to music one needs temporal units, and he noted the automaticity with which we recognize rhythmical units: the moment one enters a techno club, one starts moving to the rhythm. This is an expression of the embodiment of music. And while we had so far limited our discussion to music and the brain, we should not neglect the heart, whose beating serves as a frame of reference. Cross-culturally, people agree on slow and fast; they all have hearts that beat in the same range.

As we neared the end of the hour, had we come closer to answering the first question, “What is music?” Kevin McCulloch (Candler School of Theology) suggested a new criterion: songs tend to stick in his head. By contrast, the electronic music we had listened to at the beginning of the hour does not. Schumann, Everett reminded us, was haunted by music stuck in his head and attempted suicide to liberate himself from his musical ghosts.

Ghosts or muses? Music is named after the muses. And we have just peeked into our brains and hearts, aesthetics and culture, to explore how the muses work. Our inquiry did not scare them away, and we were allowed to ask new questions and create new music. It seems the muses will continue to inspire us as composers and listeners, but the questions, the answers, and the music will keep changing.

About Shlomit Finkelstein

Shlomit Ritz Finkelstein earned her PhD in theoretical physics from the Georgia Institute of Technology in 1987 and her second PhD from Emory University in 2009. After a successful career in computer science, she was admitted to the PhD program at the Graduate Institute for the Liberal Arts at Emory, an interdisciplinary department in which she pursued her interest in the neurobiology of language. As a graduate student, she was the first blogger of the Lunch Series of the CMBC. Currently she is an adjunct professor in Emory’s Psychology Department.