
when senses combine

As we have studied the different sensory systems here in Paris, I personally have been the most intrigued by how those systems interact with each other. In everyday life, information from each sense frequently combines with and affects information from other sensory systems. Among other examples, studies have shown that the smell of food can affect its taste (Burdach et al., 1984), someone’s mouth movements can affect what we hear them saying (McGurk and MacDonald, 1976), and the form of a nonsense shape can predict what we will name that shape (Köhler, 1929).

The kiki-bouba task is one particularly famous version of this shape-symbolism effect, in which participants are asked to match names with nonsense shapes. This version of the task comes from a 2001 scientific study in which two scientists found that, when asked to match the names bouba and kiki to the shapes shown below, 95% of people called the jagged shape “kiki” and the rounded shape “bouba” (Ramachandran and Hubbard, 2001).

The “kiki” and “bouba” shapes
From Hamburg et al., 2017

In the years since this initial paper was published, many subsequent studies have tried to understand why so many people make the same sound-shape connections (e.g., Cuskley et al., 2015). Scientists believe that one reason many people label the nonsense shapes similarly is that the sounds themselves, and even the letters used to write the names, mimic the rounded and jagged contours of the shapes (Ramachandran and Hubbard, 2001).

To match the sound of a word to what it looks like, our brains need to be able to integrate and compare auditory and visual information (Król and Ferenc, 2019). This ability is one example of what is known as multisensory integration. In general, multisensory integration is when information from more than one sensory system is combined to create one unified representation (Stein and Stanford, 2008).

Multisensory integration begins when we are infants and continues to develop throughout childhood (Flom and Bahrick, 2007; Barutchu et al., 2009, 2010). Previous researchers have shown that multisensory integration is important for abilities like target detection and reaction time, as well as for the development of other cognitive skills (e.g., Diederich and Colonius, 2007; Lippert et al., 2007; Dionne-Dostie et al., 2015).

Previous research has also shown that multisensory integration is impaired in children with intellectual disabilities and individuals with autism spectrum disorder (Hayes et al., 2003; Oberman and Ramachandran, 2008). Interestingly, the kiki-bouba task is one of the ways researchers test for multisensory integration. Since the kiki-bouba task involves matching auditory information (the names) and visual information (the shapes), abnormal results can indicate multisensory integration problems.

In their recent study, Hamburg et al. used the kiki-bouba task to assess multisensory integration in adults with Down syndrome. Down syndrome occurs when someone has an extra copy of their twenty-first chromosome (for reviews, see: Antonarakis et al., 2004; Kazemi et al., 2016). It is the most common genetic cause of intellectual disability in the world, and it can affect a range of cognitive abilities (Asim et al., 2015). Many of the cognitive abilities that are impacted by Down syndrome involve brain structures that develop relatively late (Edgin, 2013). Since multisensory integration also develops throughout childhood, the authors predicted that this ability could be affected by Down syndrome.

Trisomy 21

To test this prediction, Hamburg et al. first evaluated participants with Down syndrome on several background measures of general cognitive ability and everyday adaptive abilities. Then these participants and typically developing control participants completed the kiki-bouba task. The authors then calculated the overall correct response rate for both groups of participants. Based on previous evidence, matching “kiki” to the pointy shape and “bouba” to the rounded shape was considered a correct answer.

The data showed that, among individuals with Down syndrome, the correct response rate on the kiki-bouba task was 72.5% compared to 90% in the typically developing age-matched controls. The authors therefore concluded that multisensory integration deficits are relatively common in individuals with Down syndrome. Additionally, for the participants with Down syndrome, the authors found that there was a significant relationship between individuals’ kiki-bouba task score and both their general cognitive ability score and their everyday adaptive abilities.
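For readers curious about the arithmetic behind those numbers, here is a minimal Python sketch of how one might compute and compare correct response rates like these. The counts are illustrative, chosen only to roughly reproduce the reported 72.5% and 90% rates, and the test shown is a generic one, not the authors' actual data or analysis.

```python
# Minimal sketch: comparing correct response rates between two groups.
# Counts are illustrative only (chosen to roughly match the reported
# 72.5% and 90% rates), not the data or analysis from Hamburg et al. (2017).
from scipy.stats import fisher_exact

correct_ds, total_ds = 29, 40   # hypothetical Down syndrome group
correct_td, total_td = 36, 40   # hypothetical control group

print(f"Down syndrome group: {correct_ds / total_ds:.1%} correct")
print(f"Control group:       {correct_td / total_td:.1%} correct")

# 2x2 table of correct vs. incorrect responses for each group
table = [[correct_ds, total_ds - correct_ds],
         [correct_td, total_td - correct_td]]
odds_ratio, p_value = fisher_exact(table)
print(f"Fisher's exact test: p = {p_value:.3f}")
```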

The authors found that individuals with Down syndrome who had lower scores for general cognitive ability and everyday adaptive abilities scored close to chance (correct response rates around 57%), while those with higher ability scores performed at levels comparable to the typically developing controls. The authors concluded that sound-shape matching deficits might be relatively common in the Down syndrome community but are mostly seen in individuals with lower cognitive abilities.

Personally, I enjoyed completing the kiki-bouba task in class as a fun example of multisensory integration. The idea of using this interesting task as an experimental test is exciting but, of course, there are limitations to this approach. Some studies suggest that, across different cultures, there may be differences in sound-shape mapping and other forms of multisensory integration (Bremner et al., 2013; Chen et al., 2016). These differences make using the kiki-bouba task as an experimental test concerning, since cultural differences could confound the results.

Furthermore, in the Hamburg et al. paper the authors noted that the decrease in correct response rate was primarily seen in individuals with Down syndrome who are categorized as severely intellectually impaired. As the authors acknowledge, it is hard to know how much of this effect is due to Down syndrome as opposed to severe intellectual impairments. These possible causes are especially hard to parse because there is little to no research about multisensory integration in individuals who have intellectual disabilities not due to Down syndrome. In the future, further research would have to be done with more precise control groups so that these factors could be dissociated.

While the study is far from conclusive, it is interesting to think about testing for multisensory integration in patients with cognitive conditions. In the future, understanding patients’ ability to combine information from their different senses could help medical professionals better understand and support these individuals.

 

References

Antonarakis SE, Lyle R, Dermitzakis ET, Reymond A, Deutsch S (2004). Chromosome 21 and Down syndrome: from genomics to pathophysiology. Nat Rev Genet. 5:725–38.

Asim A, Kumar A, Muthuswamy S, Jain S, Agarwal S (2015). Down syndrome: an insight of the disease. J Biomed Sci. 22:41.

Bremner AJ, Caparos S, Davidoff J, de Fockert J, Linnell KJ, Spence C (2013). ‘Bouba’ and ‘Kiki’ in Namibia? A remote culture make similar shape–sound matches, but different shape–taste matches to Westerners. Cognition 126: 165–172.

Burdach KJ, Kroeze JHA, Koster EP (1984). Nasal, retronasal and gustatory perception: an experimental comparison. Percept. Psychophys. 36: 205–208.

Chen YC, Huang PC, Woods A, Spence C (2016). When “Bouba” equals “Kiki”: Cultural commonalities and cultural differences in sound-shape correspondences. Scientific Reports, 6:26681.

Cuskley C, Simner J, Kirby S (2015). Phonological and orthographic influences in the bouba-kiki effect. Psychological Research

Desai SS (1997). Down syndrome: A review of the literature, Oral Surgery, Oral Medicine, Oral Pathology, Oral Radiology, and Endodontology, 84(3): 279-285

Diederich A and Colonius H (2007). Why two ‘Distractors’ are better than one: modeling the effect of non-target auditory and tactile stimuli on visual saccadic reaction time, Exp. Brain Res. 179: 43–54.

Dionne-Dostie E, Paquette N, Lassonde M and Gallagher A. (2015). Multisensory integration and child neurodevelopment, Brain Sci. 5: 32–57.

Edgin J (2013). Cognition in Down syndrome: a developmental cognitive neuroscience perspective, WIREs Cogn. Sci. 4, 307–317.

Flom R, Bahrick LE (2007). The development of infant discrimination of affect in multimodal and unimodal stimulation: the role of intersensory redundancy. Dev. Psychol. 43: 238–252.

Hamburg S, Startin CM, Strydom, A (2017). The Relationship Between Sound–Shape Matching and Cognitive Ability in Adults With Down Syndrome. Multisensory Research 30: 537–547

Hayes EA, Tiippana K, Nicol TG, Sams M, Kraus N (2003). Integration of heard and seen speech: a factor in learning disabilities in children. Neurosci. Lett. 351: 46–50.

Kazemi M, Salehi M, Kheirollahi M (2016). Down Syndrome: Current Status, Challenges and Future Perspectives. International journal of molecular and cellular medicine, 5(3), 125–133.

Köhler W (1929). Gestalt psychology. New York: Liveright

Król ME, Ferenc K (2019) Silent shapes and shapeless sounds: the robustness of the diminished crossmodal correspondences effect in autism spectrum conditions. Psychological Research 1-10.

Lippert M, Logothetis NK, Kayser C (2007). Improvement of visual contrast detection by a simultaneous sound, Brain Res. 1173: 102–109.

McGurk H, MacDonald JW (1976). Hearing lips and seeing voices. Nature. 264:746–748.

Oberman LM, Ramachandran VS (2008). Preliminary evidence for deficits in multisensory integration in autism spectrum disorders: the mirror neuron hypothesis, Soc. Neurosci. 3: 348–355.

Peiffer-Smadja N, Cohen L (2019). The cerebral bases of the bouba-kiki effect. NeuroImage 186: 679–689.

Ramachandran VS, Hubbard EM (2001). Synaesthesia–a window into perception, thought and language. J. Conscious. Stud., 8: 3-34

Stein BE, Stanford TR (2008) Multisensory integration: current issues from the perspective of the single neuron. Nat Rev Neurosci. 9:255–266.

 

Images

Bouba and Kiki Shapes: Figure 1 from Hamburg et al., 2017

Trisomy 21: https://upload.wikimedia.org/wikipedia/commons/a/ab/21_trisomy_-_Down_syndrome.png

Accents away from Accent

This weekend I went on a crazy, fun, whirlwind trip to London along with Shelby, Kendall, Jamie, Alyssa, and Merry. While we were only there for a day and a half, we managed to see Buckingham Palace, Westminster Abbey, Big Ben, London Bridge, and most of the other major famous sites. As we raced all over the city in the underground, I kept accidentally saying “pardonne-moi” and “désolé” to everyone I bumped into. Only, for the first time in weeks, everyone around us was speaking English. But, even though we all speak English, the way that the locals around us pronounced words and phrases was still different from our own speech.

 

Of course, from the moment we arrived in England, we were surrounded by English accents. Several of us found ourselves fascinated by these accents and, when we were safely out of earshot, we even did our best to imitate them. Yesterday morning as I sat on the train back to Paris, I decided to try to find out what it is about our brain that allows us to recognize, use, and understand different accented versions of the same language.

Westminster Abbey

Determining exactly what parts of the brain allow us to understand unfamiliar accents is a difficult task, but there is a growing body of research on this topic. Many of the studies on accent comprehension use functional magnetic resonance imaging (fMRI) to detect changes in brain activity as subjects listen to sounds or sentences in different accents (Ghazi-Saidi et al., 2015).

A recent review of this research found that researchers have identified areas like the left inferior frontal gyrus, the insula, and the superior temporal sulci and gyri as having higher activity when listening to accented speakers produce sounds (Callan et al., 2014; Adank et al., 2015). Interestingly, many of these brain areas are the same regions that have been identified as important for understanding foreign languages (Perani and Abutalebi, 2005; Hesling et al., 2012). Some of the areas that are important for understanding unfamiliar accents – including the insula, motor cortex and premotor cortex – have also been implicated in the production of these accents (Adank et al., 2012a; Callan et al., 2014; Ghazi-Saidi et al., 2015).

Investigating the production of accented speech is also an exciting field of study. Interestingly, one of the main ways we have learned about accent production is through case studies of patients with Foreign Accent Syndrome (FAS). FAS is a fascinating motor speech disorder where patients speak in a different accent than they originally used, typically following brain damage (Keulen et al., 2016). This condition was actually first identified here in Paris by Pierre Marie¹, a French neurologist (Keulen et al., 2016). After recovering from a brain hemorrhage, Marie’s patient had an Alsatian French accent instead of his original Parisian one (Marie, 1907). Since then, nearly 200 cases of this rare disease have been identified (Mariën et al., 2019).

Pierre Marie

However, it is hard to draw conclusions from individual case studies with just one patient. In a recent meta-analysis (a procedure where data from multiple studies are combined and analyzed), Mariën et al. looked at 112 different published cases of FAS to draw larger conclusions about this rare disease. The authors were particularly interested in cases of FAS that occurred after a stroke, but they analyzed case studies from patients with all different kinds of brain damage.

To review these cases, Mariën et al. first compiled published case studies that reported the cause and symptoms of a patient’s FAS from Pierre Marie’s case in 1907 through October 2016. They then calculated and analyzed the demographic, anatomical, and symptomatic features of these FAS patients to look for larger trends across the different cases.

The authors found that there were significantly more female patients (68% of cases) than male patients in these 112 FAS cases. Additionally, a significant and overwhelming majority (97%) of cases were in adults. In more than half of the patients (53%), FAS appeared following a stroke.
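As a rough illustration of the statistics behind the first of those findings, here is a small Python sketch testing whether a female majority that large is unlikely under a 50/50 split. The count of 76 female patients is reconstructed from the reported 68% of 112 cases, so it is approximate and not taken directly from the paper.

```python
# Rough sketch: is ~68% female out of 112 cases unlikely under a 50/50 split?
# The count of 76 is reconstructed from the reported percentage, not taken
# directly from Mariën et al. (2019).
from scipy.stats import binomtest  # SciPy >= 1.7

n_cases = 112
n_female = 76  # approximately 68% of 112

result = binomtest(n_female, n_cases, p=0.5, alternative="two-sided")
print(f"Observed proportion female: {n_female / n_cases:.1%}")
print(f"Two-sided binomial test p-value: {result.pvalue:.4f}")
```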

For those patients who developed FAS following a stroke, the authors also analyzed where in the brain their vascular damage was. The most commonly damaged brain areas (in 60% of vascular FAS patients) were the primary motor cortex, premotor cortex, and basal ganglia, which are all important for the physical ability to produce voluntary speech (Brown, Schneider, & Lidsky, 1997). The authors also found that 13% of these vascular FAS patients had damage in the insula, an area that has also been identified as important for accented speech production in studies of healthy subjects (Ghazi-Saidi et al., 2015).

The Insula

I think FAS is a fascinating disorder, but it is important to remember that, like any case studies, these reports have a limited ability to tell us about how healthy people produce accented speech. The naturally occurring brain damage in these FAS patients is not necessarily localized, and brain areas besides the primary lesion location could have been affected by the damage. Furthermore, there are some cases of psychological (as opposed to neurological) FAS, which complicates our understanding of the onset of this disease (Keulen et al., 2016).

There is still a lot to learn about how we construct and comprehend accented speech. Studies of FAS patients, particularly large meta-analyses like this one, have just begun to identify some of the key brain areas that are reliably implicated in accent production. These findings provide a good starting point for future researchers to analyze these brain areas further and possibly study their role in healthy people’s accents, which can help us all understand each other a little better.

 

Footnotes

1 – As a side note for my NBB 301 classmates: Pierre Marie is the “Marie” in Charcot-Marie-Tooth disease, a glial disease that affects Schwann cells. He was also a student of Jean-Martin Charcot and was one of the people depicted in the famous painting A Clinical Lesson at the Salpêtrière that we saw at the Musée de l’Histoire de la Médecine today.

 

Images

Westminster Abbey: taken by me

Pierre Marie: https://upload.wikimedia.org/wikipedia/commons/thumb/a/a4/PierreMarie.jpg/230px-PierreMarie.jpg

Insula: https://upload.wikimedia.org/wikipedia/commons/b/b4/Sobo_1909_633.png

 

References

Adank P, Davis M, Hagoort P (2012a). Neural dissociation in processing noise and accent in spoken language comprehension. Neuropsychologia 50: 77–84.

Adank P, Nuttall HE, Banks B, Kennedy-Higgins D (2015). Neural bases of accented speech perception. Frontiers in Human Neuroscience 9: 558. doi:10.3389/fnhum.2015.00558

Brown L, Schneider JS, & Lidsky TI (1997). Sensory and cognitive functions of the basal ganglia. Current Opinion in Neurobiology, 7, 157–163.

Callan D, Callan A, Jones JA (2014). Speech motor brain regions are differentially recruited during perception of native and foreign-accented phonemes for first and second language listeners. Frontiers in Neuroscience 8: 275. doi:10.3389/fnins.2014.00275

Ghazi-Saidi L, Dash T, Ansaldo AI (2015). How native-like can you possibly get: fMRI evidence in a pair of linguistically close languages, special issue: language beyond words: the neuroscience of accent. Front. Neurosci. 9:587.

Hesling I, Dilharreguy B, Bordessoules M, Allard M. (2012). The neural processing of second language comprehension modulated by the degree of proficiency: a listening connected speech FMRI study. Open Neuroimag. J. 6, 1–11.

Keulen S, Verhoeven J, De Witte E, De Page L, Bastiaanse R, & Mariën P (2016). Foreign Accent Syndrome As a Psychogenic Disorder: A Review. Frontiers in human neuroscience, 10, 168.

Marie P (1907). Un cas d’anarthrie transitoire par lésion de la zone lenticulaire. In P. Marie, Travaux et Mémoires, Bulletins et Mémoires de la Société Médicale des Hôpitaux; 1906: Vol. I. Paris: Masson, pp. 153–157.

Mariën P, Keulen S, Verhoeven J (2019). Neurological Aspects of Foreign Accent Syndrome in Stroke Patients. Journal of Communication Disorders 77: 94–113.

Perani D, Abutalebi J (2005). The neural basis of first and second language processing. Curr. Opin. Neurobiol. 15, 202–206.

hearing voices

While difficult, trying to retroactively diagnose Vincent Van Gogh was by far my favorite journal prompt. My group and I eventually decided that, based on the evidence we examined, Van Gogh most likely had schizophrenia. The Diagnostic and Statistical Manual of Mental Disorders (DSM-5) is a list of psychiatric conditions and their symptoms that helps professionals diagnose patients, and it includes the criteria used to diagnose schizophrenia today. Its symptom-based criteria state that schizophrenia patients are expected to exhibit catatonic behavior, negative symptoms, delusions, disorganized speech, and hallucinations (American Psychiatric Association, 2013). Van Gogh showed many of these symptoms, but the one that most clearly pointed to schizophrenia was his hallucinations.

According to the note from the Director of the St Rémy mental home, Vincent Van Gogh exhibited both visual and auditory hallucinations (Van Gogh Museum, 2016). The importance of hallucinations in both his life and the diagnosis of schizophrenia made me wonder about their underlying biological mechanisms. I was particularly intrigued by the idea that patients sometimes hear voices talking to them when no one else is there. The idea of “hearing voices” may be familiar from Hollywood’s portrayal of mental illness, but what actually drives these hallucinations?

In the scientific community, this phenomenon is known as auditory verbal hallucinations. One major theory is that these hallucinations are the result of malfunctions in the brain systems that monitor inner speech. The idea is that, when these brain systems are impaired, people misinterpret their own internal dialogue as the speech of someone or something outside of them (Catani and Ffytche, 2005). While this theory has been around for decades, there are still many unanswered questions about the specific biology and brain areas that are associated with auditory verbal hallucinations.

Auditory verbal hallucinations are when patients
believe they hear voices speaking to them

A recent study by Cui et al. investigated the neuroanatomical differences that may be connected to this type of hallucination. The authors studied healthy control subjects as well as a large population of schizophrenia patients, recruited from hospitals across China, who did and did not exhibit auditory verbal hallucinations. This patient population is an important aspect of the study because previous work had only compared schizophrenia patients with hallucinations to healthy controls. Here, the researchers wanted to specifically investigate what neuroanatomical differences lead to auditory verbal hallucinations, so it was important for them to look at schizophrenia patients who did not experience these hallucinations as well as those who did.

Once the authors had gathered this group of patients and controls, they used a magnetic resonance imaging (MRI) scanner to get a structural image of the subjects’ brains. They then used software to compute the thickness of the subjects’ cortex, the brain’s outer layer. In particular, these researchers were interested in measuring and comparing the thickness of the middle temporal gyrus (MTG).

The middle temporal gyrus (MTG)

Previous scientific studies have indicated that the MTG may be important for the monitoring of inner speech and is often less activated in schizophrenic patients (Shergill et al., 2000; Seal et al., 2004). The function and development of the MTG make it well suited to play a role in auditory verbal hallucinations. First, the MTG is involved in brain pathways that make it important for interpreting certain sounds we hear, especially processing language (Cabeza and Nyberg, 2000). The MTG is also unique in the way it develops: this area of the brain matures relatively late in life (Gogtay et al., 2004). This makes sense for hallucinations associated with schizophrenia, a disease linked to brain development whose symptoms often do not appear until late adolescence or early adulthood (Lewis and Levitt, 2002).

Previous studies had shown that the volume of the MTG is smaller in schizophrenic patients than it is in healthy people (McGuire et al., 1995). The point of this study was to test whether that reduced size was associated with schizophrenia in general or with auditory verbal hallucinations specifically. When Cui et al. calculated the thickness of the subjects’ middle temporal gyrus, they found that it was significantly smaller in schizophrenia patients who had auditory verbal hallucinations than in patients who did not. They also found that there was not a significant difference between the schizophrenia patients who did not have hallucinations and the healthy controls. These results suggest that a thinner MTG is not connected to schizophrenia in general but is specifically associated with the schizophrenia patients who experienced auditory verbal hallucinations.
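To make the group-comparison logic concrete, here is a toy Python sketch comparing mean cortical thickness between two groups with a Welch's t-test. The thickness values are invented for illustration; this is not Cui et al.'s data, software, or statistical pipeline.

```python
# Toy sketch of the group-comparison logic: compare mean MTG cortical
# thickness between patients with and without auditory verbal hallucinations.
# All values are invented for illustration; this is not Cui et al.'s data.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
thickness_avh    = rng.normal(2.60, 0.15, size=40)  # hypothetical patients with hallucinations (mm)
thickness_no_avh = rng.normal(2.75, 0.15, size=40)  # hypothetical patients without hallucinations (mm)

t_stat, p_value = ttest_ind(thickness_avh, thickness_no_avh, equal_var=False)
print(f"Mean thickness (with AVH):    {thickness_avh.mean():.2f} mm")
print(f"Mean thickness (without AVH): {thickness_no_avh.mean():.2f} mm")
print(f"Welch's t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
```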

Starry Night, a famous Van Gogh painting some
believe is the result of his hallucinations

While this new study offers great evidence comparing schizophrenia patients with different symptoms, there is still a lot to figure out about this kind of hallucination. Scientists are still working to discover what exact processes lead to cortical thinning and how those processes begin. However, what we do know about auditory verbal hallucinations emphasizes how heavily we rely on our perception of the world around us. We will never know the thickness of Vincent Van Gogh’s MTG, but the auditory hallucinations he experienced were probably the result of his hearing system malfunctioning in some way. Today, many people believe that some of Van Gogh’s most famous decisions and artworks were informed by his hallucinations (Jones, 2016; New York Times Archive, 1981). Modern neuroscience tells us that those hallucinations may have actually been an erroneous interpretation of his own inner dialogue all along.

 

References

American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (5th ed.). Arlington, VA: Author.

Binney RJ, Parker GJ, Ralph MAL (2012). Convergent connectivity and graded specialization in the rostral human temporal lobe as revealed by diffusion-weighted imaging probabilistic tractography. Journal of Cognitive Neuroscience 24, 1998–2014.

Catani M, Ffytche DH (2005). The rises and falls of disconnection syndromes. Brain 128, 2224–2239.

Cabeza R, Nyberg L (2000). Imaging cognition II: an empirical review of 275 PET and fMRI studies. Journal of Cognitive Neuroscience 12, 1–47.

Cui Y, Liu B, Song M, Lipnicki D, Li J, Xie S, . . . Jiang T. (2018). Auditory verbal hallucinations are related to cortical thinning in the left middle temporal gyrus of patients with schizophrenia. Psychological Medicine, 48(1): 115-122

Jones, J. (2016). Vincent van Gogh: Myths, madness and a new way of painting. Retrieved from https://www.theguardian.com/artanddesign/2016/aug/05/vincent-van-gogh-myths-madness-and-a-new-way-of-painting

Gogtay N, Giedd JN, Lusk L, Hayashi KM, Greenstein D, Vaituzis AC, Nugent TF, Herman DH, Clasen LS, Toga AW, Rapoport JL, Thompson PM (2004). Dynamic mapping of human cortical development during childhood through early adulthood. Proceedings of the National Academy of Sciences 101, 8174–8179.

Lewis DA, Levitt P (2002). Schizophrenia as a disorder of neurodevelopment. Annual Review of Neuroscience 25: 409–432.

McGuire PK, David AS, Murray RM, Frackowiak RSJ, Frith CD, Wright I, Silbersweig DA (1995). Abnormal monitoring of inner speech: a physiological basis for auditory hallucinations. The Lancet 346(8975): 596–600.

New York Times Archive (1981) Van Gogh’s Hallucinations. Retrieved from https://www.nytimes.com/1981/07/07/science/science-watch-van-gogh-s-hallucinations.html

Seal ML, Aleman A, McGuire PK (2004). Compelling imagery, unanticipated speech and deceptive memory: neurocognitive models of auditory verbal hallucinations in schizophrenia. Cognitive Neuropsychiatry 9, 43–72.

Shergill SS, Brammer MJ, Williams SCR, Murray RM, McGuire PK (2000). Mapping auditory hallucinations in schizophrenia using functional magnetic resonance imaging. Archives of General Psychiatry 57, 1033–1038

Van Gogh Museum (2016). Shortly before 27 February 1889 In Concordance, lists, bibliography (Documentation). Retrieved from: http://www.vangoghletters.org/vg/documentation.html

 

Images: 

https://search.creativecommons.org/photos/71b807e7-29fd-445d-95a1-4d282ccf02e5

https://upload.wikimedia.org/wikipedia/commons/thumb/f/f5/Gray726_middle_temporal_gyrus.png/250px-Gray726_middle_temporal_gyrus.png

https://upload.wikimedia.org/wikipedia/commons/thumb/e/ea/Van_Gogh_-_Starry_Night_-_Google_Art_Project.jpg/757px-Van_Gogh_-_Starry_Night_-_Google_Art_Project.jpg

Lost in the gardens of Versailles

Like a lot of students, I used this past weekend as a chance to visit places easily accessible from Paris. On Saturday, Jamie, Genevieve, and I boarded the RER-C and headed to see the Château de Versailles. We went straight to the gardens, a gorgeous, intricate maze of hedges filled with sculptures and fountains. After aimlessly wandering far into the gardens, we heard classical music playing. We knew that it must have been the soundtrack to one of the weekend fountain shows, when the flow of water is set to follow the rhythm of music.

Map of the palace and gardens of Versailles

Just moments after the music started playing, we managed to find our way to the fountain show just by following the sound of the music. The water spouted and spun to the rhythm of the music and the slight mist was refreshingly cool in the middle of a hot summer day. Later that day, I was fascinated by how we managed to find the fountain show based only on the sound of the music. We had never been to the gardens before, we did not have a map, and we did not even know where we were going.

The fountain show

While it is often unconscious, determining where a sound is coming from is a remarkable ability. Figuring out where a sound originates can help us with everything from avoiding oncoming traffic to turning towards our friends in a crowded room (Dobreva et al., 2011; Middlebrooks JC, 2015). Part of this ability comes from having two ears because we can decide which direction a sound is coming from by comparing what we hear in our left and right ears (Joris and Yin, 2007).
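For the curious, here is a simplified Python sketch of that binaural comparison: it estimates the interaural time difference by cross-correlating a synthetic left-ear and right-ear signal. The parameters are made up for illustration, and real localization also uses level and spectral cues that this toy example ignores.

```python
# Simplified sketch: estimate the interaural time difference (ITD) by
# cross-correlating the left- and right-ear signals. Signals and delay
# are synthetic; real localization also uses level and spectral cues.
import numpy as np

fs = 44_100                               # sample rate (Hz)
true_delay = 20                           # samples (~0.45 ms): left ear hears the sound first
left = np.random.default_rng(1).standard_normal(2048)
right = np.roll(left, true_delay)         # right-ear copy, delayed by 20 samples

# Cross-correlate and find the lag where the two signals line up best
corr = np.correlate(right, left, mode="full")
lags = np.arange(-(len(left) - 1), len(right))
itd_samples = lags[np.argmax(corr)]       # positive lag: right ear lags the left

print(f"Estimated ITD: {itd_samples} samples ({1000 * itd_samples / fs:.2f} ms)")
```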

However, in our daily lives, there are a lot of hidden challenges that make this task harder. Sound waves from a single source bounce off people and objects and ultimately hit our ears from several angles and directions. Both the unhindered sound itself and the reflections of that sound eventually reach our ears. The direct sound is known as the lead because it follows the most direct path and therefore hits the ear first (Brown et al., 2015). The ability to respond to the lead and not the subsequent reflections of that sound (the lags) is known as the precedence effect (Wallach et al., 1949).
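To picture what a lead-lag pair looks like, here is a toy Python sketch that builds one: a direct-path click followed a couple of milliseconds later by a quieter copy standing in for a reflection. The parameters are arbitrary, and this is not the stimulus from any particular study.

```python
# Toy sketch of a lead-lag click pair like those used in precedence-effect
# experiments. Parameters are arbitrary, for illustration only.
import numpy as np

fs = 44_100                                  # sample rate (Hz)
lead_lag_delay_ms = 2.0                      # delay between direct sound and reflection
lag_attenuation = 0.7                        # the reflection is a bit quieter

signal = np.zeros(int(0.02 * fs))            # 20 ms of silence
lead_sample = 100
lag_sample = lead_sample + int(fs * lead_lag_delay_ms / 1000)

signal[lead_sample] = 1.0                    # the "lead": direct-path click
signal[lag_sample] = lag_attenuation         # the "lag": simulated reflection

# At delays this short, listeners typically report a single click located
# at the lead's position -- that perceptual fusion is the precedence effect.
print(f"Lead at {1000 * lead_sample / fs:.2f} ms, lag at {1000 * lag_sample / fs:.2f} ms")
```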

The precedence effect is crucial for the accuracy of our sound localization. As we walked through the gardens of Versailles our ears were struck by soundwaves from both the music itself and reflections of the music bouncing off the greenery. The precedence effect is what allowed us to fuse these sounds together and find the fountain show rather than accidentally ending up at a tree the music had bounced off.

While theories about the precedence effect have been around for decades, the biological mechanisms underlying it are still unclear. Some scientists have argued that this effect arises in the brain, through synaptic inhibition, while the sounds are being processed (Pecka et al., 2007; Xia et al., 2010). Synaptic inhibition is when interconnected neurons send excitatory and inhibitory signals to amplify certain information (here, the lead sound) and suppress other information (the lagging sounds).

Other scientists have argued that the effect could be a mechanical result of the cochlea, the part of the inner ear where sounds are converted to electrical signals. These researchers note that once the lead hits the cochlea, that sound continues to resonate. They contend that the lag cannot be communicated as strongly because the cochlea is still passing information about the lead (e.g., Bianchi et al., 2013).

In the past, there was limited evidence to support one of these ideas over the other because it is technically difficult to impair the auditory structures of the ear or the auditory areas of the brain without impairing both. In their recent work, Brown et al. examined these theories by comparing normal hearing subjects to deaf subjects with cochlear implants, which bypass the cochlea and directly stimulate the auditory nerve in response to sound. These deaf subjects had two implants and could still perceive sounds on either side of them but did not have functioning cochleae.

A cochlear implant

The researchers exposed subjects to lead-lag pairs of stimuli that mimicked a sound and the reflection of that sound. They asked subjects to indicate if they heard one sound or two and where the sound(s) originated. In normal hearing subjects these pairs were acoustic clicks. In deaf patients, they were electrical impulses sent directly into the cochlear implants. To measure the precedence effect, the researchers measured the subjects’ ability to recognize the two stimuli as one sound (termed “fusion”) and to determine the origin of the sound (“localization dominance”).

The authors found that both normal hearing and deaf patients could fuse the paired stimuli together and perceive them as one sound, although this ability was marginally weaker in the deaf patients. Furthermore, while there were idiosyncratic differences between individuals, whether subjects had cochlear implants did not affect their ability to determine the origin of sounds. This study presents evidence that people without functioning cochleae can demonstrate the precedence effect at about the same levels as people with normal hearing. Since the effect appears even without functioning cochleae, it cannot be due to the mechanical features of the cochlea alone. This suggests that features of auditory neurons can account for the precedence effect that allows us to accurately localize sound.

This was a clever study, but of course it is important to remember that it is not conclusive. There were small differences in deaf subjects’ ability to fuse the stimuli into a single sound, which could indicate that the cochlea at least contributes to the precedence effect. Also, mechanical aspects of structures besides the cochlea could be crucial. While this study is not conclusive, it does highlight the importance of synaptic inhibition. This provides a launching pad for the continued study of the biological mechanisms underlying the precedence effect, which could help with everything from more immersive virtual reality to better treatment for hearing loss.

References

Bianchi F, Verhulst S, Dau T (2013). Experimental evidence for a cochlear source of the precedence effect. Journal of the Association for Research in Otolaryngology JARO, 14(5):767–779.

Brown AD, Stecker GC, Tollin DJ (2015) The Precedence Effect in Sound Localization JARO, 16(1): 1-28

Brown AD, Jones HG, Kan A, Thakkar T, Stecker GC, Goupell MJ, Litovsky RY (2015). Evidence for a neural source of the precedence effect in sound localization. Journal of neurophysiology 114(5): 2991–3001

Dobreva MS, O’Neill WE, Paige GD (2011). Influence of aging on human sound localization. Journal of neurophysiology, 105(5): 2471–2486

Joris P, Yin TCT, (2007) A matter of time: internal delays in binaural processing, Trends in Neurosciences, 30(2): 70-78

Middlebrooks, JC (2015) Chapter 6 – Sound localization, Handbook of Clinical Neurology, 129: 99-116.

Pecka M, Zahn TP, Saunier-Rebori B, Siveke I, Felmy F, Wiegrebe L, Klug A, Pollak GD, Grothe B (2007). Inhibiting the Inhibition: A Neuronal Network for Sound Localization in Reverberant Environments. Journal of Neuroscience 27(7): 1782–1790.

Wallach H, Newman EB, Rosenzweig MR (1949). The precedence effect in sound localization. Am J Psychol 62: 315–336.

Xia J, Brughera A, Colburn HS, Shinn-Cunningham B (2010). Physiological and psychophysical modeling of the precedence effect. Journal of the Association for Research in Otolaryngology : JARO, 11(3), 495–513

Images

Cochlear Implant: “Ryan-Funderburk-1.jpg” by Rfunderburk90 is licensed under CC PDM 1.0