Author Archives: Christian Botz-Zapp

Speaking Without Words

Hello family and friends,

As my time in Paris comes to a close, I look back on everything I have learned during these speedy four weeks. From analyzing primary articles to visiting the libraries of famous French neurologists, it has truly been an enlightening experience. Nevertheless, one of the hardest aspects of studying abroad has been the language barrier. Knowing only a handful of French phrases, I have had to use alternative methods of communication in a variety of social contexts. After spending ample time interacting with Parisians, I find myself growing less anxious in my daily exchanges with non-English speakers. Instead, I take comfort in the fact that nonverbal communication can be as effective as, if not more effective than, verbal communication. Intrigued by the broad category of nonverbal communication, I took it upon myself to do a little more research. As it turns out, what I found relates to the grand field of neuroscience.

First off, let me start by asking what you think of when you hear the phrase “nonverbal communication”. Personally, I imagine someone using a simple combination of facial expressions and bodily gestures to convey meaning. However, after reading a new study on the phenomenon, I realize that the cognitive processes involved in nonverbal exchanges are quite complex. Let me explain.

In a study led by Alexandra Georgescu from the University Hospital of Cologne in Germany, researchers delved into two types of perceived human motion, movement fluency and movement contingency, and their relationship to nonverbal interactions (Georgescu et al., 2014). For reference, movement fluency is the smoothness of an individual's motions, while movement contingency is the coordinated, mutually responsive pattern of movement between two people. Thus, fluency deals more with the individual, while contingency depends on the interactive dynamic between the pair. What is the importance of these terms? Well, through their experimental design, Georgescu et al. found that manipulating movement fluency and contingency changes our perception of the “naturalness” of a nonverbal social interaction. By looking into the neural correlates involved in this perception, Georgescu et al. hoped to learn more about the processes occurring in the brain during nonverbal social interactions.

Figure 1. The four experimental video conditions.

In order to study movement fluency and contingency in the context of nonverbal social interactions, researchers measured the brain activity of study participants as they watched virtual dyadic interactions, or interactions between a pair. By virtual, I mean experimenters presented a silent video showing two mannequins interacting with one another (Figure 1). The goal here was to evaluate the brain’s response to natural and unnatural movements made by the mannequins during their interactions. By doing this, researchers hoped to determine the neural networks involved in perceiving motion during nonverbal exchanges. Two kinds of motion manipulations were used during video presentation. The first targeted movement fluency by altering the smoothness of each mannequin’s movements; here, alterations resulted in mannequins making rigid, robot-like movements. The second targeted movement contingency by eliminating one of the mannequins and having a mirror image of the remaining mannequin take its place; here, Georgescu et al. reasoned that mirrored movements of the one mannequin would be interactively meaningless and thus non-contingent. Four 10-second videos were used, each presenting a different combination of manipulated and non-manipulated movements (refer back to Figure 1). Participants watched the videos while their brain activity was monitored by a functional magnetic resonance imaging (fMRI) scanner. After presentation of each video, participants were instructed to quickly rate the “naturalness” of the clip on a scale from 1 to 4, 1 being “very unnatural” and 4 being “very natural”. Georgescu et al. ran many trials with 28 participants to gather sufficient data.
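To make the design concrete, here is a minimal Python sketch of the 2×2 setup: each video crosses fluency (fluent vs. rigid) with contingency (contingent vs. mirrored), and ratings are averaged per condition. The rating numbers below are made up purely for illustration; they are not data from Georgescu et al.

```python
from itertools import product

# The 2x2 video design: movement fluency crossed with contingency.
conditions = list(product(["fluent", "rigid"], ["contingent", "mirrored"]))

# Hypothetical naturalness ratings (1 = very unnatural, 4 = very natural),
# one list per condition -- invented numbers just to show the analysis step.
ratings = {
    ("fluent", "contingent"): [4, 4, 3, 4],
    ("fluent", "mirrored"):   [3, 2, 3, 2],
    ("rigid",  "contingent"): [2, 3, 2, 2],
    ("rigid",  "mirrored"):   [1, 2, 1, 1],
}

def mean_rating(condition):
    """Average naturalness rating for one video condition."""
    scores = ratings[condition]
    return sum(scores) / len(scores)

for condition in conditions:
    print(condition, mean_rating(condition))
```

With ratings like these, the fluent-and-contingent videos come out most “natural” and the rigid, mirrored ones least, which mirrors the pattern the study reports.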

So… what were the results?

Figure 2. AON activation in response to visualizing contingent movement patterns.

Georgescu et al. found that participants were sensitive to changes in both movement contingency and fluency, and that participants considered the interactions to be most “natural” when movement was both contingent and fluid. From the imaging results, researchers concluded that visualizing movement contingency engages a network known as the “action observation network”, or AON (Figure 2). The AON includes several brain regions: the bilateral posterior superior temporal sulcus (pSTS), the inferior parietal lobe (IPL), the inferior frontal gyrus (IFG), the adjacent ventral and dorsal premotor cortices (PMv, PMd), and the supplementary motor area (Wow, those are pretty overwhelming names!). In contrast, visualizing rigid movements (manipulated movement fluency) activated a different network known as the “social neural network”, or SNN (Figure 3). The SNN comprises the medial prefrontal cortex (mPFC), the posterior cingulate cortex (PCC), the temporoparietal junction (TPJ), and the adjacent pSTS (I promise there are no more scary words). Thus, these results suggest that the AON may be a key neural network in the understanding of social interactions, while the SNN might play a role in interpreting incongruences during social interactions. Relating back to my daily experiences here in Paris, it would seem that my AON is activated as I coordinate my movements with a French speaker in a nonverbal exchange. If he or she makes a movement I fail to interpret, my SNN most likely activates as I try to sort out the ambiguity. Voila! Science.

Figure 3. SNN activation in response to visualizing rigid movement fluency.

Although I had difficulty interpreting the study’s imaging data due to poorly labeled figures, I found this article to be extremely interesting. It considered the processes of nonverbal communication in a novel fashion while providing solid evidence for the differential roles of the AON and SNN in nonverbal social exchanges. It would be exciting to perform similar experiments using videos displaying specific social contexts. That way, we might learn if social context leads to differential brain activity.


Always a pleasure,




Georgescu AL, Kuzmanovic B, Santos NS, Tepest R, Bente G, Tittgemeyer M, Vogeley K (2014) Perceiving nonverbal behavior: neural correlates of processing movement fluency and contingency in dyadic interactions. Hum Brain Mapp 35(4):1362-1378.

Figures 1-3 are from Georgescu et al., 2014.

“Welcome” image was obtained using a Creative Commons search.


The Bridge Between Recollection and Inception

Dear family and friends,

The priority I set for myself in coming to Paris (besides academics of course, duh!) is sightseeing. This is my first time in Paris, which means every turn I take is a new location I have yet to explore. Thus, even with the unwelcome addition of jet lag and travel-induced dehydration, I have already visited several attractions. Notable mentions include the Eiffel Tower, the Galeries Lafayette, the Arc de Triomphe, and the Notre Dame Cathedral. However, the most exciting place I have visited is one you may not recognize – a bridge named Pont de Bir-Hakeim. While I know absolutely nothing about the walkway’s historical significance, I do know one very important detail: it is the bridge that Leonardo DiCaprio and Ellen Page march across in Christopher Nolan’s 2010 blockbuster film Inception. By now you have probably figured out that my definition of “exciting” is almost entirely subjective. Nonetheless, the bridge is quite picturesque and presents a great view of the Eiffel Tower.


View of the Eiffel Tower from Pont de Bir-Hakeim


Pont de Bir-Hakeim

When I first saw the bridge I was a few hundred meters away. It immediately looked familiar to me; however, I could not place in my memory where I had seen it before. After several minutes of contemplation: poof! Memories flooded my mind of the scene in Inception and the joy I felt after watching the film ten times. I was so delighted that I moved the bridge to the top of my to-do list and visited it several days later with some friends. My successful recollection got me thinking – how could I recognize the bridge before remembering specific details about it? It turns out neuroscience has an answer!


Standing in Leo’s footsteps (Photo by Chandler Lichtefeld)

To start, recognition memory is formally split into two categories: recollection and familiarity. Both occur in response to a previously experienced stimulus, e.g., an event, person, or object. Recollection describes a person’s ability to retrieve specific details about the previously experienced stimulus. Familiarity is one’s feeling that the stimulus was previously experienced, without retrieval of explicit details. To simplify, think of recollection as “remembering” and familiarity as “knowing.” Recently, a group of researchers set out to clarify the neural correlates of each recognition process. Led by Dr. Jeffrey Johnson at the University of Missouri, researchers used functional magnetic resonance imaging (fMRI) to measure the brain activity of 20 participants during a memory task (Johnson et al. 2013). This memory task consisted of two parts: an encoding phase and a retrieval phase. During the encoding phase, word stimuli were presented visually to the participants. Words denoted single objects such as tools, animals, and food. Participants memorized the words by either putting them in a sentence (sentence condition) or associating their physical manifestation with an outdoor scene (scene condition). Basically, the encoding task required participants to use different methods to (hopefully) remember word stimuli. Next, the retrieval phase tested the participants’ ability to remember the previously presented word stimuli. Here, old word stimuli from the encoding phase and new word stimuli were presented on a neutral background. During presentation of words, participants could answer in several ways. Answering “R” meant that the subject remembered details about the word, i.e., they remembered the sentence they made or the scene with which the word was associated. Therefore, answering “R” indicated that the subject could “recollect” details about the previous word. If unable to remember details, participants answered based on their confidence that the word was old or new. For example, answering with “confident old” indicated that the subject was only “familiar” with the word.
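The logic of sorting retrieval answers into recognition categories can be sketched in a few lines of Python. The response labels here are simplified stand-ins for the study's actual options, so treat this as an illustration of the "remember vs. know" split rather than the researchers' real scoring procedure.

```python
def classify_response(response):
    """Sort a simplified retrieval-phase answer into a recognition category."""
    if response == "R":
        # Specific details retrieved (the sentence or the scene)
        return "recollection"
    if response == "confident old":
        # The word feels old, but no details come back
        return "familiarity"
    # "unsure" or "new" answers signal no recognition of the word
    return "no recognition"

for answer in ["R", "confident old", "confident new"]:
    print(answer, "->", classify_response(answer))
```

The key design point is that only the “confident old” answers, stripped of any retrieved detail, are taken as pure familiarity, which is what lets the imaging contrast separate “knowing” from “remembering.”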

So… what did the results show?

According to the imaging data, recollection-driven recognition activates different brain areas than familiarity-driven recognition. In other words, the mental processes behind recollection (remembering) differ from those of familiarity (knowing). Specifically, recollection (when participants answered “R”) activated the angular gyrus, left ventral parietal cortex, retrosplenial and posterior cingulate cortex, ventromedial PFC, bilateral hippocampus, and the bilateral posterior parahippocampal cortex. On the other hand, familiarity (when participants answered with “confident old”) activated the left intraparietal sulcus, precuneus, anterior cingulate, and dorsolateral PFC (Wow – that is a mouthful). Thus, according to this group’s rationale, it would seem that my initial recognition of Pont de Bir-Hakeim was due to unique “familiarity” brain circuitry. As soon as I remembered details about the bridge, my “recollection” brain areas activated and brought forth memories of the movie’s bridge scene. Cool stuff, no?


Neural correlates of familiarity and recollection

Although the researchers failed to present some behavioral data due to too few trials, I thought that this study was well designed overall. They used copious background studies to support the rationale for their experimentation and produced results that clarify our current understanding of recognition-based memory. An interesting next step might be to examine the latency between familiarity and recollection in cases where one eventually remembers why a stimulus is familiar. Perhaps then I could understand why it took me a few minutes to recollect the bridge!


Until next time,




Johnson JD, Suzuki M, Rugg MD (2013) Recollection, familiarity, and content-sensitivity in lateral parietal cortex: a high-resolution fMRI study. Front Hum Neurosci 7:219.

All pictures were taken by me and Chandler Lichtefeld (the picture of Leonardo DiCaprio and Ellen Page is a screenshot from my iTunes copy of Inception).

The brain activity figures were taken from the primary article by Johnson et al.