Is there a Place for Cognitive Style in Contemporary Psychology and Neuroscience? Issues in Definition and Conceptualization

Wednesday, November 20, 2013

Maria Kozhevnikov

The second CMBC lunch talk of the semester featured a presentation by Dr. Maria Kozhevnikov (Radiology, Harvard Medical School; Psychology, National University of Singapore), who has been a visiting scholar at the CMBC this fall. Dr. Kozhevnikov offered a critical perspective on the current state of cognitive style research across different research traditions, such as cognitive neuroscience, education, and business management. Kozhevnikov opted to treat the lunch talk as an opportunity for group discussion, which promoted a lively dialogue among the participants.

Traditional research on cognitive style began in the early 1950’s and focused on perception and categorization. During this time, numerous experimental studies attempted to identify individual differences in visual cognition and their potential relation to personality differences. The cognitive style concept was thereafter used in the 50’s and 60’s to describe patterns of mental processing that help an individual cope with his or her environment. According to this understanding, cognitive style referred to an individual’s ability to adapt to the requirements of the external world, given his or her basic capacities. Researchers tended to discuss cognitive style in terms of bipolarity—the idea that style dimensions have two value-equal poles. For example, a host of binary dimensions was proposed in the literature, including impulsivity/reflectivity, holist/serialist, and verbalizer/visualizer. No attempt was made, however, to integrate these competing style dimensions into a coherent framework. By the late 1970’s, a standard definition referred to cognitive style as “a psychological dimension representing individual differences in cognition,” or “an individual’s manner of cognitive functioning, particularly with respect to acquiring and processing information” (Ausburn & Ausburn, 1978).

Kozhevnikov questioned the usefulness of such definitions and pointed to a general lack of clarity in how the term “cognitive style” was employed in early research. Moreover, several participants at the lunch talk noted further problems with the idea of cognitive style. For instance, if “style” is a distinct cognitive category, how does it differ from basic abilities or strategies? What does the concept of style add in this respect? While it is obvious that individual differences in cognition exist, it proved notoriously difficult to determine exactly how styles differed from intellectual abilities and personality traits. On account of these conceptual difficulties, among others, cognitive style research fell out of favor and virtually disappeared after the late 1970’s, to the point where even mentioning the term in psychology and neuroscience settings has become taboo.

The concept of cognitive style lived on, however, in the field of education, where it quickly became associated with the idea of learning styles. Kolb (1974) defined learning styles as “adaptive learning modes,” each of which offers a patterned way of resolving problems in learning situations. The idea of individual learning styles in turn gave rise to the so-called “matching hypothesis”—the suggestion that students learn better when their learning style is aligned with the style of instruction. Although the hypothesis appeared reasonable, it has not found empirical support; studies have not been able to establish that aligning teaching with student styles confers a discernible benefit. This observation, however, does not rule out the existence of learning styles altogether. Kozhevnikov asked us to consider the martial artist Bruce Lee, who, when asked which fighting style is best, responded that the best fighting style is no style. The point is that it pays to be flexible, that is, to be able to use different styles—in either fighting or learning—in different situations. Learning style instruments have become popular in education and tend to combine different style dimensions in ways that can be quite complex.

The business world has also adopted the idea of cognitive style, in the form of professional decision-making styles. In management, researchers have focused intensely on the “right brain-left brain” idea, which is frequently invoked in style categorization. The most popular bipolarity, for example, is analytic/intuitive (thought to correspond to the left and right hemispheres, respectively). Kozhevnikov was quick to point out, however, that this theory has no basis in neuroscience. Lastly, in parallel to the learning style instruments used in education, business has developed its own instruments for identifying personal styles, the best known of which is the Myers-Briggs Type Indicator (MBTI).

Beginning in the late 1990’s and early 2000’s, studies from cross-cultural psychology and neuroscience have demonstrated that culture-specific experiences may give rise to distinct patterns of information processing. Kozhevnikov reported that these “culture-sensitive individual differences in cognition” have been identified at cognitive, neural, and perceptual levels, and appear to be shaped in part by socio-cultural experiences. Several studies, for example, have explored these transcultural differences in East Asian and Western populations. Researchers identified greater tendencies among East Asian individuals to engage in context-dependent cognitive processes, as well as to favor intuitive understanding through direct perception rather than an analytic approach involving abstract principles. Moreover, these individual differences appear to be independent of general intelligence. At least one participant expressed initial reservations about such research, remarking that talk of an East-West binary tends to postulate artificial groups (e.g., what exactly is “Eastern culture”?).

Nevertheless, the finding that cognitive style can be represented by specific patterns of neural activity—independent of differences in cognitive ability measures—lends support to the validity of the cognitive style concept. According to this picture, then, Kozhevnikov redefines cognitive style as “culture-sensitive patterns of cognitive processing that can operate at different levels of information processing.”

Assuming this research is on the right track, the next question becomes: how many cognitive styles are there? As we have seen, early studies on cognitive style generated a large number of styles and dimensions, which multiplied further with the introduction of learning and decision-making styles. A unitary structure, such as the analytic/intuitive binary common in business circles, fails to capture this complexity. More recent theories have therefore proposed multilevel hierarchical models, which include both a horizontal dimension (e.g. analytic/holistic) and a vertical dimension that reflects different stages of information processing (e.g. perception, thought, memory). Thus, different stages of processing are associated with different cognitive styles.

Building upon this theoretical work, Kozhevnikov proposed a model of cognitive style families with orthogonal dimensions. According to this proposal, it would be possible to map all the different proposed styles onto a 4×4 matrix. The horizontal axis comprises four dimensions: context dependency/independency; rule-based/intuitive processing; internal/external locus of processing; and integration/compartmentalization. The vertical axis corresponds to four levels of cognitive processing: perception; concept formation; higher-order cognitive processing; and metacognitive processing. Kozhevnikov suggested that this framework offers a means of categorizing and unifying the array of style types and dimensions—from traditional styles to learning and decision-making styles—within a single matrix whose cells specify both the relevant horizontal dimension and the relevant level of processing.
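
For readers who think in code, here is a minimal sketch of how such a matrix might be represented. This is my own illustration, not part of Kozhevnikov's presentation: the dimension and level labels come from the talk, but the placement of particular styles in particular cells is hypothetical.

```python
# Illustrative sketch of a 4x4 cognitive style matrix.
# The dimensions (horizontal axis) and processing levels (vertical axis) are
# taken from the talk; the example styles assigned to cells below are guesses,
# not Kozhevnikov's actual mapping.

dimensions = [
    "context dependency/independency",
    "rule-based/intuitive processing",
    "internal/external locus of processing",
    "integration/compartmentalization",
]

levels = [
    "perception",
    "concept formation",
    "higher-order cognitive processing",
    "metacognitive processing",
]

# Each (dimension, level) cell collects previously proposed styles that fall under it.
matrix = {(d, lv): [] for d in dimensions for lv in levels}

# Hypothetical examples of mapping older style constructs onto the matrix:
matrix[("context dependency/independency", "perception")].append(
    "field dependence/independence")
matrix[("rule-based/intuitive processing", "higher-order cognitive processing")].append(
    "analytic/intuitive decision-making style")

for (dim, lv), styles in matrix.items():
    if styles:
        print(f"{dim} | {lv}: {', '.join(styles)}")
```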

 

References

Ausburn, L. J., and Ausburn, F. B. 1978. Cognitive Styles: Some Information and Implications for Instructional Design. Educational Communication and Technology 26: 337-54.

Kolb, D. A. 1974. On Management and the Learning Process. In Organizational Psychology, ed. D. A. Kolb, I. M. Rubin, and J. M. McIntyre, 239-52. Englewood Cliffs, NJ: Prentice Hall.


Is Racism a Psychopathology?

Friday, December 6, 2013

Sander Gilman


In the third and final CMBC lunch talk of the 2013 fall semester, Dr. Sander Gilman (Graduate Institute of the Liberal Arts, Emory) treated participants to an engaging presentation on the interconnected history of racism and mental illness in Europe and America during the nineteenth and twentieth centuries. The topic of the talk grew out of a CMBC-sponsored undergraduate course and graduate seminar offered by Dr. Gilman last fall, titled “Race, Brain, and Psychoanalysis.”

Gilman opened by citing a 2012 study conducted by an interdisciplinary team of scientists at Oxford. Based on clinical experiments, they reported that white subjects who were given doses of the beta-blocker propranolol showed reduced indicators of implicit racial bias. The authors of the paper wrote that their research “raises the tantalizing possibility that our unconscious racial attitudes could be modulated using drugs.” Time Magazine soon thereafter ran a headline story with the title “Is Racism Becoming a Mental Illness?” Dismissing these claims as unscientific, Gilman instead posed a different set of questions: at what point, historically, did racism come to be classified as a form of mental illness? Why? And what are the implications of such a “diagnosis”?

The strange marriage of racism and mental illness traces back to the development of the so-called “science of man” in nineteenth-century Europe. At this time, notably in Germany, the nascent discipline of psychiatry was attempting to win status as a legitimate science. Psychologists turned their attention to the issue of race and sought to clarify the connection between race and morals on the one hand, and mental illness on the other. Across the ocean, the assumption in America was that African Americans’ desire to escape the bondage of slavery was symptomatic of an underlying insanity. In Europe, by contrast, the intellectual discussion about race and mental illness concerned the population of European Jews, who, unlike African American slaves, were able to participate in public intellectual life. The pressing question facing European scientists concerned the reportedly high rates of mental illness among Jews. Scholars debated whether these were due to inbreeding or a consequence of domestication and self-isolation. More interesting, however, is the fact that even Jewish scholars accepted the basic supposition that Jews displayed high rates of insanity. Indeed, some concluded that such illness could only be explained as the result of 2,000 years of persecution. The main point about this period of history, then, is that minority groups such as European Jews and African American slaves were thought to suffer from a universal form of mental illness, and this theory was now supported by a distinctly “scientific” diagnosis.

Then a major shift occurred toward the end of the nineteenth century, as scholars began to focus on the oppressor. If mental illness among Jews was indeed caused by a long history of persecution, then what explains racism itself? The question was especially relevant to Jewish thinkers who were trying to understand the factors that might prompt Jews to leave Europe and found their own Jewish state. Thus, in a proto-Zionist pamphlet written in 1882 and titled “Auto-Emancipation,” the Russian Jewish physician Leon Pinsker coined the term “Judeophobia” to designate the mental illness not of the oppressed minority, but of the persecutor. Pinsker claimed that Judeophobia was not unique to any one race, but was instead the common inheritance of all peoples who had ever interacted with the Jews—in other words, nearly everyone. Judeophobia was presented as a disease, a psychic aberration that was hereditary and incurable. This new model of race and mental illness therefore inverted the prevailing view: it was no longer the victim, but the racist, who was crazy.

Whereas the apparent madness of the Jews and African Americans had been attributed to their status as races or biological entities, the larger global population of Judeophobes could not be defined in biological terms. To classify the madness of this diverse collective entity, psychologists at the end of the nineteenth century invented the notion of the “crowd.” A parallel was thereby established between race and the (German) crowd. In the early part of the twentieth century, Freud adopted the idea of the crowd into his theory of group psychology and claimed that racism was a prime example of such crowd madness. Moreover, he argued that it was universal and fuelled by the tendency of the crowd to see itself as biologically different from other crowds. The idea of racism—and anti-Semitism in particular—as a form of psychopathology became a common view by the 1920’s and 30’s.

But the pendulum swung back yet again as scholars in central Europe (many of whom were Jewish) expressed renewed interest in the status of the victim. Scientists speculated that, as a consequence of racial discrimination, there must be a residual aspect of self-hatred among the oppressed. Not only was racism itself a sign of psychopathology, but the response to racism also came to be viewed as a form of mental illness. Anna Freud wrote about the tendency of the victim to identify with the aggressor. For example, in studying Jewish children who had escaped to America after World War II, she noticed that when they played the game of “Nazis and Jews”—the German equivalent of “Cowboys and Indians”—all the Jewish kids wanted to be Nazis. This tendency, scholars argued, pointed to underlying psychological damage.

To summarize the story to this point: we started with the notion that certain oppressed, biological minorities are by definition mad; we then moved to the idea that the racist perpetrators are the ones who are mentally ill; and we ended up with the suggestion that the perpetrators’ own psychopathology (i.e. racism) is in fact the cause of the victim’s madness.

Meanwhile, in the United States, these questions became a central issue. Social psychologists, in particular, were among the first to pick up the notion of the victim’s self-hatred as the result of exposure to negative race patterns. Two leading figures in this regard were the American Jewish researchers Eugene and Ruth Horowitz, whose work on black racial identity was adopted by the African American psychologists Kenneth and Mamie Clark in the 1940’s and early 50’s. The Clarks are perhaps most famous for their “doll studies,” in which children were presented with plastic diaper-clad dolls identical in appearance except for color. The researchers were interested in which children selected which color of doll, and found that black children in segregated schools in the South consistently chose white dolls. They concluded from these experiments that prejudice and segregation cause black children to develop a universal sense of inferiority and self-hatred.

Gilman noted that while the doll studies are problematic in a number of respects, the important point about this line of psychological research is that it moved the discussion in the United States from the politics of prejudice to the psychology of prejudice—what Gilman referred to as the “medicalization of prejudice.” The movement initiated by the Clarks had several important ramifications. On the one hand, it played a positive role in the push to end segregation. On the other hand, the NAACP and other civil rights organizations began to build legal arguments on the doll studies, invoking the universal psychological damage caused by segregation and racism. Such damage, for instance, figured in the reasoning behind the famous Brown v. Board of Education decision. In addition, the focus on the mental injury suffered by the victim meant that the madness of the perpetrator was forgotten. In short, as psychological evidence was introduced as the primary means to influence jurisprudence in America, the victory of ending segregation came at the cost of defining all African American children raised under segregation as psychologically damaged.

With his closing remarks, Gilman mentioned a powerful counterargument to the “racism as mental illness” theory, first advanced by the political theorist Hannah Arendt in the 1950’s. Arendt made the simple claim that racists are actually normal people—to be sure, they are bad people, but normal people nonetheless. In agreement with this view, Gilman argued that the medicalization of social phenomena and the intense focus on the damage suffered by the victim are unhelpful because they tacitly exculpate the racists themselves.

During a stimulating discussion following the talk, Gilman went on to highlight the dangers inherent in the claim that victims of racism suffer from universal mental illness. First, the claim is too general; it defines ab initio all members of a group as damaged in exactly the same way, when in reality not all of them suffer from psychopathology. Second, the African American researchers who conducted the experiments present an obvious exception to their own rule of universal damage. Lastly, with respect to the doll studies, the theory cannot explain those white children who chose black dolls. In the end, Gilman made a forceful case against psychological arguments that combat racism by invoking the notion of universal damage, for if everyone is “damaged,” the category ceases to be useful.


Hearing Voices in California, Chennai, and Accra

Tuesday, March 5, 2013

Tanya Luhrmann


The second CMBC lunch of Spring 2013 was a bit unusual in that it featured only one speaker.  Tanya Luhrmann (Anthropology, Stanford University), known for her work on modern-day witches, evangelical Christians, and psychiatrists, shared some of the findings of her recent research on the auditory hallucinations of schizophrenics.  Her guiding question was: does the experience of hearing voices shift across cultural boundaries?
The presentation began with an audio clip designed to approximate the experience of someone hearing distressing voices.  The eerie clip consisted of a background of ambiguous murmurs and whispers which resolved into discernible words and phrases, such as “Don’t touch me”; “Stop it”; “I came for you.”  A majority of schizophrenic people hear voices, but is the experience the same across cultures?
To answer this question, Luhrmann conducted a comparative study of three groups of twenty schizophrenic people from three culturally distinct places: San Mateo, California; Accra, Ghana; and Chennai, India.  Results were drawn from participants’ responses to standardized questions about the nature of their auditory hallucinations.  Participants were asked questions like: How many voices do you hear?  How real are the voices?  Do you recognize the voices?  Do you have control over the voices?  What causes the voices?  And so on.  On the basis of these interviews, Luhrmann identified some clear differences in the experience of auditory hallucinations across cultures, which she described in her presentation.

 

Americans with schizophrenia, in general, explained Luhrmann, see themselves as suffering from mental illness, are comfortable with identification as “schizophrenic,” and have a sophisticated understanding of diagnostic criteria. The voices they hear are most often violent, issuing commands like “cut off their head” or “drink their blood.”  Positive experiences of the voices are seldom reported.  The voices tend to be unknown and unwelcome, and most American participants reported little interaction with them.

 

The study suggested that Ghanaians, for their part, rarely talk about hearing voices in terms of schizophrenia, but rather tend to give the voices a spiritual interpretation.  Many Ghanaians were reluctant to talk about their mean voices, which they often regard in demonic terms, because another voice, identified as that of God, warned them against paying any attention to the mean ones.  Luhrmann noted that, compared to Americans, Ghanaians exhibited a more interactive relationship with their voices and, in about half the cases, considered their voices to be positive.

 

Finally, in the case of the Indian participants, two distinctive features stood out. First, the voices were more often identified as those of kin than was the case with Americans and Ghanaians; and, second, the voices, whether they were identified as good or bad, often consisted of mundane practical injunctions, such as not to smoke or drink, or to eat this or that food.  In these cases Luhrmann noted even more interaction, which often took the form of a playful relationship with the voices.  Luhrmann described the relationship that one woman had with the voice of the Indian god Hanuman.  The voice initially told her to do despicable things like drinking out of a toilet bowl, but eventually the relationship became such that she would have parties with Hanuman, they would play games together, and she would tickle his bottom.  Luhrmann added that the voices the Indian participants heard talked about sex more often than did the voices in the other two groups.

 

Having outlined some striking differences in the auditory hallucinations of schizophrenics across cultures, Luhrmann went on to speculate about the reasons for the differences.  Her central proposal highlighted differences in local theory of mind.  At some point between the ages of 3 and 5, children come to learn that other people’s behavior can be explained in terms of what those people know and believe.  When children come to learn in this way that other people have minds, developmental psychologists say they have developed a “theory of mind.”  Luhrmann explained that while theory of mind per se is universal, there are cultural variations in how the mind is understood.  In the case of Americans, Luhrmann suggested, there is a sense that the mind is a place, and that it is private.  The experience of foreign voices is consequently viewed as intrusive and unwelcome.  In Accra, by contrast, according to Luhrmann, the boundaries between mind and world are seen as more porous.  Many Ghanaians believe that thoughts can hurt people whether they are intended to do so or not – a belief consistent with the prevalence of witchcraft in Ghanaian culture, and with the emphasis on keeping thoughts clean, for example, in prayer.  Luhrmann explained how this view of the mind as porous is found in Chennai as well, where it is commonly believed that seniors should know what their juniors are thinking.

 

In closing, Luhrmann reflected on the significance of her findings, singling out two upshots in particular.  First, before Luhrmann’s work, most psychiatrists had not considered that auditory hallucinations might differ significantly across cultures.  The second, and perhaps more consequential, contribution concerns the treatment of schizophrenic patients.  Luhrmann mentioned some pioneering clinicians in Europe who argue that the auditory hallucinations of schizophrenics could be rendered less distressing if patients were taught to interact with their voices.  In Chennai, in general, schizophrenia has a more benign course than it does, say, in the United States.  There are a number of possible explanations for this, including the fact that patients generally remain with their families, and that there is little stress on the diagnosis of mental illness.  But part of the explanation could well be the interactive relationship with voices that Luhrmann’s research found to be a feature of Indian schizophrenic experience.  If so, then Luhrmann’s work provides support for the European theory and points the way to an effective form of treatment.  To be clear, even if people report a positive experience of voices, schizophrenia usually remains an unpleasant affliction.

 

As a philosopher, I find rather curious the notion that interacting with hallucinatory voices may be palliative and, hence, something psychiatrists should encourage.  I say “as a philosopher” because philosophers have tended to hold some variation on the theme that the truth sets you free.  Spinoza, for example, thought that when we understand the causes of our afflictions, they afflict us less, because they are more in our power.  To encourage interaction with hallucinatory voices seems, at the very least, tantamount to encouraging a form of magical thinking.  If Chennaians can have positive experiences of voices and their schizophrenia is more benign as a result, then there may well be reason to fear that any “cure” might be worse than the disease.  It is not so clear, however, that Americans could so easily strike up the kind of interactive relationship with their voices that Chennaians maintain if, as Luhrmann contended, Americans’ view of their voices is tied to a more fundamental theory of mind, one that might be both difficult and undesirable to uproot.

 

The reason I imagine it might be difficult is the same as the reason it might be undesirable: alternative theories of mind might simply be false, or at least inferior qua theories.  Even if there is much about the mind we do not know, surely there is something true about the belief that thoughts are only in our heads; and surely the notion that thoughts by themselves can cause harm to other people is not as tenable as the notion that the mind cannot influence the external world on its own.  Perhaps Americans with schizophrenia could come to interact with their voices in the same way that a parent might play along with their child’s personification of a stuffed animal.  And perhaps such a playful comportment would be palliative.  But then the difference is not in theory of mind, since, presumably, parents do not change their conception of how minds work when they play make-believe with their children; rather, the change is in the attitude adopted.  Would pretending that the voices are real, i.e., play-personifying them, suffice to render their presence sufferable, even a positive experience?  Or does such an interactive comportment with the voices only truly work if it is rooted in a theory of mind whereby such pretense is unnecessary?

 

There is some prima facie reason to think that merely pretending that the voices are real so as to facilitate a more interactive relationship might actually help. Even though it would not constitute or effectuate Spinoza’s prescription to understand the causes of an affliction, it might still bring about the desired result of bringing the affliction more within one’s own power.  Treating the voices as objects of play would subject the voices to rules of one’s own making, and this element of control might mitigate the distress of hearing voices.


Perspectives on the 2012 Presidential Election

Tuesday, February 26, 2013

Alan Abramowitz and Drew Westen


In the first CMBC lunch of the 2013 Spring semester, Alan Abramowitz (Political Science, Emory University) and Drew Westen (Psychology, Emory University) offered contrasting analyses of the results of the 2012 presidential election and what they suggest for future elections.  The contrast (mercifully) wasn’t Republican vs. Democrat.  It was, in part, a fascinating disciplinary contrast between the perspectives of a political scientist and a psychologist; and, in part, a contrast reflective of internal Democratic angst: could the Democrats, and Obama, in particular, have done more, or is that wishful thinking given current political realities?

 

From the political science perspective, Abramowitz offered a tripartite explanation of Obama’s larger-than-expected four-point margin of victory.  First, 2012 saw modest economic recovery.  Had the U.S. economy suffered another dip into recession, the results would most likely have been much different.  Second, Obama enjoyed the first-term incumbency advantage, which has seen every first-term incumbent to re-election in the last 100 years, with the exception of Jimmy Carter.  The third reason concerned deep partisan division.  It is perhaps easiest to see why Obama won when you couple the fact that 92% of Democrats voted for Obama with what Republican strategists seemed to be blind to or in denial about until (and, in the case of Karl Rove, even after) the election results had become undeniable: thanks to a rapidly evolving demographic landscape, Democratic voters now outnumber their Republican counterparts.  Abramowitz explained that Obama’s victory wasn’t due to Hurricane Sandy, or the brilliance of his campaign, but rather to the changing face of the American electorate.  Although Romney won the white vote by 20 points, Obama won 80% of the non-white vote, which accounted for almost 30% of the electorate.  If non-whites continue to vote in such proportions for Democrats, the prognosis for Republicans is not good: currently 50% of Americans under the age of 5 are non-white.  Abramowitz also noted, more generally, that young voters today are more liberal on issues like gay marriage, legalizing marijuana, and abortion than older voters, and that this represents a true generational shift, not just a function of age.

 

Despite this unfavorable demographic outlook, Republicans managed, because of gerrymandering, to win a majority of House seats in states like Pennsylvania, Wisconsin, and Michigan, where they lost the popular vote in state-wide elections.  As for the Senate, Republicans would now control it, too, were it not for the unelectable candidates generated by their primaries.  Since rural states like Wyoming (which, Abramowitz pointed out, has a population smaller than DeKalb County’s!) get no fewer Senators than New York and California, and tend to vote Republican, there is a Republican advantage built into the structure of the Senate.  For these reasons, it will be difficult for Democrats to win back the House and keep the Senate in coming elections.  Moreover, in 2016 the Democrats will lose the advantage of first-term incumbency, so it is possible for the Republicans to take back the White House.  Then again, by that time Republicans will face an even more hostile demographic landscape.

 

From a psychologist’s perspective, Westen’s presentation focused on the role of messaging in explaining the successes and failures of the two parties and their candidates.  In addition to echoing Abramowitz’s point regarding demographic disadvantages, Westen noted that Romney ran a particularly myopic campaign, failing to anticipate obvious questions about his wealth and finances.  Westen suggested that Romney could easily have dealt with the drawn-out issue of his tax returns, for example, by making clear that while he pays low rates, he gives generously to charitable causes.  This would have played to many Americans’ conviction that personal responsibility and charity, not government intervention, are the proper solution to social ills.  Beyond Romney’s campaign misadventures, Westen emphasized a deeper, structural factor that accounted for the Republican defeat.  Over the past 30 years, Republicans have created a monster of a message machine.  According to Westen, Fox News and similar outlets have made 40% of the electorate remarkably disinformed and racist.  While the enthusiasm Republican media fuels has helped to win House seats and was instrumental in the elections of George W. Bush, it backfired in 2012.  The Republican base proved at once large enough (and vocal enough) to set the tone of the Republican primaries (which Westen compared to a clown show), but too small to ensure the victory of its candidate at the national level, where a non-white electorate had unprecedented clout.  In reality, argued Westen, Romney is not that much different politically from Obama.  Both the Romney who governed Massachusetts and President Obama have governed from the center-right.  (On Obama’s right-leaning governance, Westen cited his failure to stand up for immigrants, his “evolving” views on gay marriage, hedging on abortion coverage in the healthcare bill, and the contraception coverage loopholes therein.)  But the Romney who governed Massachusetts could not get elected by the Republican primary process – he needed to pivot hard right to appease the Republican base.  By 2040, moreover, the United States is projected to be a “majority minority country,” which is to say, the majority will be made up of non-whites.  If Republican presidential candidates must continue to appeal to a disinformed, racist Republican base, they will have no chance of winning at the national level.

 

But the Democrats have messaging problems of their own, according to Westen, in particular, the lack of a clear and consistent message.  Whereas during the New Deal era, Democrats were defined by a clear commitment to taking care of those in need, the message of today’s Democrats, Westen contended, must pass through the filter of wealthy campaign contributors, and what comes out on the other side is significantly diluted.  In this connection, Westen cited Democrats’ talk of cutting social programs, regressive payroll taxes, and fiscal stability.  In the end, for Westen, the Democrats’ mixed messaging is a symptom of a deeper problem: lack of strong leadership.

 

Westen’s critique of Democratic leadership provoked a debate with Abramowitz over a question that has been raised about Barack Obama, even by his ardent supporters, in response to his first term in office.  Could Obama have achieved more of the progressive agenda he campaigned on in 2008, given the wave of enthusiasm and support that swept him into the White House, if he had been a more effective leader?  Or, given Republican recalcitrance and ill will towards him, expressed, for instance, in Mitch McConnell’s stated commitment to make Obama a one-term President, did Obama do as well as anyone could have?

 

Westen’s contention that Obama could have done more stemmed from his analysis of political messaging.  Westen provided a number of examples of how relatively simple tweaks in the formulation of a political message can make all the difference in how it is perceived by both politicians and the broader public.  To take one, Westen said that the White House should never have talked about a “public option,” since the word “public” tends to make people think of overcrowded waiting rooms and mediocre healthcare.  Westen said that a message as simple as “we’re going to let people buy Medicare and Medicaid” would have avoided the negative connotations of anything “public” and the ensuing furor provoked by the “public option.”  Westen also suggested that the White House could have performed better in the tax debates with a message like the following: “In tough times like these, millionaires ought to be giving to charity, not asking for it.” Westen’s fundamentally optimistic view of what can be achieved with the right message was met with skepticism by Abramowitz, who reiterated his original point about the unprecedented extent of political polarization.  Ultimately, it is difficult to say for sure what would have been possible, if only….  One potential basis for such hypotheticals is offered by historical parallels, but there can always be questions about the extent to which historical precedents are actually parallel in the relevant respects.  Abramowitz rejected Westen’s invocation of LBJ as evidence that Obama could have done more, on the grounds that LBJ enjoyed much larger Democratic majorities than Obama has had, and enjoyed them for a significantly longer period of time.

 

It is perhaps not surprising that no agreement was reached on how to assess Obama’s presidency.  After all, the history is still being made.  I want to underscore, in closing, the significant points on which Abramowitz and Westen agreed.  In the first place, there was clear consensus that the Republican party is badly plagued by structural issues, and that these boil down to demographic factors at odds with what has emerged over the past 30 years as the Republican platform.  Second, both Abramowitz and Westen agreed that the Democratic message is not as clear and consistent as perhaps it might be, although there was disagreement about what this means.  For Westen, it signals a lack of clear, strong leadership of the FDR or LBJ variety; for Abramowitz, on the other hand, it has more to do with the nature of the Democratic party, which has always been more pragmatic than ideological.  This raises an interesting question about the kind of choice we face as voters.  Assuming a two-party system, would we rather have two parties representing competing ideologies, or rather one ideological party and one non-ideological, but pragmatic party?  Which presents the starker contrast?  Which affords the greater chance of bipartisan agreement?  These are some of the questions that Obama’s leadership style has prompted, and I take it that they remain very much open and evolving.


Theories of (Embodied) Mind: Some Thoughts and Afterthoughts

Monday, October 22, 2012

Michael Moon and Elizabeth Wilson

In Spring 2011, Michael Moon (Graduate Institute of the Liberal Arts, Emory) and Elizabeth Wilson (Women’s, Gender, and Sexuality Studies, Emory) teamed up to teach a CMBC-sponsored graduate seminar with the title “Modern Theories of Mind: From Austen to A.I.”  As the subtitle indicates, the course cut across traditional disciplinary boundaries, and its aim was to carve out and investigate a specific area of inquiry, which Moon and Wilson, taking up a recently emerging thread in the critical humanities, call “theories of mind.”  The lunch was an introduction to, and reflection on, the new field that their graduate seminar ventured to explore.

 

So what is this new field, “theories of mind”?  Wilson, who spoke first, laid out the fundamentals.  Theories of mind needs to be seen against the backdrop of two prominent intellectual strains.  On the one hand, there is analytic philosophy of mind, which, in its current incarnation, aims to understand the relationship between consciousness and the brain, and draws on evidence from neuroscience and neuropathology, in addition to more classical conceptual analyses.  On the other hand, there is the sort of post-structuralist approach characteristic of the so-called “critical humanities,” such as feminist theory and post-colonial studies, which attempt in various ways to advance beyond the Cartesian conception of mind, by focusing on the ways socio-cultural dynamics influence subjectivity. Theories of mind sees itself as an alternative to both of these approaches.  In contrast to the study of subjectivity in the critical humanities, theories of mind embraces rather than flees from the idea of mindedness.  In contrast to analytic philosophy of mind, theories of mind seeks to expand the conception of mindedness beyond the full-functioning adult brain in an effort to encompass other dimensions of embodied mindedness, such as visceral, infant, technological, and animal.

 

While theories of mind is broader in scope than most analytic philosophy of mind in going beyond the brain, as it were, it is also focused on a narrower phenomenon: mind attribution.  To what do we attribute mindedness?  What reasons do we have for doing so?  And what are the implications one way or the other?  This is the constellation of questions with which theories of mind concerns itself.  In psychology and philosophy of mind, to treat another human being as if he or she has a mind is to possess a “theory of mind.”  Wilson described a well-known test for theory of mind – the “Sally Ann test.”  Sally and Ann are puppets in a show that young children of various ages are watching.  The children see that Sally puts a marble in her basket and then goes off somewhere.  During her absence, Ann takes the marble from Sally’s basket and puts it in her own.  If the children are asked where Sally will look for the marble upon her return, the answer depends on whether or not they have a theory of mind.  Three-year-olds, who lack theory of mind, say that Sally will look in Ann’s basket, because they do not distinguish between what they saw and what Sally was able to see.  Five-year-olds, who have developed a theory of mind, are able to make the distinction: they realize that, since Sally did not see Ann take the marble, she will assume it is still in her own basket.  The attribution of mindedness is interesting insofar as, at least on one interpretation, we have no direct evidence of any mind but our own.  That is why it is called a theory of mind.  Theories of mind – in the plural – takes its line of questioning from theory of mind, but extends it to raise the question of mind attribution to animals and robots, for example, as well as the social significance of such attribution.  The idea behind theories of mind is that by opening up the question in this way, we might advance our understanding of subjectivity, and, in general, what it is to have a mind.

 

Moon’s presentation consisted of an overview of the 2011 graduate seminar itself – the students involved, the readings, the assignments – as well as some ex post facto reflections.  There were seven students, and they read three novels: Jane Austen’s Persuasion, Mary Shelley’s Frankenstein, and The Call of the Wild by Jack London.  Each novel provided unique fodder for the sorts of analysis characteristic of “theories of mind.”  In discussing Persuasion, Moon introduced the idea of “free indirect discourse,” a style of narrative that combines elements of both first- and third-person perspectives, a topic discussed at greater length in the ensuing Q&A.  Here is an example of free indirect discourse from Persuasion:

 

How Anne’s more rigid requisitions might have been taken, is of little consequence. Lady Russell’s had no success at all–could not be put up with–were not to be borne. What! Every comfort of life knocked off! Journeys, London, servants, horses, table,–contractions and restrictions every where. To live no longer with the decencies even of a private gentleman! No, he would sooner quit Kellynch-hall at once, than remain in it on such disgraceful terms.

 

In this passage, Austen describes Sir Walter’s distraught reaction to the notion of renting out Kellynch Hall, his ancestral home.  “He” is referred to in the passage, so that we know the narration is third-person, and yet the thoughts are to be taken as the very ones in Sir Walter’s head.  There is a blend of the narrator’s interpretation of the thoughts of the character – which brings with it potential for misinterpretation – and also the direct report of what is going on in the character’s head, as could be related by an omniscient narrator (immune to misinterpretation).  Free indirect discourse therefore engages some of the themes of interest to theories of mind: to what extent are we able to attribute mindedness to others?  To what extent is this attribution always colored by our own conceptions?

 

In the seminar, such questions were also explored in the context of artificial intelligences like Frankenstein’s creature, as well as animals such as Buck, the dog of The Call of the Wild, a novel which also employs free indirect discourse as a means of exploring thought attribution to animals.  Moon made the surprising and interesting point that, for theories of mind, anthropomorphic attributions are not necessarily to be shunned.  I found this striking and, strangely, rather uplifting.  In my own experience of graduate seminars on post-structuralism, I was taught that anthropomorphism cancels out the “otherness” or “alterity” of the other.  Surely this is a valid concern.  However, if you can’t use analogical means to understand the other, what can you use?  Perhaps it must be admitted that in some cases, at least, anthropomorphic understanding is better than none at all.  This possibility seems most clearly relevant in the case of animals, to which humans often find it hard to grant any mindedness because of the obvious differences between humans and animals (especially non-primates).  Perhaps a dose of anthropomorphism is precisely what is called for!

 

This recalibration of our assessment of anthropomorphism seems to me to be exactly the kind of re-thinking that theories of mind promises to foster.  Whereas post-structuralism is right to point to the dangers of analogical mind attribution, it winds up leaving little room for mind attribution at all.  In this case, theories of mind recovered the phenomenon of mindedness, as promised, yet with a focus on the dynamics of such attributions.  In doing so, it contributes a useful alternative to both the post-structuralist analysis and analytic philosophy of the mind/brain. It will be interesting to see how the field develops.

 


Images in the Mind

Friday, September 28, 2012

Laura Otis and Krish Sathian


Laura Otis (English, Emory) and Krish Sathian (Neurology, Rehabilitation Medicine and Psychology, Emory) came together in the first CMBC lunch of the year to discuss the formation of mental images.
Otis spoke of work towards a book to be titled Thirty Thinkers, which draws from interviews with a range of creative people about the phenomenology of their mental imagery.  Inquiry into the phenomenology of mental imagery is nothing new in itself, even if it is only in the last forty years that the study of mental imagery has been taken seriously as a subject of scientific research.  Continental phenomenologists like Sartre, Husserl, and Merleau-Ponty pioneered the systematic study of the phenomena of imagination in the first half of the twentieth century.  However, they did so according to the methodology of classical phenomenology – by describing what they took to be the essential structures of the formation of mental imagery by conscious subjects.  In contrast, the interest and promise of Otis’ empirical approach lies in what it reveals about the range of ways in which people form mental images.  The diversity of mental image formation from individual to individual emerged as one of the key takeaways from the lunch.

 

If you close your eyes and think of the word “bridge,” what do you see?  As you read fiction, what sorts of images do you form in your mind?  These are the sorts of questions that Otis asked the participants in her book research, and she posed them to the lunch participants as well.  The latter reported bridge images ranging from “a covered bridge in Indiana” (absent color) to “the river that runs under a bridge” to less specific, more generic images.  No one reported no images at all, but one participant testified to knowing someone who claims never to form mental images.  The range of responses at lunch was in line with the findings of Otis’ book research.  In response to queries about images formed while reading fiction, some participants in Otis’ study said that they read fiction because forming images is intrinsically pleasant; others reported not seeing anything at all and enjoying simply following the play of language.

 

In the 1970s it became popular to distinguish people who are more verbal in their thinking from those who are more visual or spatial on the grounds of functional differences between the brain’s hemispheres.  While it might seem that participants’ reports in Otis’ study reinforce this dichotomy – with those who read for the words, on the left side, as it were, and those who read for images, on the right – Otis urged that the situation is in fact more complicated, citing Maria Kozhevnikov (Radiology, Harvard Medical School), whose research has challenged the old linear spectrum running from verbal to visual, and who will speak on the subject of differences between object and spatial imagery at a CMBC-sponsored event on October 10th.

 

Sathian posed the question of mental imagery from the neuroscientific perspective: what are the neural substrates of image formation?  How much overlap is there between the neural pathways involved in mental imagery and those involved in visual perception?  Evidence suggests that the overlap is substantial, as might be expected.  PET scans and functional MRIs show that visual cortical areas are active in the formation of mental images.  Damage to these cortical areas, moreover, interferes not only with perception but also with image formation: patients with right parietal stroke, for example, who suffer from neglect of the left side of the spatial field, not only fail to notice what lies on the left side of the scene in front of them, but also fail to mention anything on the left side when asked simply to imagine a familiar, remembered scene.

 

Although visual perception and mental imagery share neural substrates, Sathian underscored important differences: while visual cortex subtends the images of both visual perception and mental imagery, the neural pathways that lead to image formation in each case are very different.  Mental imagery stems – in “top-down” fashion – from the frontal cortex, while what we see is received through the eyes.  Sathian was careful to qualify this dichotomy, however, pointing out that in actual cases of perception, and especially in cases of perceiving unfamiliar objects, the activity of mental image formation plays an important role in “interpreting” the data, and therefore in determining the images we actually see.

 

One of Sathian’s most intriguing points concerned non-visual imagery.  Object properties are encoded in multi-sensory representations, not just visual ones.  One interesting question of ongoing debate is whether spatial imagery is specifically visual or rather “amodal.”  The question of the relationship between different kinds of sensory imagery is a fascinating one.  In this regard, Sathian left off with a reference to Oliver Sacks’s recent book, The Mind’s Eye, in which Sacks discusses the phenomenon of blind people actually becoming more visual in their image formation after the loss of vision.  This would seem to underscore the connection between the tactile and the visual (along with providing an interesting variation on the famous problem posed by Molyneux, who in a letter to John Locke asked whether a person born blind, able to distinguish a sphere and a cube by touch, would be able, with vision restored, to distinguish the objects by sight alone).

 

What about a friend who claims never to form images?  Is it possible that images could never appear in someone’s mind?  I must admit, as someone who forms images easily and takes the same delight in them that I do in vivid dreams, it is (somewhat ironically?) hard to imagine an inner life without them.  Perhaps those who report no images are merely misreporting their inner world?  Perhaps they are merely less conscious of the images they are forming than others?  In response to this line of questioning, Sathian pointed out that there is some tendency for brain scans to correlate with reports of the vividness of imagery, suggesting that if someone reports a lack of imagery, there may well be some neural basis to the claim.  In addition, Sathian explained, even if someone lacks vivid visual imagery, they most likely form other kinds of sophisticated sensory images, such as auditory or spatial ones.  For her part, Otis offered the image-less friend her vote of confidence, cautioning against the presumption that others’ inner worlds need be akin to our own.  In illustration, she drew an analogy that elicited laughs from the audience: we don’t talk to one another about our bathroom routines, and in consequence simply imagine that everyone’s is just like our own.  We cannot know, however, whether this is the case.  Likewise, since we never see what is going on in other people’s minds, we naturally imagine it to be just like our own.  This is hasty.

 

The discussion of the image-less friend nicely encapsulated what were perhaps the chief upshots of the lunch: on the phenomenological side, there is increasing appreciation of the diversity of ways in which, and degrees to which, people form images in the mind; on the neural side, advances in brain imaging techniques and, more generally, in the understanding of the interplay of distinct neural functions are helping to make neuroscientific sense of the range of phenomenological reports.  Because of the need to reconcile the phenomenology with the neuroscience, and because of the importance of image formation for the work of humanists and scientists alike, the study of mental imagery is an apt poster child for collaboration across traditional disciplinary boundaries.  In other words, it was a perfect subject for a CMBC lunch.


Narrative: Films and Texts

Tuesday, March 20, 2012

Salman Rushdie


In an eagerly anticipated CMBC lunch seminar that filled to capacity minutes after registration opened, Emory University Distinguished Professor and acclaimed writer Sir Salman Rushdie shared his views on the nature and role of narrative in the arts. Focusing on similarities and differences in how narrative functions in literature, film, and television, Rushdie led a fascinating discussion with student and faculty attendees on the challenges of tailoring narrative to the specific medium in which it is presented.

Rushdie began the session by talking about the forces that shaped his thinking about narrative. As a child growing up in Bombay, India, Rushdie was immersed in the narrative tradition of “wonder tales” – folk stories with fantastical elements such as the genies and magic lamps of The Arabian Nights. Despite their extraordinary premises, such stories should not be dismissed as mere escapist entertainment. According to Rushdie, they have the same potential to reveal human truths as more naturalistic forms of writing. The Western notion that realism represents truth is an illusion, Rushdie suggested; fantasy is simply another route to the truth. For Rushdie, the fantastical nature of the stories to which he was exposed as a child served to highlight the separation between fiction and reality, showing how each could inform our understanding of the other. Another major influence on the young Rushdie was the style of cinema now known as Bollywood. At the time, Rushdie explained, popular cinema in India tackled major social issues such as poverty and gender inequality, demonstrating that narrative could be both entertaining and socially significant.

Turning to the function of narrative in literature, Rushdie noted that good literature does not always require a strong narrative thrust. Joyce’s Ulysses, for example, is driven primarily by language and character, not by plot. At the same time, literary fiction and narrative need not be regarded as separate genres, as exemplified by the engrossing works of Dickens and Defoe. Unfortunately, when literature and narrative do diverge, the reader tends to favor the latter, with Rushdie citing as evidence the mass consumption of the Twilight series and other popular works of questionable literary merit. Rushdie believes that the separation of narrative from literature has been to the detriment of literature, and in his own writing, he seeks to bring the two back together. He likens writing to conducting an orchestra, in that the writer possesses many different instruments, each suited to playing different types of music. With each novel, the challenge is to choose instruments that will best showcase the music that the writer wants to conduct. Over the course of a writer’s career, he will ideally make use of the entire orchestra.

Rushdie went on to observe that films, unlike novels, must create narrative engagement – not to mention emotion, intellectual stimulation, and psychological depth – without being able to provide a direct window into the minds of their characters. Whereas a novelist can fully mine a character’s internal life (even without a first-person narrative), and can often enter and exit a character’s mind freely, this interiority is much more difficult to achieve in film. According to Rushdie, the challenge for screenwriters and directors is to find the dramatic action that reveals a character’s thought process – to show, not tell. Skillful screenwriters are able to highlight the difference between what people say and what they think, all from an external perspective. Skillful directors use the camera to create meaning, choosing exactly what the world captured by the camera should contain. Much of the meaning of a film, Rushdie suggested, is created in the editing room, with sequences of shots forming a nonverbal rhythm that dictates how viewers should experience the film. Novels, in contrast, are less prescriptive; because they exist to some degree in the reader’s interior space, there is more active engagement with the work. In film, techniques of cinematography, montage, and music are used to engage viewers in a narrative that is given to them essentially fully formed, rather than shaped and elaborated by the viewers’ own minds.

One issue that came up during discussion was why books tend to be regarded as artistically superior to their film adaptations. Rushdie suggested that the primary reason may be that books must almost always be condensed for the screen. In preparing the screenplay for his novel Midnight’s Children, Rushdie made a list of scenes that he regarded as critical to the story. As it turns out, half of those scenes will not be included in the final version of the film, to be released later this year. The experience showed Rushdie the need to identify the essence of his novel – the parts of the story that, if omitted, would leave the film no longer an adaptation of the novel at all. “Adaptation,” Rushdie mused, “is a great lesson in the fact that the world is real.” It seems that adapting a novel for film is inevitably a balancing act between faithfulness to the original work and the leaner storytelling demanded by the narrative limitations of the medium. Even when a film achieves the right balance, some viewers remain unsatisfied because any deviation from the novel strikes them as unacceptable. I wonder if such purists feel so strongly because they engaged in particularly elaborate mental imagery while reading the novel. Perhaps the richness of one’s internal experience of a novel is inversely related to one’s enjoyment of the corresponding film adaptation.

Some of Rushdie’s most intriguing observations concerned the nature of narrative in television. Rushdie, like many critics today, believes that we are currently in a “golden age” of television drama, due in large part to the creative freedom afforded to writers by cable networks, which place few restrictions on sex, violence, and nudity. Unlike the screenwriter of a film, the writer of a television show is typically the central creative artist. The show can also evolve while it is airing, with the audience influencing the narrative through its response to particular characters or plotlines. Moreover, the narrative format is unique in that the story deliberately does not finish; the writer must craft a compelling dramatic arc, but must continually end on a question mark so that the audience keeps tuning in. This open-ended format, Rushdie noted, would be unsatisfactory in a novel or film, in which resolution is expected. Sometimes certain questions remain unresolved even at the end of a series’ run. Rushdie suggested that, because of the inherently serial nature of television, it may be virtually impossible for a series to tie up all loose ends in a way that satisfies die-hard viewers. Nevertheless, the greatest strength of a television series, according to Rushdie, is that it happens over time. Viewers are able to track a character’s emotional life through events spanning months and years, allowing the narrative to take on the complexity of a novel. [Rushdie fans will be delighted to learn that he is currently developing a television series for Showtime called The Next People, with a “paranormal sci-fi” premise.]

Rushdie’s insightful remarks left me wondering how narrative operates in other art forms. Rushdie described film as a descendant of painting and theater, with painting providing the form and theater providing the dramatic conventions. In the visual arts, narrative is most readily apparent in realistic works. Unlike the wonder tales of Rushdie’s youth, less representational art may struggle to evoke a sense of narrative, perhaps because of the limitations of the two-dimensional canvas. In theater, there may be greater narrative engagement than in film or television because characters’ internal lives are often more accessible on stage than on screen. Devices such as soliloquies and asides, though perhaps specific to certain theatrical genres, allow the audience into a character’s mind. Moreover, there is a certain narrative freedom to the stage, as two actors can be standing side by side even while their characters are in different places or time periods. Ultimately, the unexpected ways in which narrative can manifest across art forms suggest why we never tire of experiencing new adaptations of our favorite stories.

Posted in 2012 Archives

Handwriting: The Brain, the Hand, the Eye, the Ear

Friday, February 24, 2012

Evelyne Ender


Earlier this semester, the CMBC hosted a lunch seminar led by Dr. Evelyne Ender, Professor of Comparative Literature and French at Hunter College and the Graduate Center at the City University of New York. In a wide-ranging talk, Ender discussed her ongoing work on graphology, the interdisciplinary study of handwriting as a window on human expression and creativity.

From her perspective in the humanities, Ender explores the connection between what art offers and what research in cognitive science has revealed about the mechanisms underlying artistic expression. She is particularly interested in the tools humans use to express their humanity, focusing specifically on handwriting. In a world in which writing increasingly occurs on the computer screen rather than by the tried-and-true method of applying pen to paper, we may easily forget the degree to which handwriting fulfills, as Ender put it, “a deeply ingrained human need for communication.” Moreover, recent work in neuroscience suggests that the act of handwriting may itself give rise to substantial cognitive benefits. Ender pointed to a recent commentary in The Chronicle of Higher Education by Mark Bauerlein, Professor of English at Emory, who argued, on the basis of neural evidence, that instruction in handwriting at a young age facilitates the development of literacy skills, presumably by linking specific hand movements to the visual recognition of letters and words. This tight coupling between human perception and performance suggests that the study of handwriting may uncover clues about the workings of the human mind. A collection of readings selected by Ender for the seminar (on haptics, rhythm and timing, the structure of symbols, and the history of stenography) offered further evidence of the growing scientific interest in handwriting.

Ender’s discussion of her current project, entitled The Graphological Impulse, began with a review of the rather peculiar history of graphology. Nineteenth-century France saw the development of a method for analyzing basic personality and character traits through the examination of a person’s handwriting. This method manifested perhaps most strikingly in hiring practices, with employers demanding handwritten letters of application to be analyzed by a graphologist. In some cases, job candidates whose handwriting suggested a personality profile unsuitable for the desired profession were eliminated from consideration. Because no correlation between handwriting and high-level character traits has ever been empirically established, graphology is now regarded as a pseudoscience. Nevertheless, Ender maintained, an analysis of handwriting may provide insight into what it means to be human. Ender suggested that humans possess a fundamental drive to physically inscribe, scribble, doodle, sketch, outline – just a few of the manual movements we employ to (quite literally) make our mark on the world. Ender characterized this drive as an impulse, strong enough to transcend physical limitations. She recounted the famous case of Nannetti, a patient held in a primitive psychiatric hospital in Italy. Prevented from using writing tools, Nannetti felt such a need to leave a written trace that he carved stories into the walls of the hospital with nothing more than the metal buckle of his hospital uniform.

Ender’s project is organized as an in-depth case study of the intersection between the composer Frederic Chopin and the novelist George Sand, who came together both creatively and personally for a brief period during the nineteenth century. For Ender, this exchange between two groundbreaking artists highlights the interaction between the creative brain and the external environment, with art reproducing a specific phenomenological experience of being in the world. According to Ender, the exchange between the brain, the body, and the world is nowhere more evident than in the handwritten page. To illustrate this idea, Ender presented slides from a handwritten draft of one of Sand’s classic novels. The remarkable grace and fluidity of the script were evident in the slides, which also demonstrated the complexity of semantic and oral “coding” exemplified by Sand’s prose. Sand is said to have written for long stretches at night in a free-flowing, “disinhibited” manner. Despite such apparent spontaneity, Ender noted that writing page after page of script with minimal corrections, ultimately producing a nearly print-ready piece after some thirty hours of manual labor, is an extraordinary skill. To put pen to paper in such an expert fashion requires an underlying mastery of the mappings between sound and visual form, grammatical knowledge, proficiency with spelling, manual dexterity, and fine motor control, among many other sophisticated abilities. As I enjoy the relative luxury of typing this commentary on my computer – making countless edits and deletions, consulting my word processor’s built-in thesaurus, cutting and pasting at will, creating a backup copy with the click of a button – I am even more impressed at Sand’s achievement.

Of particular interest for Ender is the degree to which the quality of an artist’s handwriting is correlated with the fluidity of the creative experience. When Sand swapped pens while writing, for example, did this refreshing of instruments also serve to refresh her ideas? And when she was writing prose that was especially lyrical, did she engage in correspondingly rhythmic auditory imagery? Ender proposed that such questions might be fruitfully addressed through interdisciplinary exchanges between the humanities and the sciences. Cognitive science research exploring the nature of cross-modal sensory representations, such as those between vision and audition, might be particularly informative. Some individuals, known as synesthetes, experience consistent mappings between visual and auditory stimuli (e.g., certain letters and words invariably evoke certain colors), and such mappings have been regarded as exceptional cases of the type of everyday cross-modal associations that we all experience (e.g., the association between speech sounds and the mouth shapes that produce them). It might be interesting to examine whether such perceptually rich representations are more likely to be elicited by writing figurative or metaphorical language than by merely comprehending it. Such a possibility suggests how one important aspect of creativity, namely the ability to draw links between seemingly disparate sensory phenomena, might be operationalized. The richness of one’s mental imagery during the artistic process might, for example, predict the ultimate creative impact of one’s product.

Ender’s presentation left me wondering whether handwriting, rather than providing a unique window on the creative mind, might be better characterized as but one of many skilled, highly automatized behaviors through which we, perhaps unwittingly, express our creativity. For example, although the primary purpose of walking is to get us where we need to go, no two people walk the same way. The idiosyncratic gaits we adopt may, like the distinctive output of our pens, reveal much about our individuality. One might also argue that spoken – as opposed to written – language, in requiring the complex, rapid-fire coordination of multiple parts of the vocal tract to convey intention and meaning, is an even more impressive creative feat (and arguably more fundamental to human expression, given that not all languages have writing systems). With technological advances comes the temptation to bemoan the loss of older, “purer” forms of communication. But although handwriting may be in danger of becoming a lost art, we will surely find other, no less striking ways of manifesting the creative impulse within us.

Posted in 2012 Archives

Cultural and Neuroscientific Perspectives on Emotion

Monday, October 31, 2011

Jocelyne Bachevalier and Dierdra Reber


The CMBC’s second lunch seminar of the fall semester brought together Dr. Jocelyne Bachevalier (Psychology) and Dr. Dierdra Reber (Spanish and Portuguese), Emory scholars from markedly different disciplines who share common interests in the study of emotion. Despite their differing approaches, the presentations by the two speakers – who represent the fields of affective neuroscience and cultural criticism, respectively – converged on a number of unifying themes, many of which came up in audience members’ subsequent questions. In this commentary, I’ll highlight these points of convergence and offer some additional food for thought (or food for feeling, if you will).

In her opening remarks, Bachevalier stressed the adaptive role of emotion for survival, but noted that only recently has empirical research on affective processes become widespread. Advances in operationalizing such processes, previously regarded as too subjective to be studied systematically, have led to a dramatic increase in neuroscientific research on emotion, including the use of nonhuman primate models. As described by Bachevalier, early research on emotion in nonhuman primates relied primarily on the lesion method to gain insight into the functional neuroanatomy of emotion. Focusing on the region of the brain known as the amygdala, researchers found that lesions to this area in rhesus monkeys produced drastic changes in affective behavior. Among the resulting symptoms, collectively referred to as Klüver-Bucy syndrome, are visual agnosia (the inability to recognize familiar objects) and hypoemotionality (diminished affect, with aggression replaced by docility). Bachevalier observed that a key problem with lesion experiments is that they necessarily involve damage not just to the amygdala, but also to connective fibers from the visual cortex (which may explain the recognition deficits). To avoid this confound, Bachevalier’s research isolates functions specific to the amygdala by injecting drugs that selectively kill amygdala cells while leaving other fibers intact. Using this method, Bachevalier has shown, contrary to earlier findings, that amygdala damage results not in an overall dampening of emotional responses, but rather in an inability to modulate emotion, whether positive or negative. For example, animals with amygdala damage may show both excessive hostility and a heightened tendency to seek out positive stimuli. Thus, Bachevalier’s research has clarified the critical role of the amygdala in triggering adequate and appropriate emotional responses.

In contrast to Bachevalier’s empirical approach to investigating emotional experience, Reber focuses on the cultural meaning assigned to emotion. In her opening remarks, Reber suggested that there has been an epistemic shift in recent years toward the privileging of body over mind in characterizing how we experience and interact with the world. According to Reber, the current cultural landscape – as evidenced through various forms of media – promotes the apprehension of experience via feelings rather than via the tools of reason. For example, in illustrating Christ’s life primarily through vivid depictions of brutality, the film The Passion of the Christ relies on eliciting emotional responses in viewers rather than engaging their analytical capacities. Indeed, several viewers reportedly suffered heart attacks in the movie theater as a result of their strong emotional reactions to the film. Reber described several other examples of this use of emotion to rouse public sentiment, including George W. Bush’s politics of fear and the McDonald’s “i’m lovin’ it” advertising campaign. These and other observations led Reber to conduct an interdisciplinary examination of emotion’s rise in cultural prominence. Evidence for this trend comes from advances in understanding the neural bases of emotion, greater focus on the role of emotion in politics and decision-making, and the development of alternative ways of characterizing cultural experience that do not rely exclusively on verbal language. Reber suggested that a key factor in the shift toward a more affectively driven culture was capitalism. The notion of the invisible hand, originally coined by the economist Adam Smith to describe the self-regulating nature of the marketplace, stands in contrast to the more hierarchical, taxonomic qualities associated with reason and rationality. The fall of the Soviet Union and the triumph of capitalism, Reber asserted, opened the door to valuing affective experience, and the notion of thinking through feeling, at a broad cultural level.

Perhaps the most obvious connection between the two speakers’ presentations was the emergence of interest in affect across different fields. Bachevalier noted that early research in neuroscience privileged “cold” cognitive processes, and that it was not until much later that the role of affect in continually (and often unwittingly) modulating such processes (“hot” cognition) was appreciated. Similarly, one of Reber’s main points was that capitalism enabled a redefinition of emotion, previously regarded as chaotic or uncultivated, into positive terms. This change in characterization led to the valorization of emotion as a primary way of conveying meaning across a wide range of cultural forms. For example, as suggested by an audience member, social movements were once regarded as driven by reason, but when people no longer saw themselves as fundamentally rational beings (as in the movements of the 1960s), emotion became the catalyst. Reber cited the characterization of current popular protests across the world as movements of “indignation” as evidence that we are perceiving and analyzing social action in emotional terms.

One interesting issue raised in discussion was the idea that the study of emotion may inevitably, and perhaps paradoxically, require the tools of reason. For example, neuroscientists use systematic, quantitative methods to break down affective processes into specific components, and literary critics often describe emotion with language far removed from the vividness of affective experience. While Bachevalier pointed to the difficulty of separating emotion from cognition in scientific research, Reber suggested that the humanities may allow for greater variation in the language used to describe emotion, including the approach of purposefully avoiding conclusions and causal links as an alternative to rational discourse. Nevertheless, the methods primarily used to study emotion may serve the essential purpose of distancing researchers from their own emotions. Given that emotional signals can influence behavior even outside of awareness, it may be necessary for all disciplines to have precautions in place to avoid bias. As someone interested in the evocative power of language, I find it particularly noteworthy that language’s inability to capture our rich affective experience may be precisely what facilitates progress in the scholarly understanding of emotion.

Posted in 2011 Archives

What is Language?

Tuesday, October 4, 2011

Robert McCauley and Susan Tamasi


Last spring, Dr. Susan Tamasi (Linguistics) and the CMBC’s own Dr. Robert McCauley co-taught the CMBC-sponsored undergraduate course Language, Mind, and Society (LING 301), a required course for the Linguistics major and the joint major in Psychology and Linguistics. On the first day of class, students were asked to complete a short writing assignment, tackling the question “What is language?” Fittingly, this question also served as the launching point for discussion when Tamasi and McCauley reunited for the CMBC’s first lunch seminar of the fall semester on Thursday, September 22. Following opening remarks from each speaker, student and faculty attendees across a range of disciplines offered their own insights on the nature of language. In this commentary, I’ll highlight some of the key issues raised in the speakers’ presentations and in the lively group discussion that ensued, while also weaving in my own thoughts and reactions.

In her opening remarks, Tamasi noted the difficulty of coming up with any single definition of language. Various definitions offered by students in LING 301 focused on language’s role as a biological construct, a reflection of thought, a transmitter of information, and a marker of social identity, with some arguing that these characteristics render language a uniquely human trait. Tamasi suggested that defining language may not simply be a matter of combining all of these characteristics; rather, there may be some additional, perhaps intangible, property yet to be identified that captures what language is at its core. The fundamental nature of language, Tamasi went on to suggest, cannot be identified merely by examining individual languages, nor by focusing exclusively on the structure of language apart from how it is used by individual speakers and communities.

McCauley’s opening remarks centered on defining language as an abstract concept, or system, much as one might study religion in general as opposed to individual religions. Drawing on his perspective from the philosophy of science, McCauley suggested that explanations of systems are most fruitful when they are mechanistic; that is, when they define the parts, their organization, and their contributions to the operation of the system. Chomsky’s conception of language as primarily a system of thought, rather than a medium of communication, lends itself to such mechanistic analysis (even if Chomsky himself has not pursued that end). According to McCauley, psychological accounts of language have the benefit of localizing the mechanisms underlying language to a physical substrate, namely the brain, with advances in the study of cognition in turn contributing to our understanding of language. McCauley also characterized language as one of several maturationally natural systems, defined in part by their significance in addressing the basic problems of life, their comparatively early appearance in development (within the first two decades of life), the automaticity with which they are engaged, and their lack of dependence on culturally distinctive support.

One issue that came up in discussion was how these criteria apply in cases of so-called feral children (e.g., “Genie”), those who grow up isolated from human contact from a very young age and who consequently are unable to develop normal language abilities, among other cognitive skills. If language does not depend on culturally distinctive support, it might seem that such children should be able to acquire language, certainly once discovered and exposed to the social world. According to McCauley, however, invoking “culture” in such cases may be overly grandiose; what feral children lack may amount simply to the opportunity to interact with even one other conspecific. Although interactions between individuals must be critical to the typical development of language, they constitute culture in only its most minimal sense. McCauley’s point is that there is nothing special about the social interaction necessary for acquiring language, only that it must be present in some form. While parsimonious, this line of reasoning leaves open the question: what is culture? At what point does a particular type of experience become distinctively cultural rather than merely providing the “bare bones” of culture? For ethical reasons, experimentally isolating the social ingredients essential for acquiring language has generally been regarded as impossible, but perhaps greater understanding of the cognitive mechanisms supporting various social phenomena may shed light on what types of social experience are necessary versus merely ornamental.

Another discussion point concerned the extent to which language is unique to humans. Non-human animals certainly have communication systems (and some have even been successful at acquiring sizeable vocabularies), but Tamasi suggested that equating such systems with human language assumes that the principal function of language is to communicate information. Instead, Tamasi stressed, we must take seriously the structural complexity of language at multiple levels (e.g., phonological, morphological, syntactic, etc.), which allows for the coordination of thought and a level of generativity unseen in other communication systems. But do such systems possess a hidden complexity that we cannot recognize simply from observable behaviors? Perhaps, but while we cannot zoom in on the structure of such systems directly, there is one place we can look: the brain. We might posit that the “waggle dance” of the bee, which serves to recruit other bees to forage in the same area, is incredibly complex, but it is unclear how the neural system of the bee could support such complexity, particularly given that we now have some understanding of how properties of human language like recursion are neurally instantiated. As McCauley noted, knowing where to look for the mechanisms underlying language offers a methodological opportunity to understand the nature of linguistic complexity. This opportunity may be wasted if we insist on parity across species.

Interestingly, however, many linguists assume parity across individual languages. Although one language might be regarded as more complex than another in some aspect of its structure (with consequences for how easily such structure is learned), languages are assumed to be equally complex overall because they all serve the same purpose. But if languages are defined by their complexity, this argument seems circular. If some recently discovered language were found to be less complex overall, we would have to conclude that it was not in fact a language (or, alternatively, that its complexity had yet to be discovered). I wonder if it would benefit the field to abandon the notion that there are necessary and sufficient features for language, given the observation from cognitive science that it is virtually impossible to define necessary and sufficient features for anything, whether it be chairs, games, or ideas. Acknowledging a continuum of complexity in language would not render the construct “language” meaningless; it would merely suggest that there is no clear distinction between what counts as a language and what doesn’t. Of course, given that language has often been viewed as a window into the mind, abandoning the parity assumption might open another can of worms by implying an extreme form of linguistic relativity, or cognitive differences among speakers of different languages. Nevertheless, recent research indicates that there is no simple one-to-one mapping between language and the conceptual system (e.g., Malt et al., 2011; for a review, see Wolff & Holmes, 2011), suggesting that differences in linguistic complexity do not necessarily entail analogous cognitive differences.

One important aspect of what defines language seemed to be missing from the lunch discussion: the world. As cognitive scientists have noted, the world is richly structured, with inherent discontinuities in how properties are distributed. For example, concrete objects like dogs and tables form more coherent perceptual bundles than relational notions typically encoded in verbs and prepositions (e.g., throw and in). This structure constrains which components of meaning are encoded in language more generally and also serves as a standard of comparison when considering semantic variation across languages. Recent evidence suggests that universal properties of human perceptual experience may lead to a conceptual space largely shared across languages, with different languages partitioning the space differently. Our understanding of language may be enriched by considering the complex interactions among mind, world, and society that give rise to this fundamental human capacity.

References

Malt, B. C., Ameel, E., Gennari, S., Imai, M., Saji, N., & Majid, A. (2011). Do words reveal concepts? In L. Carlson, C. Hölscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Conference of the Cognitive Science Society (pp. 2884-2889). Austin, TX: Cognitive Science Society.

Wolff, P., & Holmes, K. J. (2011). Linguistic relativity. Wiley Interdisciplinary Reviews: Cognitive Science, 2, 253-265.

Posted in 2011 Archives