Situations matter; they have an effect on us all, great or small. We are told we should walk a mile in someone else’s shoes, to “look at it from my point of view,” and we would never dream of swearing in the principal’s office. So why are we so quick to judge others’ behavior as if the situations they find themselves in are irrelevant? Imagine you are walking with a friend along a crowded city street. Just up ahead, the wind starts to gust and some money floats down from a woman’s pocket, falling to the ground directly behind her. A man darts from the throng of people and rushes to pick up the cash as it begins to flutter in the breeze. He looks around at the gathered crowd, and then proceeds to chase after the woman, breathlessly handing over her lost money as he catches her. She thanks him graciously, and goes about her day, perhaps later telling a friend how lucky she was to have her money returned by a stranger. The rest of us, as onlookers, are left with a difficult social puzzle to solve. Why did the man give back the money?
A few competing explanations may immediately come to mind. Is he a kind man who would do that for anyone? Is he perhaps an old-fashioned man, who might do that for a pretty girl, but not for another man? Or was he just doing what any of us would have done given the surrounding crowd of onlookers? Each of these possibilities appears quite plausible at first blush, and given the rather threadbare description of the event, one would imagine that people would split on which explanation they prefer. After all, we really have very little to go on. Curiously, though, when people actually reason about stories like this, they most often prefer to explain the man’s behavior in terms of his traits. You might plump for him being a stickler for honesty, but you are very unlikely to think that he was just behaving as anyone would in that situation. Further, despite the complex and uncertain circumstances, these decisions are made very quickly and with great confidence.
This tendency—to assume that a person acts because of his or her dispositions, ignoring the influence of the situation—has the rather grand title of the fundamental attribution error (FAE; Ross, 1977). It describes the idea that we make attributions that are fundamental to the person’s character, attributions that often overlook clear situational causes of behavior. Someone sitting quietly on the bus, ignoring others, might be cast as aloof and introverted, yet the social script for bus-sitting strongly discourages loud conversation with strangers (Wesselmann et al., 2012). Theorists argue that people attribute behavior to others’ dispositions spontaneously (the honest man), only correcting their attributions deliberately (we all would have done the same) if they have the time and inclination to do so (Gilbert, 1998a). Social psychologists have done very well in explaining the circumstances that give rise to this error, but a clear account of why we do it has remained elusive.
New research in social neuroscience suggests that our propensity to look to the person, and not the situation, for the causes of behavior may be the result of spontaneously thinking about mental states. Work that my colleagues and I have done suggests that the FAE might be a byproduct of a highly evolved system that takes the behaviors we see and converts them into models of the actors’ underlying mental states (Moran, Jolly, and Mitchell, 2014). This work used functional magnetic resonance imaging (fMRI) to show that higher activity in brain regions associated with representing others’ mental states—specifically the medial prefrontal cortex (e.g., Amodio and Frith, 2006; Wagner et al., 2012)—predicted whether participants would say that a behavior they read about was caused by the person’s disposition rather than by the situation the person was in. That is, if a part of your brain specialized for thinking about others’ mental states is spontaneously active when you see other people’s behavior (or read about it, in this case), you are more likely, when asked later, to explain their behavior in terms of their traits or dispositions.
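The logic of that brain-behavior link can be illustrated with a toy analysis. The sketch below is emphatically not the pipeline from Moran, Jolly, and Mitchell (2014); it simulates data under the study's central assumption (that the signal in a mentalizing region runs higher on trials where the reader later gives a dispositional explanation) and then checks that the signal predicts the attribution.

```python
import numpy as np

# Illustrative sketch with synthetic data, NOT the actual analysis of
# Moran, Jolly, and Mitchell (2014). Assumption: mPFC activity is, on
# average, higher on trials later explained dispositionally.
rng = np.random.default_rng(0)

n = 200
# 1 = participant later gave a dispositional explanation, 0 = situational
attribution = rng.integers(0, 2, size=n)
# Simulated mPFC response while reading the behavior: shifted upward on
# dispositional trials, plus trial-to-trial noise.
mpfc_signal = 0.8 * attribution + rng.normal(0.0, 1.0, size=n)

# The core claim: signal measured earlier predicts the later attribution.
# A minimal check is the point-biserial correlation between the two.
r = np.corrcoef(mpfc_signal, attribution)[0, 1]

# A trivial classifier: call a trial "dispositional" when the signal
# exceeds the overall mean. Above-chance accuracy illustrates prediction.
pred = (mpfc_signal > mpfc_signal.mean()).astype(int)
accuracy = (pred == attribution).mean()

print(f"correlation r = {r:.2f}, threshold-classifier accuracy = {accuracy:.2f}")
```

Note that, as in the real study, this only establishes a correlation between the simulated signal and the choice; the causal question is taken up later in the chapter.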
Other possible explanations for the error have also emerged. For one, it might seem obvious that if we can’t see the situational forces at work, we default to assuming that a guy who jumps ahead of us at the pharmacy is simply not a nice person. After all, we can’t know that he is racing home to give life-saving medicine to his sick daughter; all we can see is his upending of the social order. On this view, it’s not that we necessarily default to viewing his behavior as imbued with his intentions, just that we are blind to how the situation has constrained it. Another proposed explanation for the FAE is the just-world hypothesis. Lerner (1977) suggested that people are motivated to view the world as one that rewards good behavior and punishes bad behavior—a world in which people get what they deserve. It is relatively common to hear people blamed for being overweight, when physiological circumstances outside their control might contribute more than any apparent laziness (Gaesser, 2002). Here we commit the FAE because we assume that they have gotten what they deserve—that the situation (in this case a combination of environmental and non-conscious physiological factors) has played no role in causing their obesity. A further possibility, originally raised by Gilbert, Pelham, and Krull (1988), is that the error is committed automatically, and only if we are willing and able to expend the effort to consider the influence of the situation do we then correct our initial assumption. This possibility is quite compatible with the evidence from neuroscience. However, none of these ideas about why we ignore the situation quite captures the original research on the FAE, which revealed that even when subjects are told directly that the situation caused the behavior, they often still explain the behavior in terms of dispositions.
In fact, the first work to demonstrate the FAE was designed to show that people can be swayed easily by situations (Jones and Harris, 1967). In Jones and Harris’s seminal work, people read essays purportedly written by American students that were for or against the then Cuban leader, Fidel Castro. Being pro-Castro in the States was deeply taboo at the height of the Cold War, and Jones and Harris felt sure that their readers would see through this ruse, and never believe that any right-minded student would willingly support Fidel Castro. In one condition, readers were told that the essay writers were free to choose their pro- or anti-Castro positions. They read the essays, and then judged whether the writers themselves were in fact pro- or anti-Castro. Jones and Harris rightly surmised that their subjects would impute that pro-Castro essay writers were pro-Castro in their private views, and that anti-Castro essay writers were not so in favor of Cuban policy. In the experiment’s second condition, however, different subjects were told that the writers’ positions had been assigned by a coin-flip. That is, the situation the (fake) writers had found themselves in led directly to their support for naked Communism. When Jones and Harris’s intrepid subjects were asked about the writers’ true feelings about Castro, the scientists intuited that the subjects would reason that the writers were simply following orders; that their essays extolling the virtues of forced rationing were written that way because the writers had been given no choice. The experiment’s big surprise was that the results were just the same as in the first, freely chosen, condition: pro-Castro writers were viewed as just as pro-Castro as those who had been free to choose their own topic. Participants failed to make use of the information about the strong situational forces on behavior, and erroneously inferred that the stance of the essay fully reflected the stance of the writer.
Something unexpected was going on, and Jones and Harris had stumbled upon what became a bedrock finding in social psychology. Writing ten years later, Lee Ross coined the phrase the “fundamental attribution error” (1977) to describe both this result and the many replications that had upheld the primary finding. Ned Jones, co-author of the original study, found the phrase misleading but was also miffed that he had not had the forethought to coin it himself (Gilbert, 1998b).
But what exactly was going on? Why would people who knew that a position had been assigned via a coin-flip confidently ignore that information and believe that the writers held such a socially undesirable position in their hearts? As previously mentioned, one possible reason for this error—a mechanism for it—is that people, when they encounter other people, are engaged in constant prediction about what those others will do next. Because these predictions are so adaptive, necessary, and automatic, it might simply be second nature to have a well-traveled causal highway linking behavior and intentions—after all, if we know someone’s intentions, we can do a good job of figuring out what action they will take next—rather than to think about the things in the environment that could also have caused their behavior. The cognitive process that allows us to figure out intentions has been referred to as mentalizing or theory of mind, terms used almost interchangeably in the literature (Baron-Cohen, Leslie, and Frith, 1985). When we look to the cognitive neuroscience of mentalizing, we see that almost twenty years of research has converged on the idea that a particular set of brain regions is responsible for representing others’ mental states and intentions. In one of the first functional imaging investigations of mentalizing (Fletcher et al., 1995), participants read short stories and answered questions about those stories that relied on the understanding of either mental states or physical states. Fletcher and colleagues reasoned that the two kinds of stories would be broadly identical except that mental stories necessitate mentalizing, and that contrasting the two story types might reveal activity in brain regions that were specially engaged by this process, and not by other processes common to reading stories in general.
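The subtraction logic that Fletcher and colleagues relied on can be sketched with synthetic numbers. Nothing below is real fMRI data or their actual analysis; it simply shows how a paired contrast between the two story types isolates activity specific to the mental condition.

```python
import numpy as np

# Illustrative sketch of the subtraction logic behind Fletcher et al.
# (1995), using made-up numbers rather than real imaging data.
rng = np.random.default_rng(1)

n_participants = 20
# Simulated mean response in a region of interest for each story type.
# Assumption: mental stories add extra, mentalizing-specific activity
# on top of everything the two story types share (reading, memory, etc.).
physical = rng.normal(1.0, 0.3, size=n_participants)
mental = physical + rng.normal(0.4, 0.3, size=n_participants)

# Paired contrast: mental minus physical, within each participant.
# The shared components cancel, leaving the mentalizing-specific signal.
diff = mental - physical
t_stat = diff.mean() / (diff.std(ddof=1) / np.sqrt(n_participants))

print(f"mean contrast = {diff.mean():.2f}, t({n_participants - 1}) = {t_stat:.2f}")
```

The design choice here mirrors the paper's reasoning: because everything common to reading both story types subtracts out within each participant, a reliable positive contrast points to regions engaged specifically by the mental stories.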
What they found, to their surprise, was that a single region of the medial frontal gyrus was more active when participants read mental stories than when they read physical stories. This region of the brain is part of the prefrontal cortex (on its medial, or central, surface), and is found directly behind the forehead, between and just above the eyes. Until Fletcher and colleagues’ seminal work, little attention had been paid to this region’s function—a brain area which has undergone more development than almost any other in evolutionary terms, being twice as large in humans as in any of our great ape relatives (Semendeferi et al., 2001)—but since this original paper, hundreds of investigations have revealed a role for the medial prefrontal cortex in tasks that involve mentalizing or related social-cognitive phenomena such as deception, moral judgments, and impression formation (Wagner et al., 2012).
The medial prefrontal cortex also forms a central component of the brain’s default mode network (Raichle et al., 2001; Andrews-Hanna et al., 2010a), so called because this network is most metabolically active when we are at rest, unengaged in goal-directed tasks like memory or attention (Shulman et al., 1997). Because of this overlap between the so-called resting state and mentalizing, one theory that naturally arises is that much of what is charitably referred to as ‘rest’ by cognitive neuroscientists may in fact be simulation of our social interactions, internally generated thoughts about our own lives, and so on (Andrews-Hanna et al., 2010b; Tamir and Mitchell, 2011; Whitfield-Gabrieli et al., 2011; Moran, Kelley, and Heatherton, 2013). Indeed, the ongoing mental predictions about how John will react to your news, or whether Sophie will like Jack, are likely just what we default to in the absence of a task, whether lying inside an MRI scanner or going about our daily lives (Wicker et al., 2003).
Interestingly, exactly this medial prefrontal region is the one whose activity preceded subjects’ opting to interpret others’ behaviors as resulting from their dispositions to act. The implication is that our prediction machinery engages automatically when we see people acting, and that to the degree we are thinking about others’ mental states, we are also likely to ignore the constraining influence of situations on behavior. Because research has shown that people with autism (Baron-Cohen, Leslie, and Frith, 1985) and typical older adults (aged 65+; Moran, Jolly, and Mitchell, 2012) represent mental states less than do typical healthy younger adults, the surmised link between mentalizing and the FAE implies that these populations would be less likely to commit the FAE. This prediction has yet to be tested. The idea that mentalizing causes the FAE is still a conjecture for now; we can’t know for sure that medial prefrontal activity caused people to make those attributions, only that it predicted them. Nor can we know for sure that this activation meant that participants were representing mental states. Because fMRI can tell us only how brain activity correlates with behavior, rather than causes it, this technique alone cannot answer the question of whether mental state representation leads to the FAE. To answer that question, scientists could turn to another technique known as transcranial magnetic stimulation (TMS). TMS uses brief magnetic pulses to increase or decrease neuronal activity in specific brain regions. Its most important contribution is in allowing scientists to see what cognitive changes happen when they introduce causal changes in brain activity. Scientists have used this technique to improve people’s ability to retrieve object names (Mottaghy et al., 1999), among other cognitive abilities. Another area where TMS has produced an interesting and provocative result is in moral judgment (Young et al., 2010).
Young and colleagues (2010) applied TMS to a region of the temporoparietal junction that had previously been implicated in representing others’ intentions (Saxe and Kanwisher, 2003). They found that TMS applied over this region reduced reliance on intentions in moral judgments: people who had attempted to harm another were judged less harshly than people who had accidentally harmed another, counter to the standard Western legal model in which a person’s intentions weigh more heavily than the outcomes of their actions.
Perhaps TMS stimulation aimed at reducing medial prefrontal activity would make study participants less likely to commit the FAE, and thus more likely to see the influence of situations. Of course, few people would sign up for an experiment in which the scientists promised to impair their ability to predict and respond to others’ mental states, and herein lies an interesting closing point about mental state representation. Perhaps the very fact that we have evolved to be so hyper-aware of others’ intentions carries with it the ironic cost of finding those intentions in the least useful of places: in the grumpy actions of the waiter who has just lost a parent, in the inconsiderate words of a woman at a paint counter whose house has just been defaced, and in the minds of computers that steadfastly refuse to submit to “Control-P print”. On this basis, we should all be grateful that we can read minds at all, but let’s not forget that understanding one person’s antisocial behavior may be just a different pair of shoes away.
References
Amodio, D. M., & Frith, C. D. (2006). Meeting of minds: The medial frontal cortex and social cognition. Nature Reviews Neuroscience, 7, 268-277.
Andrews-Hanna, J. R., Reidler, J. S., Sepulcre, J., Poulin, R., & Buckner, R. L. (2010a). Functional-anatomic fractionation of the brain's default network. Neuron, 65, 550-562.
Andrews-Hanna, J. R., Reidler, J. S., Huang, C., & Buckner, R. L. (2010b). Evidence for the default network's role in spontaneous cognition. Journal of Neurophysiology, 104, 322-335.
Baron-Cohen, S., Leslie, A. M., & Frith, U. (1985). Does the autistic child have a “theory of mind”? Cognition, 21, 37-46.
Fletcher, P. C., Happé, F., Frith, U., Baker, S. C., Dolan, R. J., Frackowiak, R. S., & Frith, C. D. (1995). Other minds in the brain: A functional imaging study of “theory of mind” in story comprehension. Cognition, 57, 109-128.
Gaesser, G.A. (2002). Big fat lies: The truth about your weight and your health. Carlsbad, CA: Gurze Books.
Gilbert, D. T. (1998a). Ordinary personology. In D. T. Gilbert, S. T. Fiske, & G. Lindzey (Eds.), The handbook of social psychology (4th ed., Vol. 2, pp. 89-150). New York: McGraw-Hill.
Gilbert, D. T. (1998b). Speeding with Ned: A personal view of the correspondence bias. In J. M. Darley & J. Cooper (Eds.), Attribution and social interaction: The legacy of E. E. Jones. Washington, DC: APA Press.
Gilbert, D. T., Pelham, B. W., & Krull, D. S. (1988). On cognitive busyness: When person perceivers meet persons perceived. Journal of Personality and Social Psychology, 54, 733-740.
Jones, E. E. & Harris, V. A. (1967). The attribution of attitudes. Journal of Experimental Social Psychology, 3, 1–24.
Lerner, M. J. (1977). The justice motive: Some hypotheses as to its origins and forms. Journal of Personality, 45, 1-52.
Moran, J. M., Jolly, E., & Mitchell, J. P. (2012). Social-cognitive deficits in normal aging. The Journal of Neuroscience, 32, 5553-5561.
Moran, J. M., Jolly, E., & Mitchell, J.P. (2014). Spontaneous mentalizing predicts the fundamental attribution error. Journal of Cognitive Neuroscience, 26, 569-576.
Moran, J. M., Kelley, W. M., & Heatherton, T. F. (2013). What can the organization of the brain’s default mode network tell us about self-knowledge? Frontiers in Human Neuroscience, 7.
Mottaghy, F. M., Hungs, M., Brügmann, M., Sparing, R., Boroojerdi, B., Foltys, H., Huber, W., & Töpper, R. (1999). Facilitation of picture naming after repetitive transcranial magnetic stimulation. Neurology, 53, 1806-1812.
Raichle, M. E., MacLeod, A. M., Snyder, A. Z., Powers, W. J., Gusnard, D. A., & Shulman, G. L. (2001). A default mode of brain function. Proceedings of the National Academy of Sciences, 98, 676-682.
Ross, L. (1977). The intuitive psychologist and his shortcomings: Distortions in the attribution process. Advances in experimental social psychology, 10, 173-220.
Saxe, R., & Kanwisher, N. (2003). People thinking about thinking people: The role of the temporo-parietal junction in “theory of mind”. NeuroImage, 19, 1835-1842.
Semendeferi, K., Armstrong, E., Schleicher, A., Zilles, K., & Van Hoesen, G. W. (2001). Prefrontal cortex in humans and apes: a comparative study of area 10. American Journal of Physical Anthropology, 114, 224-241.
Shulman, G. L., Fiez, J. A., Corbetta, M., Buckner, R. L., Miezin, F. M., Raichle, M. E., & Petersen, S. E. (1997). Common blood flow changes across visual tasks: II. Decreases in cerebral cortex. Journal of Cognitive Neuroscience, 9, 648-663.
Tamir, D. I., & Mitchell, J. P. (2011). The default network distinguishes construals of proximal versus distal events. Journal of Cognitive Neuroscience, 23, 2945-2955.
Wagner, D. D., Haxby, J. V., & Heatherton, T. F. (2012). The representation of self and person knowledge in the medial prefrontal cortex. Wiley Interdisciplinary Reviews: Cognitive Science, 3, 451-470.
Wesselmann, E. D., Cardoso, F. D., Slater, S., & Williams, K. D. (2012). To be looked at as though air: Civil attention matters. Psychological Science, 23, 166-168.
Whitfield-Gabrieli, S., Moran, J. M., Nieto-Castañón, A., Triantafyllou, C., Saxe, R., & Gabrieli, J. D.E. (2011). Associations and dissociations between default and self-reference networks in the human brain. Neuroimage, 55, 225-232.
Wicker, B., Ruby, P., Royet, J. P., & Fonlupt, P. (2003). A relation between rest and the self in the brain? Brain Research Reviews, 43, 224-230.
Young, L., Camprodon, J. A., Hauser, M., Pascual-Leone, A., & Saxe, R. (2010). Disruption of the right temporoparietal junction with transcranial magnetic stimulation reduces the role of beliefs in moral judgments. Proceedings of the National Academy of Sciences, 107, 6753-6758.