
Accurate perceptions do not need complete information to reflect reality

Published online by Cambridge University Press:  22 March 2017

Shabnam Mousavi
Affiliation:
The Johns Hopkins Carey Business School, Washington, D.C. 20036. shabnam@jhu.edu http://carey.jhu.edu/faculty-research/directory/shabnam-mousavi-phd/
Center for Adaptive Behavior and Cognition, Max Planck Institute for Human Development, 14195 Berlin, Germany. mousavi@mpib-berlin.mpg.de https://www.mpib-berlin.mpg.de/en/staff/shabnam-mousavi
David C. Funder
Affiliation:
Department of Psychology, University of California, Riverside, Riverside, CA 92521. david.funder@ucr.edu http://www.psych.ucr.edu/faculty/funder/

Abstract

The social reality of a group emerges from interpersonal perceptions and beliefs put into action under a host of environmental conditions. By extending the study of fast-and-frugal heuristics, we view social perceptions as judgment tools and assert that perceptions are ecologically rational to the degree that they are adapted to the social reality. We maintain that the veracity of both stereotypes and base rates, as judgment tools, can be determined solely by accuracy research.

Type
Open Peer Commentary
Copyright
Copyright © Cambridge University Press 2017 

Jussim (2012) argues that social perceptions about individual members of a group often reflect objective social reality (p. 19), and that evaluation of social perceptions requires testing their accuracy against empirical data. From a scientific point of view, his argument is downright anodyne, but in the current research zeitgeist it can and often does come off as radical. The stated goal of much social psychological research is to identify shortcomings in judgment that create misperceptions of members of disadvantaged groups, and even (as in the case of "self-fulfilling prophecies") may exacerbate their objective disadvantages. Jussim's thesis is that scientific research needs to rise above mere advocacy, objectively examine the degree to which judgments are and are not accurate in realistic settings, and measure rather than assume the consequences of these judgments.

Jussim is hence irritated and puzzled by objections to the usefulness of accuracy research in social and cognitive psychology. He provides three reasons why most research emphasizes error over accuracy, and sometimes even ignores the very possibility of accurate judgment. First, some researchers surrender to the appeal of seemingly dramatic results from lab studies of errors and biases, without assessing how these results apply to real-world contexts. As Jussim puts it: "But, metaphorically, does man really bite dog more often than man walks dog (i.e., do error and bias dominate accuracy)? Maybe so, but the only way we will ever find out is by conducting both error/bias research and accuracy research" (p. 152). Second, the "intellectual imperialism" that demands all research address process models while neglecting the content of what is judged has shifted research focus from assessing accuracy to the why, where, and how of presumed inaccuracy. This shift of focus is attractive to many researchers because it allows politically incorrect views to be targeted as the cause of social maladies, protecting researchers from any accusation that they are "blaming the victims" (p. 153). Third, unwarranted extensions of Gage and Cronbach's (1955) demonstration of statistical complications associated with methods of assessing accuracy led many to incorrectly conclude that accuracy research had hit a dead end.

We agree with Jussim that social perceptions are more often accurate than not and that the imperialism of the "error paradigm" has led to a widespread, distorted view of human judgment (Funder 1987; 1995a; Krueger & Funder 2004). We further observe that the stance taken by error/bias studies with respect to accuracy research is rooted in upholding the narrow notion of rational expectations. In contrast, ecological rationality (Gigerenzer 2005; Gigerenzer & Todd 2008; Mousavi & Gigerenzer 2011) provides a fruitful framework for a holistic study of human judgment. In a manner inconsistent with rational expectations but highly efficient, people seek confirmatory information and ignore some relevant information while simultaneously asking diagnostic questions (p. 117), and interestingly end up with functionally accurate perceptions.

The study of the ecological rationality of fast-and-frugal heuristics (Neth & Gigerenzer 2015; Todd et al. 2012) thus offers a framework within which the accuracy of social perceptions can be understood. Fast-and-frugal heuristics are efficient rules that produce usually accurate judgments on the basis of incomplete and uncertain information – and in the real world, information is always incomplete and uncertain to some extent. Ecological rationality consists in a match between a heuristic strategy and the environment in which it is used (Gigerenzer et al. 1999). Superimposing this framework on social perceptions as judgment tools implies a basic operational definition: a perception is ecologically rational to the degree that it is adapted to the social reality. An ecologically rational perception generates good judgment most of the time. When beliefs are accurate, efforts to change those beliefs will not resolve any social problems; most likely, such efforts will hinder the diagnosis of the true causes of those problems and initiate a cascade of further incorrect judgments. Once this is acknowledged, intervention efforts can be correctly channeled to combat the real rather than putative causes of social problems.
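To make the framework concrete, here is a minimal sketch of one well-known fast-and-frugal heuristic, take-the-best (Gigerenzer et al. 1999): it consults cues in descending order of validity and lets the first cue that discriminates decide, ignoring all remaining information. The cue names, their ordering, and the example data below are invented for illustration, not taken from any study.

```python
# A minimal sketch of the take-the-best heuristic (Gigerenzer et al. 1999).
# Cues are consulted in descending order of validity; the first cue that
# discriminates between the two options decides. All cue names and data
# here are hypothetical, chosen only to illustrate the mechanism.

CUE_ORDER = ["cue_a", "cue_b", "cue_c"]  # assumed validity ranking (invented)

def take_the_best(option_x, option_y, cue_order=CUE_ORDER):
    """Return the option favored by the first discriminating cue,
    or None if no cue discriminates (the judge then guesses)."""
    for cue in cue_order:
        x_val = option_x.get(cue)  # 1 = cue present, 0 = absent, None = unknown
        y_val = option_y.get(cue)
        if x_val is not None and y_val is not None and x_val != y_val:
            return option_x if x_val > y_val else option_y
    return None  # no cue discriminates: fall back to guessing

# Example: inferring which of two options scores higher on some criterion.
option_1 = {"name": "Option 1", "cue_a": 1, "cue_b": 0, "cue_c": 1}
option_2 = {"name": "Option 2", "cue_a": 1, "cue_b": 1, "cue_c": 0}
winner = take_the_best(option_1, option_2)
print(winner["name"] if winner else "guess")  # cue_a ties; cue_b decides: Option 2
```

The rule ignores most of the available information (cue_c is never consulted), yet in environments where cue validities are sufficiently skewed, such one-reason decision making can match or outperform models that weigh all cues; that fit between rule and environment is what ecological rationality denotes.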

In this spirit, we second Jussim's endorsement of Kelly's (1955) notion of "people as naïve scientists" who use the uncertain and incomplete information available to them to build probabilistic beliefs about the nature of their social world. This notion builds on the Brunswikian account (Brunswik 1952) of accurate perception as requiring one to choose, from the wide array of cues available in any setting, the ones that are actually relevant to or diagnostic of the attribute being judged (p. 146). In situations where social reality is inherently unspecifiable because of irreducible uncertainty, approximations and heuristics such as stereotypes provide the flexibility needed for making judgments that are good enough for practical purposes.
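In Brunswikian terms, a cue is diagnostic to the extent that, when it discriminates between options, it points to the correct one. A toy computation of cue validity in this sense (our sketch; the observation data are invented) might look like this:

```python
# Toy illustration of Brunswikian cue validity (our sketch, invented data):
# validity = P(cue points to the correct option | cue discriminates).

def cue_validity(pairs):
    """pairs: (cue_favors_x, x_is_correct) for comparisons where the cue discriminates."""
    hits = sum(1 for cue_favors_x, x_is_correct in pairs if cue_favors_x == x_is_correct)
    return hits / len(pairs)

# Hypothetical record of ten discriminating comparisons for one cue:
observations = [(True, True), (True, True), (False, False), (True, False),
                (False, False), (True, True), (False, True), (True, True),
                (False, False), (True, True)]
print(cue_validity(observations))  # 0.8: the cue is diagnostic 80% of the time
```

A perceiver who ranks cues by such validities and relies on the most valid ones is doing informally what the take-the-best sketch above does explicitly.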

Although Jussim agrees that everyone is subject to a mild level of naïve realism (assuming that one's own judgment, belief, or perception is correct), he forcefully disagrees that this naiveté dominates social perceptions (p. 14). We posit that the social reality of a group emerges from interpersonal perceptions and beliefs put into action under a host of environmental conditions. In doing so, we join Jussim in rejecting the unjustified notion that interpersonal expectations powerfully create their own reality (pp. 76, 83). Even though social reality is a multidimensional phenomenon, specific characteristics of individuals and their groups can be teased out, studied, and documented to constitute the elements of the corresponding social reality. A phenomenon widely studied in this area is the stereotype, classically viewed as a biased expectation (p. 66). Jussim offers a compelling account of research on stereotypes and points out that their accuracy often goes unassessed and their influence on judgment is often exaggerated. However, he overlooks a key paradox in this area: social cognition research prominently examines two effects concerning beliefs about groups, and the two effects are antagonistic.

It is important to recognize that a stereotype is a psychological construct; specifically, it is a belief about the properties of a category or group. In this way, it is exactly the same thing as a "base rate," which as a psychological construct is also a belief about the properties of a category or group (Funder 1995b). This is where the paradox arises: A vast body of research, much of which is cited by Jussim, finds (or at least claims) that stereotypes are overused to the point that properties of individuals become unfairly ignored. But another body of research, pioneered by Kahneman and Tversky and almost as large, finds (or at least claims) that base rates are underused to the point where they are completely overwhelmed by salient properties of individual cases or persons (Tversky & Kahneman 1982).

How is this paradox maintained? For one, the two effects are rarely talked about in the same breath: although both are covered in every social cognition textbook, they are described in different terms and safely segregated from each other in different chapters. Another means is the way research is conducted. In research on stereotypes, the categorical belief is typically held to be wrong; as a result, any use of this information whatsoever will tend to make the resulting judgment less accurate. In research on base-rate neglect, the base rate is unquestioningly deemed correct. When Tversky and Kahneman (1982) tell you how many red and green taxicabs are in the city, the information is taken to be dead certain; as a result, any failure to use this information fully will tend to make the resulting judgment less accurate. The final result, therefore, is that the two seemingly contradictory bodies of research can and do yield findings congruent with Jussim's general theme: beliefs about categories are used in judgment to some degree, but properties of individual exemplars are influential too. It is just that when this belief is called a "stereotype," the conclusion reached is that it is tragically overused, whereas when it is called a "base rate," the conclusion reached is that it is woefully underused. In both cases, of course, the overall conclusion is that people are inaccurate.
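The taxicab problem makes the asymmetry easy to state numerically. In the standard version, 15% of the city's cabs belong to the minority company and the witness identifies cab colors correctly 80% of the time; because the base rate is stipulated to be true, Bayes' rule yields the normative answer, and any shortfall from it counts as neglect. A short computation (ours, reproducing only the standard textbook numbers):

```python
# Worked Bayes computation for the classic taxicab problem
# (Tversky & Kahneman 1982), using the standard textbook numbers:
# 15% of cabs belong to the minority company; the witness is 80% reliable.

base_rate_minority = 0.15  # prior: share of cabs from the minority company
base_rate_majority = 0.85
witness_accuracy = 0.80    # P(witness says "minority" | cab is minority), etc.

# P(cab is minority | witness says "minority"), by Bayes' rule
evidence = (witness_accuracy * base_rate_minority
            + (1 - witness_accuracy) * base_rate_majority)
posterior = witness_accuracy * base_rate_minority / evidence

print(f"P(minority | report) = {posterior:.2f}")  # ~0.41, not 0.80
```

Answering 0.80 (the witness's reliability) rather than 0.41 is the canonical demonstration of base-rate neglect; note that the computation is normative only because the base rate is stipulated to be certain, which is exactly the assumption that stereotype research reverses.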

To conclude, we revisit an example from Jussim's book to bring together our two points of discussion: the neglected common ground between separately studied phenomena such as stereotypes and base rates, and the potential of ecological rationality research as a framework for a more holistic approach to the study of human beliefs, perceptions, and judgments.

Let's say that Ben believes Joe is hostile. This "objection" focusing on the accuracy of explanations [as opposed to accuracy of perceptions] leads to at least four different questions: (1) Is Ben right? (2) What is Ben's explanation for Joe's hostility? (3) If Joe is hostile, how did he get that way? (4) Why does Ben believe Joe is hostile? Providing an answer to one question provides no information about the others. (Jussim 2012, p. 159)

The answer to question (2) might be provided by referring to a stereotype, and the answer to question (4) could be viewed as a case of the base-rate fallacy. Nonetheless, both fall in the realm of explanation rather than verification, which is the concern of question (1). Whereas question (1) requires empirical investigation of accuracy, the other questions have led to research programs that are not directly concerned with accuracy. The question that the study of the ecological rationality of judgment rules raises is whether the separation of "ought" from "is" in the study of human judgment can be meaningfully maintained. The answer it implies is a resounding "no" (Gigerenzer & Todd 2012).

The fact that generally accurate judgment of reality does not require gathering complete or certain information allows stereotypes and base rates to be seen as structurally similar phenomena in the study of human judgment, where both are simply beliefs held about a social group: they should be employed to the extent they are accurate, and ignored to the extent they are not. Beliefs held by people about social groups are not necessarily completely wrong, as the dominant definition of stereotype suggests or even assumes; equally, base rates are simply beliefs held about the properties of a group. Lab experiments focused on demonstrating the base-rate fallacy do not necessarily indicate whether using base rates more strongly in real life would improve or harm the accuracy of social judgment. In both cases, the critical question is whether the belief – whether called a stereotype or a base rate – is correct. And that is an empirical question, one largely neglected in social psychology but to which, Jussim argues, renewed attention should be paid. Such research might not be as attention-getting as claims that biases overwhelm human judgment, or that social realities are manufactured out of nothing by human misperceptions. But it will gather the information needed to help people make better, more accurate judgments in the future and, in the long run, be the surer path to alleviating social ills.
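The point that a categorical belief should be used exactly to the extent it is accurate can be illustrated with a toy simulation (ours, not Jussim's; all rates and validities are invented). A judge combines a noisy individual cue with a believed base rate via Bayes' rule; the same reliance on the belief raises accuracy when the belief matches the group's true rate and lowers it when it does not:

```python
# Toy simulation (ours, with invented numbers): a judge decides whether
# each person has some trait, combining a noisy individual cue with a
# categorical belief used as a Bayesian prior. The same reliance on the
# belief helps when it matches the group's true rate and hurts otherwise.
import random

random.seed(1)

TRUE_RATE = 0.1      # actual prevalence of the trait in the group
CUE_VALIDITY = 0.7   # P(cue signals the trait | person has it), symmetric

def judge(cue, believed_rate):
    """Posterior P(trait | cue) under the believed base rate; say 'trait' if > 0.5."""
    if cue:
        num = CUE_VALIDITY * believed_rate
        den = num + (1 - CUE_VALIDITY) * (1 - believed_rate)
    else:
        num = (1 - CUE_VALIDITY) * believed_rate
        den = num + CUE_VALIDITY * (1 - believed_rate)
    return num / den > 0.5

def accuracy(believed_rate, n=100_000):
    correct = 0
    for _ in range(n):
        has_trait = random.random() < TRUE_RATE
        cue = has_trait if random.random() < CUE_VALIDITY else not has_trait
        correct += judge(cue, believed_rate) == has_trait
    return correct / n

print(accuracy(believed_rate=0.5))  # flat prior (belief ignored): ~0.70
print(accuracy(believed_rate=0.1))  # accurate belief:             ~0.90
print(accuracy(believed_rate=0.9))  # inaccurate belief:           ~0.10
```

Under these invented numbers, ignoring the category costs accuracy when the belief is true, and relying on it is catastrophic when the belief is false; which situation obtains in any real setting is precisely the empirical question Jussim presses.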

In short, where "Man bites dog!" makes for a sexy headline, scientific attention seems to benefit from a nudge towards the more humble but important occurrence of "Man walks dog."

References

Brunswik, E. (1952) The conceptual framework of psychology. University of Chicago Press.
Funder, D. C. (1987) Errors and mistakes: Evaluating the accuracy of social judgment. Psychological Bulletin 101:75–90.
Funder, D. C. (1995a) On the accuracy of personality judgment: A realistic approach. Psychological Review 102:652–70.
Funder, D. C. (1995b) Stereotypes, base rates, and the fundamental attribution mistake: A content-based approach to judgmental accuracy. In: Stereotype accuracy: Toward appreciating group differences, ed. Jussim, L., Lee, Y.-T. & McCauley, C., pp. 141–56. American Psychological Association.
Gage, N. L. & Cronbach, L. J. (1955) Conceptual and methodological problems in interpersonal perception. Psychological Review 62:411–22.
Gigerenzer, G. (2005) I think, therefore I err. Social Research 72:195–218. (Reprinted in Psychologica 2006:93–110.)
Gigerenzer, G. & Todd, P. M. (2008) Rationality the fast and frugal way: Introduction. In: Handbook of experimental economics results: Vol. 1 (Handbooks in Economics No. 28), ed. Plott, C. R. & Smith, V. L., pp. 976–86. North-Holland.
Gigerenzer, G. & Todd, P. M. (2012) Ecological rationality: The normative study of heuristics. In: Ecological rationality: Intelligence in the world, ed. Todd, P. M., Gigerenzer, G. & the ABC Research Group, pp. 487–97. Oxford University Press.
Gigerenzer, G., Todd, P. M. & the ABC Research Group (1999) Simple heuristics that make us smart. Oxford University Press.
Jussim, L. (2012) Social perception and social reality: Why accuracy dominates bias and self-fulfilling prophecy. Oxford University Press.
Kelly, G. A. (1955) The psychology of personal constructs: Vol. I. A theory of personality. W. W. Norton.
Krueger, J. I. & Funder, D. C. (2004) Towards a balanced social psychology: Causes, consequences, and cures for the problem-seeking approach to social behavior and cognition. Behavioral and Brain Sciences 27(3):313–27.
Mousavi, S. & Gigerenzer, G. (2011) Revisiting the "error" in studies of cognitive errors. In: Errors in organizations, ed. Hofmann, D. A. & Frese, M., pp. 97–112. Taylor & Francis.
Neth, H. & Gigerenzer, G. (2015) Heuristics: Tools for an uncertain world. In: Emerging trends in the social and behavioral sciences, ed. Scott, R. & Kosslyn, S., pp. 1–18. Wiley. Available at: http://doi.org/10.1002/9781118900772.etrds0394
Todd, P. M., Gigerenzer, G. & the ABC Research Group (2012) Ecological rationality: Intelligence in the world. Oxford University Press.
Tversky, A. & Kahneman, D. (1982) Evidential impact of base rates. In: Judgment under uncertainty: Heuristics and biases, ed. Kahneman, D., Slovic, P. & Tversky, A., pp. 153–60. Cambridge University Press.