
But is it social? How to tell when groups are more than the sum of their members

Published online by Cambridge University Press: 26 October 2016

Allison A. Brennan
Affiliation:
Department of Psychology, Simon Fraser University, Burnaby, BC V5A 1S6, Canada. allison_brennan@sfu.ca
James T. Enns
Affiliation:
Department of Psychology, University of British Columbia, Vancouver, BC V6T 1Z4, Canada. jenns@psych.ubc.ca

Abstract

Failure to distinguish between statistical effects and genuine social interaction may lead to unwarranted conclusions about the role of self-differentiation in group function. We offer an introduction to these issues from the perspective of recent research on collaborative cognition.

Type: Open Peer Commentary

Copyright © Cambridge University Press 2016

Baumeister et al. argue that self-differentiation is key to understanding why groups sometimes perform better and other times worse than the sum of their individual members. We do not challenge this hypothesis. Instead, we argue that it is important to use the appropriate measures when assessing the influence of social processes on group function. If this is not done, one may conclude that synergistic social effects are at play, when in fact group performance is merely what would be expected from statistical aggregation.

Failures to distinguish social interaction from statistical effects can be seen in studies of the wisdom of crowds and perceptual discrimination. At first blush it may seem remarkable that the average estimate of the weight of an ox by a crowd is more accurate than the best individual estimate (Galton 1907; Surowiecki 2004), and that dyads who communicate their confidence to one another outperform the same persons working alone to detect visual signals (Bahrami et al. 2010). Yet a closer look at these studies reveals that they may not be measuring social synergy, because they compare group performance to a benchmark that does not account for statistical facilitation. Indeed, research has shown that similar effects can be obtained without social collaboration. Wise crowds can be produced by statistical aggregation alone: Averaging multiple estimates increases precision (Soll 1999), even when it is the same individual providing repeated estimates (Herzog & Hertwig 2009; Lewandowsky et al. 2009). Likewise, the signal detection benefit shown by dyads can be replicated by simply selecting the response of the more confident individual (Koriat 2012).
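To make the aggregation point concrete, the following is a minimal simulation sketch of our own, not a reproduction of the cited studies' methods; it assumes only NumPy, and every parameter value (crowd size, noise levels, signal strength) is arbitrary. It shows that averaging many independent, noisy estimates shrinks error, and that a non-interactive rule that simply follows the more confident of two simulated observers raises accuracy, with no communication at all.

```python
# Illustrative sketch: statistical aggregation without social interaction.
# All numbers are arbitrary; this is a toy model, not the cited procedures.
import numpy as np

rng = np.random.default_rng(0)
n_trials = 10_000
true_value = 100.0  # e.g., the true weight of the ox, in arbitrary units

# Wisdom-of-crowds case: 50 independent, equally noisy estimates per trial.
estimates = true_value + rng.normal(0.0, 20.0, size=(n_trials, 50))
err_individual = np.abs(estimates[:, 0] - true_value).mean()
err_average = np.abs(estimates.mean(axis=1) - true_value).mean()

# Confidence-selection case: two simulated observers each draw a signed
# "evidence" sample; following whichever observer is more confident
# (larger absolute evidence) improves accuracy with no interaction.
signal = 0.5
evidence = signal + rng.normal(0.0, 1.0, size=(n_trials, 2))
acc_individual = (evidence[:, 0] > 0).mean()
pick = np.argmax(np.abs(evidence), axis=1)
acc_more_confident = (evidence[np.arange(n_trials), pick] > 0).mean()

print(f"mean abs error, single judge:       {err_individual:.2f}")
print(f"mean abs error, crowd average:      {err_average:.2f}")
print(f"accuracy, single observer:          {acc_individual:.3f}")
print(f"accuracy, more-confident observer:  {acc_more_confident:.3f}")
```

Running this shows the crowd average and the more-confident rule both outperform a lone individual, even though nothing "social" happens during the simulated trials.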

It is therefore critical to distinguish between two potential types of group effects (both benefits and costs): those that accrue as a consequence of aggregating independent correct responses and errors (statistical facilitation) versus those that arise from communicating information or influence between individuals (social interaction). We caution against generalizing the self-differentiation hypothesis without first distinguishing statistical facilitation from social interaction.

We recently made this distinction in a study of cognitive collaboration, an emerging field that aims to understand social influences on visual attention (Böckler et al. 2012), visual perception (Samson et al. 2010), long-term memory (Weldon & Bellinger 1997), and language (Tylén et al. 2010). In our studies (Brennan & Enns 2015a; 2015b), we adapted the race model inequality (RMI; Miller 1982; Ulrich et al. 2007) to compare group performance to a model that accounts for the expected statistical effects of aggregating individual contributions. We demonstrate how this differs from other measures in Table 1.

Table 1. A toy data set of response times on four trials. Values are in seconds.

Some would see the data in Table 1 as evidence of social interaction because the team mean (2.40) is faster than the mean of the fastest individual on each trial (2.50 = (2.00 + 2.00 + 4.00 + 2.00)/4; e.g., Brennan et al. 2008). Others would make the same claim because the team mean (2.40) is faster than the mean of the fastest individual overall (2.75 = Person A; e.g., Bahrami et al. 2010). In contrast, the RMI compares individuals and teams in three steps: (1) All four trials of Persons A and B are combined into a single distribution of eight values. (2) These values are ranked in ascending order. (3) The mean of the fastest four values represents the model of statistical facilitation, the best possible outcome of aggregating the independent contributions of two individuals over four trials. This value is 2.25 ((2.00 + 2.00 + 2.00 + 3.00)/4). Because the team mean (2.40) is slower than this estimate based on statistical facilitation, these data provide no evidence that a collaborative social process occurred during this group task; the conclusion of social interaction is unwarranted. Note that the same logic applies when accuracy is the main measure (Lorge & Solomon 1955).
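The sketch below runs these three steps, along with the two weaker benchmarks, on hypothetical response times chosen only to reproduce the values quoted above (a 2.75 s mean for Person A, a 2.40 s team mean, per-trial minima of 2.00, 2.00, 4.00, and 2.00 s); they are not the actual Table 1 entries, and the team values in particular are invented for illustration.

```python
# Minimal sketch of the three benchmarks described in the text,
# using hypothetical data consistent with the quoted means (seconds).
import numpy as np

person_a = np.array([2.00, 2.00, 4.00, 3.00])  # hypothetical; mean 2.75
person_b = np.array([3.50, 3.50, 4.00, 2.00])  # hypothetical
team     = np.array([2.40, 2.40, 2.40, 2.40])  # hypothetical; mean 2.40

# Benchmark 1: mean of the fastest individual on each trial.
fastest_per_trial = np.minimum(person_a, person_b).mean()   # 2.50

# Benchmark 2: mean of the fastest individual overall.
best_individual = min(person_a.mean(), person_b.mean())     # 2.75 (Person A)

# Benchmark 3 (statistical facilitation, RMI-style): pool all 2 x 4 = 8
# values, rank them, and average the fastest four -- the best outcome
# expected from aggregating two independent performers over four trials.
pooled = np.sort(np.concatenate([person_a, person_b]))
statistical_facilitation = pooled[:4].mean()                 # 2.25

print(f"team mean:                 {team.mean():.2f}")
print(f"fastest per trial:         {fastest_per_trial:.2f}")
print(f"best individual overall:   {best_individual:.2f}")
print(f"statistical facilitation:  {statistical_facilitation:.2f}")
# The team mean (2.40) beats benchmarks 1 and 2 but not benchmark 3,
# so these numbers alone do not license a claim of social interaction.
```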

This logic has consequences for Baumeister et al.'s hypothesis. Although the studies reviewed may hold evidence for social interaction in addition to statistical facilitation, to date most have not used analyses that warrant this interpretation. Consider research on the division of labor, where groups that divide tasks across their members are more efficient than individuals completing the full task (West 1999). Much of this efficiency is likely gained through social interaction, but its true extent is unknown. To determine that, researchers would have to consider the variability of individual task completion times, and assess individual and group efficiency in equal detail, as illustrated in Table 1 and in the sketch following this paragraph. Now consider social loafing research, where groups pulling ropes (e.g., Ringelmann 1913b) are less powerful than the sum of individual member pulls. Because group performance here does not exceed the summed individual estimate, which is even more conservative than the RMI, we can safely assume that social interaction here reduces group performance below the statistical facilitation standard. But, again, we do not know by how much without measuring the distribution of individual efforts with the same rigor used to measure group function.
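One way to take that variability into account, sketched below under purely hypothetical assumptions (the sub-task time distributions and the "observed" group times are invented, and we assume two sub-tasks that can be completed in parallel), is to build a non-interactive benchmark from independently measured individual sub-task times and then ask whether observed group times beat that benchmark distribution, rather than a single summary number.

```python
# Illustrative, invented numbers only: a division-of-labor comparison that
# respects individual variability. The benchmark on each simulated trial is
# what two independent workers would produce with no interaction: each does
# their own sub-task, and the "group" finishes when the slower one does.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Hypothetical individual sub-task completion times (seconds), measured separately.
subtask_a = rng.normal(30.0, 5.0, n)
subtask_b = rng.normal(25.0, 8.0, n)

# Non-interactive benchmark: pair independent draws trial by trial.
benchmark = np.maximum(subtask_a, subtask_b)

# Hypothetical observed group completion times.
observed_group = rng.normal(32.0, 4.0, n)

# Only the margin beyond the benchmark distribution would count as evidence
# for social interaction, over and above statistical aggregation.
print(f"benchmark mean:       {benchmark.mean():.1f} s")
print(f"observed group mean:  {observed_group.mean():.1f} s")
```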

These two possibilities – group effects that are the result of statistical facilitation versus social interaction – have different implications for the self-differentiation hypothesis. Self-differentiation may be relevant for one type of effect, for the other, or for both. For example, if the self-differentiation effect in a given group context is driven by statistical facilitation, then one should focus on how self-differentiation influences individual member contributions. Alternatively, if social interaction is involved, it is important to understand how self-differentiation alters the flow of information and influence from one group member to another. In this case, self-differentiation influences group dynamics, over and above any influence on individual members.

We believe this distinction between social interaction and statistical facilitation is critical to understanding how groups differ from one another and from individual efforts. As such, we provide this commentary as an introduction to methods that can distinguish between these two types of effect. This will allow researchers to home in more accurately on the role of self-differentiation in groups.

References

Bahrami, B., Olsen, K., Latham, P. E., Roepstorff, A., Rees, G. & Frith, C. D. (2010) Optimally interacting minds. Science 329:1081–85.
Böckler, A., Knoblich, G. & Sebanz, N. (2012) Effects of co-actor's focus of attention on task performance. Journal of Experimental Psychology: Human Perception and Performance 38(6):1404–15.
Brennan, A. A. & Enns, J. T. (2015a) What's in a friendship? Partner visibility supports cognitive collaboration between friends. PLoS ONE 10(11):e0143469.
Brennan, A. A. & Enns, J. T. (2015b) When two heads are better than one: Interactive versus independent benefits of collaborative cognition. Psychonomic Bulletin & Review 22:1076–82.
Brennan, S. E., Chen, X., Dickinson, C. A., Neider, M. B. & Zelinsky, G. J. (2008) Coordinating cognition: The costs and benefits of shared gaze during collaborative search. Cognition 106:1465–77.
Galton, F. (1907) Vox populi. Nature 75:450–51.
Herzog, S. M. & Hertwig, R. (2009) The wisdom of many in one mind: Improving individual judgments with dialectical bootstrapping. Psychological Science 20(2):231–37.
Koriat, A. (2012) When are two heads better than one and why? Science 336:360.
Lewandowsky, S., Griffiths, T. L. & Kalish, M. L. (2009) The wisdom of individuals: Exploring people's knowledge about everyday events using iterated learning. Cognitive Science 33(6):969–98.
Lorge, I. & Solomon, H. (1955) Two models of group behavior in the solution of eureka-type problems. Psychometrika 20(2):139–48.
Miller, J. (1982) Divided attention: Evidence for coactivation with redundant signals. Cognitive Psychology 14:247–79.
Ringelmann, M. (1913b) Research on animate sources of power: The work of man. Annales de l'Institut National Agronomique, 2e série, tome XII:1–40.
Samson, D., Apperly, I. A., Braithwaite, J. J., Andrews, B. J. & Scott, S. E. B. (2010) Seeing it their way: Evidence for rapid and involuntary computation of what other people see. Journal of Experimental Psychology: Human Perception and Performance 36(5):1255–66.
Soll, J. B. (1999) Intuitive theories of information: Beliefs about the value of redundancy. Cognitive Psychology 38:317–46.
Surowiecki, J. (2004) The wisdom of crowds: Why the many are smarter than the few and how collective wisdom shapes business, economies, societies, and nations. Anchor.
Tylén, K., Weed, E., Wallentin, M., Roepstorff, A. & Frith, C. D. (2010) Language as a tool for interacting minds. Mind and Language 25(1):3–29.
Ulrich, R., Miller, J. & Schröter, H. (2007) Testing the race model inequality: An algorithm and computer programs. Behavior Research Methods 39(2):291–302.
Weldon, M. S. & Bellinger, K. D. (1997) Collective memory: Collaborative and individual processes in remembering. Journal of Experimental Psychology: Learning, Memory, and Cognition 23(5):1160–75.
West, A. (1999) The flute factory: An empirical measurement of the effect of division of labor on productivity and production. American Economist 43:82–87. doi:10.1177/056943459904300109.