1. Introduction
Much has been written about the nature of scientific inference and its bearing on innovation and discovery. Philosophers and scientists alike have often concluded that creative, original thinking cannot be analyzed: it is prelogical (though not necessarily illogical) and often involves nonverbal representations and procedures. Recent studies have moved away from formal models of inference to focus instead on the models that scientists themselves construct. However, no general features have emerged that could support a philosophical account of modeling. As Morgan and Morrison remark:
When we look for accounts of how to construct models in scientific texts we find very little on offer. There appear to be no general rules for model construction in the way that we can find detailed guidance on principles of experimental design or on methods of measurement. (Morgan and Morrison 1999, 12)
At the high level of abstraction that formal models of theory construction require, this observation is surely true. However, textbooks give misleading accounts of scientific practice. As Kuhn observed many years ago, they are the wrong place to look (Kuhn 1961, 33–37). Model construction is a cognitive skill acquired as a scientist becomes an accomplished participant in the methods that define a particular specialism. Modeling skills are learned by example—by seeing how models are constructed, used, and evaluated (Alac and Hutchins 2004). This sort of knowledge is collective: it involves methods and evaluative criteria that are defined, used, and refined by a group. These methods and values are learned by participating in the practices of the group. Nevertheless, this knowledge is also personal. It must be acquired and mastered by individuals so that they can contribute to the knowledge-producing activity of the group.
Case studies provide important clues about inference making. Everyday human reasoning combines visual, auditory, and other sensory experience with nonsensory information and with verbal and symbolic modes of expression. Scientific reasoning is no different. Scientists use a wide range of images, including photographs, visualizations of phenomena (e.g., sketches, diagrams, plots, and graphs), visual representations of theories about phenomena (such as block diagrams), and models that display structure and connectivity (such as physical models, stacks of plots, and virtual ‘stacks’ of images). Within this variety a number of general features can be discerned. These features point to common strategies that scientists use to define and solve problems, which in turn suggests that the practices invoke underlying human cognitive capacities. These include pattern recognition and the ability to move between two-dimensional, three-dimensional, and four-dimensional representations. Whereas the former is largely automatic, the latter are often intentional strategies that vary the cognitive demands of making image-based inferences. In this paper I provide an account of how visual models mediate between the interpretation of source data and the explanation of such data. I then consider how this relates to the distributed cognitive systems model of knowledge production.
2. Visualization and Cognition: Six Generalizations
The visual representations that scientists use display the following features:
1. Representations are usually hybrid, combining one or more of visual, verbal, numerical, or symbolic modes of representation. Examples include block diagrams in geology, camera lucida diagrams of fossil imprints, and maps. When a visual representation is not hybrid, it does not display an interpretation. Examples (shown in Figures 1 and 2) include x-ray diffraction photographs (Gooding 2004b) and images of fossil imprints (Gooding 2004a).
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20210920064928237-0036:S003182480000475X:S003182480000475X-fg1.png?pub-status=live)
Figure 1. Left, W. L. Bragg's photograph of the x-ray diffraction pattern produced by a simple crystal. Bragg's sketch of the experimental setup (Bragg 1913a, Figure 3). Center, The set of labeled points made by pricking through the photograph onto paper (Royal Institution, Bragg MS WLB86, courtesy of the Royal Institution of Great Britain, http://www.ri.ac.uk). Right, The projection diagram generated from these points (Bragg 1913b, Figure 4).
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20210920064928237-0036:S003182480000475X:S003182480000475X-fg2.png?pub-status=live)
Figure 2. Photo of an imprint for the arthropod Sidneyia inexpectans and corresponding camera lucida diagram, from Bruton (1981, Figures 29, 31); used by permission of the Royal Society (http://www.royalsoc.ac.uk).
2. Representations are often multimodal, combining information derived from different sources that invoke different sensory modalities (Ziman 1968, 48; Gooding 1990; Tversky 1998). This is why scientists design surrogate sensors and computational systems to present information in a form that lends itself to human interpretation. Where knowledge representations cannot be integrated in this way, communication between research groups may be hindered, as in high-energy physics prior to the development of powerful data-driven process-visualization technologies (Galison 1997).
3. Representations are plastic; that is, they are easy to vary. Variation is often exploratory, playful, and opportunistic. This is an important source of new insights and possibilities. Fine-grained studies are a rich source of examples here (Tweney 1992; Nersessian 2005). Variations arise both from mental operations by individuals and during communicative exchanges between individuals and groups (Galison 1997; Henderson 1999).
4. Variation of representations often takes the form of transformations between 2-D forms (patterns and diagrams), 3-D forms (structures), and 4-D temporal or process representations. A diagrammatic abstraction from a photograph of a fossil or an x-ray, or of bubble chamber tracks, moves the eye and the mind from a barely interpreted visual source to a meaningful word-image construct. Such ‘moves’ are motivated by the desire to understand and to communicate that understanding. These motivations are a property of individuals, not of systems.
5. Representational plasticity is constrained as scientists develop shareable methods and technologies that govern the manipulation of mental images and drawn sketches, and as they bring theory to bear on the interpretation of the transformed images.
Plasticity is reduced by transformation rules. These may be articulated verbally or they may be embodied in techniques and technologies. For example, in order to interpret 2-D x-ray diffraction images, W. L. Bragg devised geometrical methods for developing a 3-D model from diagrams of the 2-D images. These stereographic projections could locate the planes of a crystal lattice in which the atoms diffracting the rays lie (see Figure 1). Together with analogies between optical and x-ray diffraction, these 3-D models provided an explanation, in terms of crystal structure, of the distribution and sizes of the spots and smudges found in early x-ray diffraction images. Similarly, in paleobiology, 3-D structures are constructed as interpretations of 2-D diagrams of photographs of fossil imprints (Figure 2). At first, 3-D construction is done informally and mentally; it is then enabled and disciplined by procedures such as optical projection techniques (Briggs and Williams 1981). These make 2-D sectional shadows of 3-D models for comparison to the diagrams of the source imprints (Figure 3). Finally, computational methods are devised to make transformations between 2-D and 3-D fossil representations (Doveton 1979). When a structural model (Figure 3) generates 2-D projections that match features of the diagram (Figure 2), it forms the basis for an explanation of the source imprints (Bruton and Whittington 1983); a minimal sketch of this projection-and-comparison logic follows the list below.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20210920064928237-0036:S003182480000475X:S003182480000475X-fg3.png?pub-status=live)
Figure 3. Left, Graphical reconstruction. Drawing of 3-D sectional model of Sidneyia. Right, Physical reconstruction. View of physical model of Sidneyia, from Bruton (1981, Figures 107, 101); used by permission of the Royal Society (http://www.royalsoc.ac.uk).
6. Examples such as these help clarify what it means to say that representations are distributed.
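The projection-and-comparison move described in item 5 above can be sketched computationally. The following Python fragment is only an illustrative abstraction under stated assumptions: it is not the optical projection technique of Briggs and Williams (1981) or the computational methods of Doveton (1979), and the point sets are invented; it simply shows how a 2-D ‘shadow’ of a 3-D structural model might be scored against a diagram of a source imprint.

```python
# Illustrative sketch only: the bare logic of testing a 3-D structural
# model against a 2-D diagram, not any method from the cited literature.
import math

def project_to_plane(points_3d):
    """Orthographic projection: drop the z coordinate to obtain a 2-D 'shadow'."""
    return [(x, y) for x, y, _z in points_3d]

def match_score(projected, diagram, tolerance=0.5):
    """Fraction of diagram features lying within tolerance of some projected
    model point; a crude stand-in for visual comparison of the two images."""
    hits = sum(1 for q in diagram if any(math.dist(p, q) <= tolerance for p in projected))
    return hits / len(diagram)

# Hypothetical data: a toy 3-D model and a toy 2-D diagram of an imprint.
model_points = [(0.0, 0.0, 1.2), (1.0, 0.1, 0.8), (2.0, 0.0, 0.5)]
imprint_diagram = [(0.1, 0.0), (1.0, 0.0), (2.1, 0.1)]

score = match_score(project_to_plane(model_points), imprint_diagram)
print(f"model accounts for {score:.0%} of the diagram's features")
```

The point of the sketch is structural: the 3-D model earns its explanatory status only by generating 2-D projections that can be compared, feature by feature, with the diagram abstracted from the source photograph.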
3. Distributed Representations
There are three ways in which a knowledge-bearing representation is distributed:
1. Its construction and use by people involve devices or machines, so it is distributed between minds and machines. Examples include visual computer displays of large numerical data sets such as functional magnetic resonance imaging scan images (Beaulieu 2001), field-intensity plots in geophysics (Heirtzler 1968; Vine 1968), and computer-generated physical replicas of bones in osteoarchaeology (Lynnerup et al. 1997).
2. It is a hybrid, mental-material object used to enable visual-tactile thinking (Baird 2004) or to guide some procedure, as in performing a mathematical operation. Examples abound: the image-projection procedures used by paleobiologists (see above) and, in virology, the manipulation of x-ray images of viral particles and models of particles in various orientations to create 2-D images of 3-D viral structures (Lauffer and Stevens 1968). Other well-known examples of object-based thinking are the physical mnemonics devised by Faraday to fix and communicate his interpretation of electromagnetic phenomena (Gooding 1990).
3. It represents knowledge that is produced and held in many different ways (e.g., by people, machines, and organizations) and at different levels of relationship—from human-object and human-machine dyads (Giere 2004) to systems for directing and organizing research (Knorr-Cetina 1999). Such knowledge production also relies on control systems to ensure that existing resources are available where needed and that new knowledge is passed to those able to evaluate and use it (Hutchins 1995). The knowledge represented does not reside in any one of the contributing sources or elements of the system.
Established, public knowledge is distributed in all three ways. It relies on stable, shared representational practices and demands confidence both in the expertise and competence of other practitioners and in the reliability of machine-based procedures. The visualization examples show the importance both of variety in representation and of moving between different kinds of representation (internal and external), and they show that these features are interdependent. Scientists are well aware that a single representation is rarely, if ever, adequate to the task of describing the phenomenology even of a tightly confined domain. Varied representations that capture different aspects of the phenomenon are vital to the creation of new ideas. According to the psychologist Howard Gruber, altering the modality of a representation is a means of discovering invariant properties. By moving “from visual imagery, to sketches, to words and equations explaining (i.e., conveying the same meaning as) the thinker is pleased to discover that certain structures remain invariant under these transformations: these are his ideas” (Gruber 1994, 410–411). Although visual thinking is not the only approach to modeling, most modeling involves a visual element (de Chadarevian and Hopwood 2004). So the study of visualization has much to tell us about modeling in science.
The key aims of most sciences are to capture process and the invariant features of change, and to use the latter to explain the former. Scientists constantly attempt to escape the limitations of static, printed representations such as plots, state descriptions, and images by producing ones that can convey process as well as structure. This helps explain the ubiquitous role of visualizations. Although printed images are static, they can convey information in a form that humans can manipulate by mental transformation. This conjecture is supported by studies showing that scientists frequently vary the information content or capacity of their representations (Gooding 2003). This complex of transformations is summarized diagrammatically in Figure 4.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20210920064928237-0036:S003182480000475X:S003182480000475X-fg4.png?pub-status=live)
Figure 4. Visual inference. The diagram shows typical transformations involved in pattern recognition ($A\rightarrow B$), generating a structural interpretation of a pattern ($B\rightarrow C$), using the structure to infer a process ($C\rightarrow D$), and validating the structure as a partial explanation of the pattern ($D\rightarrow B$).
The transformation of images can run both ways, depending on whether there is a need to reduce or increase the complexity and content of a representation. Moves from pattern to structure (arc $B\rightarrow C$) and from structure to process (arc $C\rightarrow D$) generate visual models having greater information content than their sources. Moves from unresolved phenomena to a set of features or relationships (a pattern or diagram), from process to structure, or from structure to pattern (arcs $A\rightarrow B$, $C\rightarrow B$, $D\rightarrow C$, and $D\rightarrow B$) generate new visualizations that usually have less information content but greater explanatory power than their sources.
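To make the cycle of Figure 4 concrete, the following Python sketch encodes the arcs as a small directed graph. It is an illustrative abstraction only: the node labels A-D and the annotations simply restate the transformations and information-content claims described above, and nothing in it is drawn from a formal model in the source.

```python
# Illustrative sketch: the visual-inference cycle of Figure 4 as a directed
# graph. Node names and annotations restate the arcs described in the text.

NODES = {
    "A": "unresolved phenomenon (source image)",
    "B": "pattern (diagram of selected features)",
    "C": "structure (2-D or 3-D structural model)",
    "D": "process (temporal or causal model)",
}

ARCS = {
    ("A", "B"): "less content, more explanatory power",
    ("B", "C"): "greater information content",
    ("C", "D"): "greater information content",
    ("C", "B"): "less content, more explanatory power",
    ("D", "C"): "less content, more explanatory power",
    ("D", "B"): "less content; validates structure against pattern",
}

def trace(route):
    """Print each step of a route through the cycle with its annotation."""
    for src, dst in zip(route, route[1:]):
        print(f"{src} -> {dst}: {NODES[dst]} [{ARCS[(src, dst)]}]")

# The canonical loop: recognize a pattern, build a structure,
# infer a process, then project back to validate the pattern.
trace(["A", "B", "C", "D", "B"])
```

Read as a loop, the sketch restates the point of the text: the moves toward D add information content, while the returns toward B trade content for explanatory power.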
4. Distributed Cognitive Systems
The distributed character of visualizations makes a point of contact between studies of visualization and the study of cognitive systems. Such systems emerged through the application of science to industrial production in the nineteenth and twentieth centuries and, during World War II, through the application of industrial methods and systems to scientific research. To develop new technologies, wartime projects also developed new approaches to the integration of humans and machines in large-scale knowledge-production systems (Pickering 1995). The knowledge produced by them is distributed in each of the three ways defined in Section 3. Such systems combine different kinds of objects and entities—mental, verbal, visual, numerical, and symbolic representations; material technologies, designs, plans, and institutions—to manage production and regulate output (Goodwin 1995; Hutchins 1995).
Studies of visual thinking bear on our understanding of science as a distributed system in three ways:
1. They provide information about how people use techniques and technologies to manipulate images and to communicate with and about them. The generation and use of visual images in the examples (Figures 1–3) undermine the distinction between ‘internal’ and ‘external’ representations. In a truly hybrid cognitive system there is no dualism of subjective and collective knowledge (pace Knorr-Cetina 1999, 25). Consider the Hubble telescope as a knowledge-producing system in which teams of scientists interpret computer-generated images. Although the most important representations appear to be the external representations on the computer screens (Giere 2004), these technology-based images are no more important than visual mental images. The process culminates in evaluating the implications of each Hubble image for knowledge claims about galaxies that are 13 billion years old. Such evaluation cannot be made without engaging mental processes. The fact that image manipulation is accomplished by mental as well as object-based methods does not prevent it from being a collective process. Generally, mental representations do become less important as scientists develop the external representations (sketches, doodles, diagrams, on-screen images) and shared practices that enable communication and collective reasoning, but this reasoning is about possibilities that emerge from mental transformations effected by individuals as well as machines. Finally, science is open to the possibility that new information will force a reevaluation of what external representations are supposed to represent. How could such representations be evaluated without recourse to mental processes? Interpretative and evaluative judgments require a type of expertise that we have failed to externalize in machines (Collins and Kusch 1998).
2. Studies of visual thinking show that representations are neither cognitively nor socially neutral. My interpretation of the different types of manipulation of visual representations is that scientists vary the complexity of representations to suit different cognitive capacities. Consider, for example, an arithmetic calculation done by mental arithmetic, with an abacus, with pencil and paper, and by a pocket calculator. Each combination of representation and procedure invokes a different set of cognitive capacities. The image transformations indicated in Figure 4 correspond to changes in the degree of complexity that is needed at different points in the discovery process. At some points in the process, pattern recognition is all-important, while at others more complex processes come to the fore, such as making structural and process models to interpret patterns. At other times, techniques are devised to discipline and normalize the transformations involved in generating visual models. Finally, theoretical considerations are invoked to evaluate and validate these models. Besides engaging different cognitive capacities, modifying a representation and changing the mode of representation also open up new ways of solving a problem. Some of these are more compatible with exemplary methods or with socially preferred traditions (e.g., of iconography) than others.
3. These points turn Latour's argument for the irrelevance of cognitive processes on its head. Latour argued that human cognition must be irrelevant to a theory of science (Latour 1986, 1) because while science has changed rapidly since it emerged over four centuries ago, no new form of rational human cognition has appeared. Studies of visualization demonstrate the recurrence of certain cognitive strategies across scientific domains and through time, notwithstanding the enormous changes in the power and sophistication of the technologies and organization of science.
5. Personal Knowledge and Distributed Knowledge
What has changed is the extent to which cognition is mediated and enabled by technology. Science involves devising technologies to assist thinking that is both creative and systematic. The fact that vast data sets can be visualized does of course make some kinds of research easier and has made many new kinds of science possible. These technologies extend analog modes of reasoning into new domains and enable scientists to apply them to more complex problems than was previously possible. Nevertheless, scientists still study images and manipulate 3-D simulation models, for example, in order to locate drug receptor sites (Catlow 1996). That these images are generated and manipulated in computers rather than in human minds does not diminish the importance of mind-based representations. Like the imaging technologies that preceded them, these tools re-present data in a form that individual humans can interpret and understand. Scientists have always devised such tools to aid, discipline, and communicate their thinking. While computing power supports larger and more complex cognitive systems, it has not diminished the importance of mind-based thinking with analog representations.
This does not relocate cognition ‘inside’ the head. Many cases of cognition in science depend crucially on being embodied and networked and so involve physical and social processes ‘outside’ the brain and body. What any individual scientist imagines, thinks, or believes that she knows is important and interesting only insofar as it draws on and contributes to a larger, collective enterprise. The common currency of that enterprise—discourse, images, arguments, articles, software, technologies, mathematical procedures—is external and is distributed. However, there is no case for assimilating all cognition to the material, technical, and cultural aspects of systems. The sociological assimilation of all knowledge to social relations and cultural traditions makes it difficult to explain how the larger, distributed system can deal with change or produce innovations. Although it is possible to devise systems that manage innovation, as, for example, Thomas Edison did (Carlson and Gorman 1990), the objectives of such systems exist in the minds of those who commission, design, organize, and manage their operation.