1. Introduction
Network analysis is a field of graph theory dedicated to describing the organization of pairwise relations. It provides a set of concepts for characterizing kinds of networks (e.g., small-world or random networks) and for discovering and describing organization in systems with many densely connected components.
Some philosophers describe network models as providing noncausal or nonmechanistic forms of explanation. Lange (2013) and Huneman (2010), for example, argue that network models offer distinctively mathematical (and topological) explanations (sec. 3). Levy and Bechtel (2013) argue that network models describe abstract causal structures in contrast to detailed forms of causal explanation (sec. 4). Rathkopf (2015) distinguishes network explanations from mechanistic explanations because network models apply to nondecomposable systems (sec. 6).
My thesis is that whether network models explain, how they explain, and how much they explain cannot be answered for network models generally but must be answered by fixing an explanandum phenomenon, considering how the model is applied to a target, and deciding what sorts of variables and relations count as explanatory (see Craver 2014). Uses of network models to describe mechanisms are paradigmatically explanatory, even if the model is abstract, and even if the system is complex, because such models represent how a phenomenon is situated in the causal structure of the world (Salmon 1984). Network models might also provide distinctively mathematical explanations (those that derive their explanatory force from mathematics rather than, e.g., causal or nomic relations; see Lange 2013). Using examples from network science, I illustrate two puzzles for any adequate account of such mathematical explanations—the problem of directionality (sec. 3) and the puzzle of correlational networks (sec. 5)—that are readily solved in paradigm cases by recognizing background ontic constraints on acceptable mathematical explanations.
2. Basic Concepts of Network Analysis
Network models are composed of nodes, standing for the network’s relata, and edges, standing for their relations. A node’s degree is the number of edges it shares with other nodes. The path length between two nodes is the minimum number of edges required to link them. In a connected graph, any two nodes have a path between them.
In regular networks, each node has the same degree. In random networks, edges are distributed randomly. Few real networks are usefully described as regular or random. In contrast to these special cases, small-world networks often have an underlying community structure because nodes are connected into clusters. The small-worldedness of a network is the ratio of its clustering coefficient (the extent to which nodes cluster) to its average path length, each normalized by the corresponding value for a comparable random graph. This clustering yields networks with nearly decomposable modules (Simon 1962; see also Haugeland 1998), collections of nodes that have more and stronger connections with each other than they have with nodes outside the group. Nodes with a high participation coefficient have edges with nodes in diverse modules. Such nodes can serve as connecting hubs linking modules together. These and other concepts are mathematically systematized and have been used to represent epidemics, academics, indie bands, and airports, to name just a few.
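To make these measures concrete, here is a minimal sketch in Python (assuming the networkx library is available); the toy graph, its parameters, and the use of a single random reference graph are illustrative assumptions rather than part of the examples discussed in the text.

```python
import networkx as nx

# A toy small-world graph (Watts-Strogatz); parameters are illustrative.
G = nx.connected_watts_strogatz_graph(n=100, k=6, p=0.1, seed=0)

degree_of_node_0 = G.degree[0]                 # degree: number of edges a node shares
C = nx.average_clustering(G)                   # clustering coefficient
L = nx.average_shortest_path_length(G)         # average (shortest) path length

# Small-worldedness: clustering and path length, each normalized by a comparable
# random graph with the same number of nodes and edges.
R = nx.gnm_random_graph(100, G.number_of_edges(), seed=1)
R = R.subgraph(max(nx.connected_components(R), key=len))  # guard against disconnection
sigma = (C / nx.average_clustering(R)) / (L / nx.average_shortest_path_length(R))
print(degree_of_node_0, C, L, sigma)
```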
3. Directions of Mathematical Network Explanation
Lange (2013) argues that some explanations of natural phenomena are distinctively mathematical. Such explanations work by showing that the explanandum is mathematically (vs. causally or nomically) necessary. Dad cannot evenly distribute 13 cookies between two kids because 13 is not evenly divisible by 2. The sandpile’s mass has to be 1,000 g (in a Newtonian world) because it is made of 100,000 grains of 0.01 g each. In each case, the explanandum follows with mathematical necessity given the empirical conditions.
Network models might also provide distinctively mathematical explanations. Take the bridges of Königsberg: seven bridges connect four landmasses; nobody can walk a path crossing each bridge exactly once (an Eulerian path). An Eulerian path requires that either zero or two landmasses have an odd number of bridges. In Königsberg, all four landmasses have an odd number of bridges. So it’s mathematically impossible to take an Eulerian walk around town.
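For concreteness, the Euler-path criterion can be checked directly. The following is a minimal sketch in Python (assuming networkx); the node labels A–D for the four landmasses are illustrative.

```python
import networkx as nx

# The seven bridges of Königsberg as a multigraph over four landmasses (A-D).
bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
           ("A", "D"), ("B", "D"), ("C", "D")]
K = nx.MultiGraph(bridges)

# Euler's condition: a connected graph has an Eulerian path only if zero or two
# nodes have odd degree. In Königsberg all four landmasses have odd degree.
odd_degree = [n for n, d in K.degree() if d % 2 == 1]
print(dict(K.degree()), len(odd_degree) in (0, 2))   # degrees {A: 5, B: 3, C: 3, D: 3}, False
```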
Perhaps some facts about the brain have distinctively mathematical explanations. For example, mean path length is more robust to the random deletion of nodes in small-world networks than it is in regular or random networks. This might explain why brain function is robust against random cell death (see Behrens and Sporns 2011). Signal propagation is faster in small-world networks than in random networks; oscillators coupled in small-world networks readily synchronize (Watts and Strogatz 1998). Perhaps these are mathematical facts, and perhaps they carry explanatory weight.
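One could probe the robustness claim with a toy simulation. The sketch below (Python with networkx; all parameters are illustrative and not drawn from the cited studies) deletes nodes at random and compares the mean path length of the largest surviving component across regular, small-world, and random graphs of the same size.

```python
import random
import networkx as nx

def mean_path_length_after_deletion(G, n_delete, seed=0):
    """Remove n_delete random nodes; return mean path length of the giant component."""
    random.seed(seed)
    H = G.copy()
    H.remove_nodes_from(random.sample(list(H.nodes()), n_delete))
    giant = H.subgraph(max(nx.connected_components(H), key=len))
    return nx.average_shortest_path_length(giant)

graphs = {
    "regular":     nx.watts_strogatz_graph(200, 8, 0.0, seed=0),        # ring lattice
    "small-world": nx.connected_watts_strogatz_graph(200, 8, 0.1, seed=0),
    "random":      nx.gnm_random_graph(200, 800, seed=0),               # same size and density
}
for name, G in graphs.items():
    giant = G.subgraph(max(nx.connected_components(G), key=len))
    before = nx.average_shortest_path_length(giant)
    after = mean_path_length_after_deletion(G, n_delete=20)
    print(name, round(before, 2), round(after, 2))
```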
Huneman (2010) represents mathematical (specifically topological) explanations as arguments. They contain an empirical premise, asserting a topological (or network) property of a system, and a mathematical premise, stating a necessary relation. In our example of Königsberg’s bridges:
Empirical Premise. Königsberg’s bridges form a connected network with four nodes. Three nodes have three edges; one has five.
Mathematical Premise. Among connected networks composed of four nodes, only networks containing zero or two nodes with odd degree contain Eulerian paths.
Conclusion. There is no Eulerian path around the bridges of Königsberg.
And in our example for the brain:
EP. System S is a small-world network.…
MP. Small-world networks are more robust to random attack than are random or regular networks.
C. System S is more robust to random attack than random or regular networks.
On this reconstruction, distinctively mathematical explanations are like covering law explanations (Hempel 1965) except the “law statements” are mathematically necessary (Lange 2013).
The covering law model famously struggled with the asymmetry of causal explanations. The flagpole’s height and the sun’s elevation explain the shadow’s length, and not vice versa. Scientific explanations have a directionality that trigonometry lacks.
Similar cases arise for distinctively mathematical explanations. Because each kid got the same number of cookies, it follows necessarily that Dad started with an even batch, but the kids’ cookie count doesn’t explain the batch’s evenness. From the mass of the Newtonian sandpile and the number of identical grains, the mass of each grain follows necessarily, but the mass of the whole does not explain the masses of its parts. Examples also arise for network explanations:
EP. Königsberg’s bridges form a connected network with four nodes. Marta walks an Eulerian path through town.
MP. Among connected networks composed of four nodes, only networks containing zero or two nodes with odd degree also contain an Eulerian path.
C. Therefore, either zero or two of Königsberg’s landmasses have an odd number of bridges.
Yet Marta’s walk doesn’t explain Königsberg’s layout.
Huneman’s and Lange’s models are thus incomplete by their own lights; both legitimate and illegitimate explanations fit the form (see Lange 2013, 486). The accounts therefore fail to describe the defining norms that sort good mathematical explanations from bad.
Furthermore, ontic commitments appear to readily account for this directionality. The evenness of the batch explains the equal distribution (and not vice versa) because the distribution is drawn from the batch. Properties of parts explain aggregate properties (and not vice versa) because the parts compose the whole. Network properties are explained in terms of nodes and edges (and not vice versa) because the nodes and edges compose and are organized into networks. Paradigm distinctively mathematical explanations thus arguably rely for their explanatory force on ontic commitments that determine the explanatory priority of causes to effects and parts to wholes.
4. Network Models in Mechanistic Explanations
To explore this ontic basis further, consider three uses of network models: to describe structural connectivity, causal connectivity, and functional connectivity. Models of structural and causal connectivity are sometimes used to represent causally relevant features of a mechanism (Levy and Bechtel 2013; Zednik 2014). When they do, they explain just like any other causal or constitutive explanation (Woodward 2003; Craver 2007). Network models of functional connectivity (secs. 5 and 6) are not designed to represent explanatory information as such.
4.1. Structural Connectivity
Network models sometimes represent a mechanism’s spatial organization. For example, researchers have mapped the structural connectivity (the cellular connectome) of the central nervous system of C. elegans, all 279 cells and 2,287 connections (see White et al. 1986; Achacoso and Yamamoto 1991; see also www.wormatlas.org). The nodes here are neurons; the edges are synapses. Algorithms applied to this network reveal a “rich club” structure in which high-degree hubs link to one another more than they do to nodes of lower degree (see, e.g., Towlson et al. 2013). Another example: Sebastian Seung is using automated cell-mapping software and crowdsourcing to map the connectome of a single mouse retina (see www.eyewire.org). The goal of these projects is to map every cell and connection in the brain (Seung 2012). Network analysis supplies basic concepts (e.g., rich club) for discovering and describing organization in such bewilderingly complex structures.
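As an illustration of the kind of algorithm involved, the sketch below (Python with networkx; the hub-rich toy graph stands in for real connectome data) computes the rich-club coefficient, the density of connections among nodes above a given degree.

```python
import networkx as nx

# A toy hub-rich graph (Barabasi-Albert) standing in for connectome data.
G = nx.barabasi_albert_graph(n=279, m=4, seed=0)

# Rich-club coefficient phi(k): how densely nodes of degree > k connect to one another.
rc = nx.rich_club_coefficient(G, normalized=False)
highest_degrees = sorted(rc)[-5:]
print({k: round(rc[k], 3) for k in highest_degrees})  # rising phi(k) at high k suggests a rich club
```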
These structural models contain a wealth of explanatorily relevant information about how brains work. But for any given explanandum (e.g., locomotion), that information is submerged in a sea of explanatorily irrelevant information. Brute-force models of this sort describe all the details and do not sort out which connections are explanatorily relevant to which phenomena.
Suppose, then, we fix an explanandum and filter the model for constitutive explanatory relevance. The resulting model would describe the anatomical connections relevant to some higher-level explanandum phenomenon. But it would only describe structures and would leave out most of how neural systems work. Cells generate temporal patterns of action potentials and transmitter release. Dendrites actively process information. Synapses can be active or quiescent, vary in strength, and change over time. Two brains could have identical structural connectomes and work differently; after all, dead brains share structural connectomes with their living predecessors. One could model those anatomical connections without understanding the physiology of the system; anatomy is just one aspect of its organization.
4.2. Causal Connectivity
Directed graphs are also used to represent causal organization, how the parts (or features) in a mechanism interact (e.g., Spirtes, Glymour, and Scheines 2000). Bechtel and Levy emphasize the importance of causal motifs, abstract patterns in a network’s causal organization: autoregulation, negative feedback, coherent feed-forward loops, and so on. Such motifs have been studied in gene regulatory networks (e.g., Alon 2007) and in the brain (Sporns and Kötter 2004). Motifs can be concatenated to form more complex networks and to construct predictive mathematical models of their behavior.
Causal motifs are mechanism schemas containing placeholders for relevant features of the mechanism and arrows showing how they interact. The causal (or active) organization of a mechanism can be explanatorily relevant to the mechanism’s behavior independently of how the motif is instantiated. One can intervene to alter the details without changing the mechanism’s behavior so long as the abstract causal structure is preserved through the intervention (Woodward 2003; Craver 2007, chap. 6). If so, one can justifiably say that the causal organization, rather than the gory details, is the relevant difference maker for that explanandum.
Like structural networks, abstract network motifs often contain only the thinnest relevant information. For example, Alon’s type 1 coherent feed-forward network is shown in figure 1. The arrows stand for “activation.” Suppose X is a regulator of promoter Y for gene Z (adding considerable content to the motif). This motif clearly contains information explanatorily relevant to Z’s expression. And if we constrain the explanandum phenomenon sufficiently, the motif might describe the most relevant features of the mechanism. For different explananda (Craver 2007, chap. 6) or in different pragmatic contexts (Weisberg 2013), one’s explanation will require different degrees of abstraction.
Figure 1. Type 1 coherent feed-forward network.
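To make the motif’s behavior concrete, here is a minimal simulation sketch in Python (NumPy only); the step input for X, the AND-like logic at Z, and all rate constants are illustrative assumptions, not details drawn from Alon’s models.

```python
import numpy as np

dt, T = 0.01, 20.0
t = np.arange(0.0, T, dt)
X = (t > 2.0).astype(float)            # X switches on at t = 2 (assumed step input)
Y = np.zeros_like(t)
Z = np.zeros_like(t)
k_on, k_off, theta = 1.0, 0.5, 0.5     # assumed production rate, decay rate, threshold

for i in range(1, len(t)):
    # X activates Y; Z is produced only when both X and Y exceed threshold (AND gate).
    dY = k_on * X[i - 1] - k_off * Y[i - 1]
    dZ = k_on * float(X[i - 1] > theta and Y[i - 1] > theta) - k_off * Z[i - 1]
    Y[i] = Y[i - 1] + dt * dY
    Z[i] = Z[i - 1] + dt * dZ

# Z turns on only after Y has accumulated past threshold: the motif delays Z's
# response and filters out brief pulses of X, the behavior Alon emphasizes.
print(round(float(t[np.argmax(Y > theta)]), 2), round(float(t[np.argmax(Z > 0.1)]), 2))
```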
That said, network models of structural and causal connectivity are mechanistic in the sense that they derive their explanatory force from representing how the phenomenon is situated in the causal structure of the world. This general conclusion is supported by the fact that network models can be used to describe things that are not, and are not intended to be, explanations of anything at all. Just as causal-mechanical theories of etiological explanation solved many of the problems confronting Hempel’s covering law model by adding a set of ontic constraints that sort good explanations from bad, a causal-mechanical theory of network explanation clarifies (at least in many cases) why some network models are explanatory and others are not.
5. Nonexplanatory Evidential Networks
Consider the use of network models to analyze resting-state functional connectivity (RSFC) in the brain (Power et al. 2011; Wig, Schlaggar, and Petersen 2011; Smith et al. 2013). This work mines magnetic resonance imaging (MRI) data for information about how brain regions are organized into large-scale systems. It offers unprecedented scientific access to brain organization at the level of systems rather than the level of cells (Churchland and Sejnowski 1989). The term “functional connectivity,” however, misleadingly suggests that FC models represent the brain’s working connections as such. They do not (see also Vincent et al. 2007; Buckner, Krienen, and Yeo 2013; Li et al. 2015).
The ground-level data of this project are measures of the blood-oxygen-level dependent (BOLD) signal in each (3 mm³) voxel of the brain. The term “resting state” indicates that these measures are obtained while the subject is (hopefully) motionless in the scanner and not performing an assigned task. The scanner records the raw time course of the BOLD signal for each voxel. This noisy signal is then filtered to focus on BOLD fluctuations oscillating between 0.01 and 0.1 Hz. Neuroscientists use this range because it generates the highest-powered signal, not because it is functionally significant. It is likely too slow to be relevant to how the brain works during behavioral tasks.
To model these data as a network, you start by defining the nodes. Power et al. (2011) use two methods, and the results of the two mostly agree. Voxelwise analysis treats each voxel of the brain as a node location, yielding at least 10,000 nodes per brain (Power et al. used 44,100 nodes). Areal analysis, a less arbitrary approach, uses meta-analyses of task-based functional MRI (fMRI) studies, combined with FC-mapping data described below, to identify 20 mm spheres in regions throughout the brain. This approach yields 264 node locations per brain. Neither approach starts with working brain parts. Voxelwise analysis dices the brain into uniformly sized cubes. Areal analysis reduces it to 264 spheres. In fact, the nodes are not even locations; rather, the nodes are time courses of low-frequency BOLD fluctuations in those locations.
The edges represent Pearson correlations between these time courses. The strength (or width) of an edge reflects the strength of the correlation. This is all represented in a matrix (10,000 × 10,000 or 264 × 264) showing the correlation between the BOLD time course in each voxel (or area) and that in every other voxel (or area). This matrix can then be analyzed with network tools, as described below.
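The construction just described can be sketched in a few lines of Python (NumPy and SciPy assumed); the node count and band edges follow the text, but the repetition time and the data themselves are placeholders rather than real BOLD signals.

```python
import numpy as np
from scipy.signal import butter, filtfilt

n_nodes, n_timepoints, tr = 264, 600, 2.0      # areal nodes, volumes, repetition time (s); assumed
fs = 1.0 / tr                                  # sampling rate of the BOLD time courses
bold = np.random.randn(n_nodes, n_timepoints)  # placeholder for preprocessed BOLD signals

# Band-pass each node's time course to the 0.01-0.1 Hz range discussed above.
b, a = butter(2, [0.01 / (fs / 2), 0.1 / (fs / 2)], btype="band")
filtered = filtfilt(b, a, bold, axis=1)

# Edges: Pearson correlations between every pair of node time courses (264 x 264 matrix).
fc_matrix = np.corrcoef(filtered)
print(fc_matrix.shape)
```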
It is now clear why the term “functional connectivity” is misleading. The nodes in the network do not (and need not) stand for working parts. They stand for time courses in the BOLD signal, which are merely indicators of brain activity. These time courses are (presumably) too slow to be part of how the brain performs cognitive tasks, they are present when the brain is not involved in any specific task, and they are measured in conveniently measurable units of brain tissue rather than known functional parts. Likewise, the edges do not necessarily represent anatomical connections, causal connections, or communications. There are, for example, strong functional connections between the right and left visual cortices despite the fact that there is no direct anatomical connection between them (Vincent et al. 2007). These correlations in slow-wave oscillations in blood oxygenation, in short, underdetermine the causal and anatomical structures that presumably produce them (Behrens and Sporns 2011).
We have here a complex analog of the barometer and the storm: a correlational model that provides evidence about explanatory structures in the brain but that is not used to (and would not) explain how brains work. FC matrices are network models. They provide evidence about community structure in the brain. Community structure is relevant to brain function. But the matrices do not explain brain function. They don’t model the right kinds of stuff: the nodes aren’t working parts, and the edges are only correlations. As for the barometer and the storm, A is evidence for B, and B explains C, but A does not explain C. In my view, network analysis is interesting to the philosopher not primarily because it offers nonmechanistic explanations but because of the role it might play in discovering and describing complex mechanisms. Consider some examples.
5.1. System Identification
Like structural networks, FC networks can be analyzed with community detection tools to find clusters of nodes that are more tightly correlated with one another than with nodes outside the cluster. The assumption that clustered nodes in a correlational network form a causally functional unit can be given a quasi-empirical justification: things that fire together wire together, things that wire together synchronize their activities, and this synchronicity is reflected in temporal patterns in the BOLD signal (see Wig et al. 2011, 141).
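A minimal sketch of such a community detection step follows (Python with networkx; the placeholder correlation matrix, the thresholding percentile, and the greedy-modularity method are illustrative choices, and random data will not show the meaningful clusters real FC data do).

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

fc = np.corrcoef(np.random.randn(264, 600))   # placeholder FC matrix (cf. the matrix above)
np.fill_diagonal(fc, 0.0)

# Keep only the strongest correlations as weighted edges, then find communities:
# clusters of nodes more tightly correlated with one another than with outsiders.
threshold = np.percentile(np.abs(fc), 97)
G = nx.from_numpy_array(np.abs(fc) * (np.abs(fc) >= threshold))
communities = greedy_modularity_communities(G, weight="weight")
print(len(communities), sorted(map(len, communities), reverse=True)[:5])
```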
Using these methods, researchers have discovered several large-scale systems in the cortex, some of which correspond to traditional functional divisions (Power et al. 2011). For example, classical visual and auditory areas show up as clusters. Yet there are also surprises: for example, the classic separation of somatosensory from motor cortices is replaced by an orthogonal separation of hand representations from mouth representations.
5.2. Brain Parcellation
Changes in functional connectivity can be used to map cortical boundaries (e.g., Cohen et al. 2008; Wig et al. 2014; Gordon et al. 2016). An abrupt change in the correlational profile from one voxel to the next indicates that one has moved from one cortical area to another. This approach identifies some familiar boundaries of traditional anatomy but can also be extended to areas, such as the angular gyrus and the supramarginal gyrus, for which anatomical boundaries and parcellations are not currently settled (see, e.g., Nelson et al. 2010).
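The logic of boundary mapping can be illustrated with synthetic data (Python with NumPy; the two “areas” and all parameters are fabricated for illustration, not taken from the cited studies): compare each location’s correlational profile, a row of the FC matrix, with its neighbor’s, and look for a sharp drop in similarity.

```python
import numpy as np

rng = np.random.default_rng(0)
n_locations, n_timepoints = 100, 500
source_a = rng.standard_normal(n_timepoints)      # shared signal for synthetic area 1
source_b = rng.standard_normal(n_timepoints)      # shared signal for synthetic area 2
bold = np.empty((n_locations, n_timepoints))
bold[:50] = source_a + 0.5 * rng.standard_normal((50, n_timepoints))
bold[50:] = source_b + 0.5 * rng.standard_normal((50, n_timepoints))

fc = np.corrcoef(bold)                            # each row is a location's correlational profile
similarity = np.array([np.corrcoef(fc[i], fc[i + 1])[0, 1]
                       for i in range(n_locations - 1)])
print(int(np.argmin(similarity)))                 # the similarity dip marks the synthetic 49/50 border
```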
5.3. Comparing Brains
Differences in functional connectivity might be associated with neurological disorders and insults, such as Alzheimer’s disease (Greicius et al. 2004), schizophrenia (Bassett et al. 2008), multiple sclerosis (Lowe et al. 2008), and Tourette syndrome (Church et al. 2009). These differences might indicate detectable changes that are etiologically relevant or that are independently useful indicators for diagnosis or prognosis.
5.4. Lesion Analysis
Neuropsychologists of a strict localizationist bent prize “pure” cases of brain damage involving complete, surgically precise damage to single functional regions of cortex. “Impure cases” are notoriously hard to interpret because the symptoms combine and cannot be attributed with any certainty to damage to specific loci. FC analysis, however, might offer an alternative perspective. Power et al. (2013), for example, identify a number of local “target hubs” in the cortex that are closely connected to many functional systems and have high participation coefficients. Damage to these areas produces wide-ranging deficits out of proportion to the size of the lesion (Warren et al. 2014). The “impure cases” of classical neuropsychology might look more pure through the lens of network science.
To conclude, FC-network models are correlational networks. They are used to discover causal systems, not to represent how causal systems work. Whether a network model explains a given phenomenon depends on how that phenomenon is specified and on whether the nodes and edges represent the right kinds of things and relations. Philosophical debates about how models refer (and what they refer to) are central to understanding how some models have explanatory force (Giere 2004; Frigg 2006).
6. Near Decomposability and Random Walks
Rathkopf (2015) argues that network models are distinct from mechanistic explanations because mechanistic explanations apply only to nearly decomposable systems (see sec. 2) whereas network analysis applies to systems that are not nearly decomposable. In drawing a “hard line” between these, Rathkopf’s useful discussion both undersells the resources of network analysis and artificially restricts the domain of mechanistic explanation.
Network analysis is not restricted to nondecomposable systems. The primary goal of FC analysis is to reveal community structure in complex networks. The community detection algorithms applied to FC matrices are animated by Simon’s (1962) concept of near decomposability: a component in a nearly decomposable system is a collection of nodes more strongly connected to one another than to nodes outside that collection (see Haugeland 1998). Consider random walk algorithms, such as InfoMap (Rosvall and Bergstrom 2008). A “bot” starts at an arbitrary node and moves along edges. It tends to get “trapped” in nearly decomposable clusters precisely because there are more and stronger edges that keep it in the cluster than ones that afford escape. Escape routes are interfaces. The places it gets trapped are nearly decomposable communities. Such algorithms do not deliver binary results (decomposable/not decomposable); rather, they represent a full spectrum of organization. Nondecomposable systems (with no community structure) are rare, idealized special cases. Rathkopf’s hard line is a blur.
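The trapping intuition behind such algorithms is easy to exhibit in a toy simulation (Python with networkx; the two-cluster graph and step count are illustrative, and this is the random-walk intuition rather than the InfoMap algorithm itself).

```python
import random
import networkx as nx

# Two dense clusters (nodes 0-9 and 10-19) joined by a single interface edge.
A = nx.complete_graph(10)
B = nx.relabel_nodes(nx.complete_graph(10), lambda n: n + 10)
G = nx.union(A, B)
G.add_edge(0, 10)

random.seed(0)
node, crossings, steps = 0, 0, 10_000
for _ in range(steps):
    nxt = random.choice(list(G.neighbors(node)))   # unbiased random walk
    if (node < 10) != (nxt < 10):                  # walker crossed between clusters
        crossings += 1
    node = nxt

# Crossings are rare relative to steps: the walker is "trapped" in each cluster,
# and the rare escape routes mark interfaces between nearly decomposable communities.
print(crossings, steps)
```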
Rathkopf enforces this line by restricting the concept of mechanism. Each network he considers is a causal network, composed of nodes, such as people, and interactions, such as contagion. (Network models are useful in epidemiology because they model community structure.) Even in so-called non-nearly decomposable causal networks, the base level of nodes and edges is still a set of causally organized parts: a mechanism. In characterizing that base level, we learn how network behavior is situated in the causal structure of the world. The line between mechanisms (organized causal interactions among parts) and nonmechanisms (e.g., correlations) is much harder than that between nearly decomposable and non-nearly decomposable mechanisms.
7. Conclusion
Network analysis is transforming many areas of science. It is attracting large numbers of scientists. It is generating new problems to solve and new techniques to solve them. It is revealing phenomena that are invisible and unthinkable from traditional perspectives. Yet it does not seem to fundamentally alter the norms of explanation. The problem of directionality and the puzzle of correlational networks signal that, at least in many cases, the explanatory power of network models derives from their ability to represent how phenomena are situated, etiologically and constitutively, in the causal and constitutive structures of our complex world.