Branigan & Pickering (B&P) rightly argue that we theoretical linguists should pay attention to the massive evidence for structural priming which they review, but their argument actually suggests an even more radical direction for linguistic theory. In a nutshell, structural priming shows that grammars are networks; and if that's true, then linguists should be developing network-based models of grammar and of sentence structure.
Take Bock's classic experiment with which B&P open their case. One passive sentence primes another – e.g., an experimental subject is more likely to produce a passive sentence describing lightning hitting a church tower after reading The referee was punched by one of the fans than in a neutral control situation. How does this influence work? For B&P, “the neural underpinnings of priming are not well understood,” but the standard explanation for priming (Reisberg 2007, pp. 257–80) sees it as the effect of activation in a neural network spilling over from the intended target to network neighbours, thereby making the latter more accessible. In lexical priming, for example, reading nurse primes this word's network neighbours so that doctor becomes easier to retrieve than it would be otherwise. This explanation, however, makes sense only if knowledge is stored as a network of interconnected nodes; so the relevant units must be connected in a network, and if the units concerned are grammatical categories such as active and passive, these, too, must be part of a network.
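This spilling-over account is easy to state as a toy program. The following minimal sketch assumes a graph of labelled nodes and a fixed spreading fraction; every name and number in it is invented for illustration, not drawn from B&P or from Reisberg.

```python
from collections import defaultdict

class Network:
    """A toy spreading-activation network: nodes, links, activation levels."""
    def __init__(self):
        self.edges = defaultdict(set)         # undirected links between nodes
        self.activation = defaultdict(float)  # current activation per node

    def link(self, a, b):
        self.edges[a].add(b)
        self.edges[b].add(a)

    def activate(self, node, amount=1.0, spread=0.5):
        # Activation lands on the target, and a fixed fraction spills
        # over to the target's immediate network neighbours.
        self.activation[node] += amount
        for neighbour in self.edges[node]:
            self.activation[neighbour] += amount * spread

net = Network()
net.link("nurse", "doctor")      # lexical neighbours
net.activate("nurse")            # reading "nurse" ...
print(net.activation["doctor"])  # ... leaves "doctor" partly active: 0.5
```

The same mechanics carry over unchanged if the nodes are grammatical categories such as "active" and "passive" rather than words, which is the point the priming data press on us.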
This argument is familiar from the literature on connectionist models of processing and learning (Dell et al. 1999; Elman et al. 1996), but linguistic theories are pitched at a higher level of abstraction than the neurons that carry activation, so the two streams of research have hardly met. For B&P, as for most linguists, language consists of abstract units such as words, phrases, categories, and relations; so, if these are part of a network, this must be a symbolic network. On the other hand, the activation responsible for priming in this network is a property of neural networks, so it is reasonable to assume that language is a symbolic network supported by a neural network. In other words, language belongs to the mind, while activation belongs to the brain.
The network view of language is widely accepted in modern theories of the lexicon (Allan 2006), with its multiple types of relation (meaning, realization, spelling, word class, and so on) and its many-to-many mappings. Structural priming shows that networks are just as relevant to syntax: A sentence's structure combines a network of patterns such as voice, tense, transitivity, and so on, each of which is sufficiently active to prime other examples of the same pattern. These patterns are the constraints of any constraint-based theory of syntax, including B&P's preferred linguistic model, Parallel Architecture. In short, a sentence's grammatical structure must be a rich network of interacting and active nodes.
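As a toy rendering of this claim (with an invented pattern inventory, not one taken from Parallel Architecture or from B&P), a produced sentence can be linked to the pattern nodes it instantiates, and the residual activation on those nodes is what primes later sentences:

```python
from collections import defaultdict

activation = defaultdict(float)

# Which grammatical patterns a given utterance instantiates; the sentence
# id and the pattern inventory are both invented for illustration.
patterns_of = {"s1": ["passive", "past-tense", "transitive"]}

def produce(sentence, spread=0.5):
    # Producing a sentence activates every pattern node it instantiates.
    for pattern in patterns_of[sentence]:
        activation[pattern] += spread

produce("s1")
print(activation["passive"])  # 0.5: residual activation on "passive"
                              # is what biases the next description
```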
Where does this leave phrase structure, however, which is taken for granted in virtually every modern theory of syntax (and, disappointingly, by B&P themselves)? Phrase structure is an extremely impoverished theory of the human mind that recognises only one possible mental relation: the part-whole relation between smaller and larger units. According to phrase structure, direct relations between individual words are not possible. For example, in the sentence Linguistic theories should work, the only possible relations are those shown in a tree such as the one above the words in Figure 1. Thus the word linguistic can be related to the phrase linguistic theories, but not directly to theories. Moreover, if phrase structure is right, phrases cannot intersect; so, if linguistic theories is part of the phrase linguistic theories should work, it cannot also be part of linguistic theories work. As we all know, however, both of these assumptions are problematic: Words do relate directly to one another (e.g., for agreement and government), and complex relations such as raising (from work to should) do exist.
Figure 1. Phrase structure compared with network structure.
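To see what this restriction amounts to in computational terms, here is a minimal sketch of phrase structure as a data type: a strict tree whose only relation is part-whole. The class and label names are my own illustration, not anyone's formal proposal.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    parts: list["Node"] = field(default_factory=list)  # one whole per part

# Linguistic theories should work
tree = Node("S", [
    Node("NP", [Node("linguistic"), Node("theories")]),
    Node("VP", [Node("should"), Node("work")]),
])

# "linguistic" is reachable only through its mother NP: the type offers no
# direct word-to-word link to "theories", and because every node sits in
# exactly one parts list, intersecting phrases are unrepresentable.
```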
Suppose, however, that syntactic theory is actually a network, not a tree. In that case, words can relate directly to one another, and multiple links are also possible. One such analysis is shown by the labelled arrows below the words in the figure for Linguistic theories should work. The labelled dependencies from theories to linguistic and from should to theories are typical of the very ancient tradition of dependency analysis (Percival 1990) and of more recent work in theoretical and descriptive linguistics (Tesnière 1959; 2015; Sgall et al. 1986; Mel'čuk 2009) as well as computational linguistics (Kübler et al. 2009) and psycholinguistics (Futrell et al. 2015; Gildea & Temperley 2010; Jiang & Liu 2015; Ninio 2006). All this work builds on the simple idea that our minds are free to recognise relations between words – an idea espoused some time ago by one of B&P (Pickering & Barry 1991).
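As a counterpart to the tree sketch above, the same sentence can be coded as a set of labelled word-to-word arcs. This is only a sketch: the triple format and the relation names are mine, not a notation taken from the works just cited.

```python
# Labelled word-to-word dependencies for "Linguistic theories should work",
# stored as (head, relation, dependent) triples.
arcs = {
    ("theories", "adjunct",    "linguistic"),
    ("should",   "subject",    "theories"),
    ("should",   "complement", "work"),
}

# Direct relations between individual words are now primitive: agreement or
# government can be read straight off an arc, with no phrase mediating.
def dependents_of(head):
    return {(rel, dep) for (h, rel, dep) in arcs if h == head}

print(dependents_of("should"))
# the set {('subject', 'theories'), ('complement', 'work')}
```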
The network notion, however, takes us further than this, to the idea that such relations need not be formally equivalent to a tree. In the example, theories is the subject not only of should, but also of work – a pattern that goes well beyond the formal limits of trees. This example illustrates the enriched dependency structure of one particular modern theory of grammar, Word Grammar (Duran-Eppler 2011; Gisborne 2010; Hudson 2007; 2010). In this theory, syntactic structure is so rich that it can even recognise mutual dependency in cases such as Who came?, in which who depends (as subject) on came and came depends (as complement) on who. Mutual dependency is absolutely impossible in any tree-based theory, but of course, it is commonplace in ordinary cognition (e.g., in social structures).
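A brief continuation of the arc sketch shows why this matters formally. Assuming the same illustrative triple format (again my own, not Word Grammar's notation), both the shared subject and the mutual dependency of Who came? are directly representable:

```python
# Once arcs form a graph rather than a tree, nothing in the data type
# blocks a word from having two heads, or two words from depending on
# each other. Relation labels are again illustrative.
enriched = {
    ("should", "subject",    "theories"),
    ("work",   "subject",    "theories"),  # shared subject: two heads
    ("should", "complement", "work"),
}

who_came = {
    ("came", "subject",    "who"),
    ("who",  "complement", "came"),        # mutual dependency: a 2-cycle
}

# A tree type (one parent per node, no cycles) can encode neither set,
# which is exactly the extra expressive power claimed above.
```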
In conclusion, structural priming shows not only that a grammar is a network, but also that enriched dependency structure is more plausible than phrase structure as a model of mental syntax.