
Structural priming supports grammatical networks

Published online by Cambridge University Press: 10 November 2017

Richard Hudson*
Affiliation:
Linguistics Department, University College London, London WC1E 6BT. r.hudson@ucl.ac.uk; www.dickhudson.com

Abstract

As Branigan & Pickering (B&P) argue, structural priming has important implications for the theory of language structure, but these implications go beyond those suggested. Priming implies a network structure, so the grammar must be a network and so must sentence structure. Instead of phrase structure, the most promising model for syntactic structure is enriched dependency structure, as in Word Grammar.

Type: Open Peer Commentary
Copyright: © Cambridge University Press 2017

Branigan & Pickering (B&P) rightly argue that we theoretical linguists should pay attention to the massive evidence for structural priming which they review, but their argument actually suggests an even more radical direction for linguistic theory. In a nutshell, structural priming shows that grammars are networks; and if that's true, then linguists should be developing network-based models of grammar and of sentence structure.

Take Bock's classic experiment with which B&P open their case. One passive sentence primes another – e.g., an experimental subject is more likely to produce a passive sentence describing lightning hitting a church tower after reading The referee was punched by one of the fans than in a neutral control situation. How does this influence work? For B&P, “the neural underpinnings of priming are not well understood,” but the standard explanation for priming (Reisberg 2007, pp. 257–80) sees it as the effect of activation in a neural network spilling over from the intended target to network neighbours, thereby making the latter more accessible. In lexical priming, for example, reading nurse primes this word's network neighbours so that doctor becomes easier to retrieve than it would be otherwise. This explanation, however, makes sense only if knowledge is stored as a network of interconnected nodes; so the relevant units must be connected in a network, and if the units concerned are grammatical categories such as active and passive, these, too, must be part of a network.
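
To make the spreading-activation mechanism concrete, here is a minimal Python sketch; the nodes, links, and spill rate are illustrative assumptions rather than a claim about the actual lexical network.

    from collections import defaultdict

    # Illustrative fragment of a lexical network: each node lists its
    # neighbours. The links and the spill rate are assumptions.
    network = {
        "nurse":    ["doctor", "hospital", "patient"],
        "doctor":   ["nurse", "hospital"],
        "hospital": ["nurse", "doctor"],
    }

    activation = defaultdict(float)

    def activate(node, amount=1.0, spill=0.4):
        """Activate a node and spill a fraction onto its neighbours."""
        activation[node] += amount
        for neighbour in network.get(node, []):
            activation[neighbour] += amount * spill

    activate("nurse")            # reading "nurse" ...
    print(activation["doctor"])  # 0.4: pre-activated, hence easier to
                                 # retrieve than at its 0.0 baseline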

This argument is familiar from the literature on connectionist models of processing and learning (Dell et al. 1999; Elman et al. 1996), but linguistic theories are pitched at a higher level of abstraction than the neurons that carry activation, so the two streams of research have hardly met. For B&P, as for most linguists, language consists of abstract units such as words, phrases, categories, and relations; so, if these are part of a network, this must be a symbolic network. On the other hand, the activation responsible for priming in this network is a property of neural networks, so it is reasonable to assume that language is a symbolic network supported by a neural network. In other words, language belongs to the mind, while activation belongs to the brain.

The network view of language is widely accepted in modern theories of the lexicon (Allan 2006), with its multiple types of relation (meaning, realization, spelling, word class, and so on) and its many-to-many mappings. Structural priming shows that networks are just as relevant to syntax: A sentence's structure combines patterns such as voice, tense, and transitivity, each of which is sufficiently active to prime other examples of the same pattern. These patterns are the constraints of any constraint-based theory of syntax, including B&P's preferred linguistic model, Parallel Architecture. In short, a sentence's grammatical structure must be a rich network of interacting and active nodes.
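
On this view, structural priming falls out of shared pattern nodes: prime and target instantiate the same pattern, so residual activation on that node biases later production. A hedged sketch, with invented pattern names and activation values:

    # Each sentence is linked to the grammatical pattern nodes it
    # instantiates; the pattern names and values are assumptions.
    sentence_patterns = {
        "The referee was punched by one of the fans": {"PASSIVE", "PAST", "TRANSITIVE"},
        "The church was hit by lightning":            {"PASSIVE", "PAST", "TRANSITIVE"},
        "Lightning hit the church":                   {"ACTIVE", "PAST", "TRANSITIVE"},
    }

    # Residual activation left behind by processing the prime sentence:
    residual = {"PASSIVE": 0.5, "PAST": 0.5, "TRANSITIVE": 0.5}

    def bias(sentence):
        """Sum residual activation on the patterns a sentence shares."""
        return sum(residual.get(p, 0.0) for p in sentence_patterns[sentence])

    # The passive description is now favoured over the active one:
    print(bias("The church was hit by lightning"))  # 1.5
    print(bias("Lightning hit the church"))         # 1.0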

Where does this leave phrase structure, however, which is taken for granted in virtually every modern theory of syntax (and, disappointingly, by B&P themselves)? Phrase structure is an extremely impoverished theory of the human mind that recognises only one possible mental relation: the part-whole relation between smaller and larger units. According to phrase structure, direct relations between individual words are not possible. For example, in the sentence Linguistic theories should work, the only possible relations are those shown in a tree such as the one above the words in Figure 1. Thus the word linguistic can be related to the phrase linguistic theories, but not to theories. Moreover, if phrase structure is right, phrases cannot intersect; so, if linguistic theories is part of the phrase linguistic theories should work, it cannot also be part of linguistic theories work. As we all know, however, both of these assumptions are really problematic: Words do relate directly to one another (e.g., for agreement and government), and complex relations such as raising (from work to should) do exist.
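
A small data-structure sketch shows how restrictive this is: in a phrase-structure tree every unit has exactly one parent, so there is simply nowhere to record a direct link between, say, should and theories. (The bracketing here is the conventional one; the class itself is my own illustration.)

    # A phrase-structure tree: the only relation is part-whole, encoded
    # by each node's single parent link.
    class Node:
        def __init__(self, label, children=()):
            self.label = label
            self.children = list(children)
            for child in self.children:
                child.parent = self      # exactly one parent per node
            self.parent = None

    linguistic, theories = Node("linguistic"), Node("theories")
    should, work = Node("should"), Node("work")
    np = Node("NP", [linguistic, theories])
    vp = Node("VP", [should, work])
    s = Node("S", [np, vp])

    # "linguistic" relates upward to the phrase NP, but no attribute
    # anywhere can link it, or "theories", directly to "should":
    print(linguistic.parent.label)  # NP -- the only relation available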

Figure 1. Phrase structure compared with network structure.

Suppose, however, that syntactic structure is actually a network, not a tree. In that case, words can relate directly to one another, and multiple links are also possible. One such analysis is shown by the labelled arrows below the words in the figure for Linguistic theories should work. The labelled dependencies from theories to linguistic and from should to theories are typical of the very ancient tradition of dependency analysis (Percival 1990) and of more recent work in theoretical and descriptive linguistics (Tesnière 1959; 2015; Sgall et al. 1986; Mel'čuk 2009) as well as computational linguistics (Kübler et al. 2009) and psycholinguistics (Futrell et al. 2015; Gildea & Temperley 2010; Jiang & Liu 2015; Ninio 2006). All this work builds on the simple idea that our minds are free to recognise relations between words – an idea espoused some time ago by one of B&P (Pickering & Barry 1991).
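
Written out, the network below the words in Figure 1 is simply a set of labelled head-dependent arcs. A minimal sketch follows; the label names are my guesses at the figure's arrows.

    # Labelled dependencies for "Linguistic theories should work",
    # stored as (head, label, dependent) triples.
    dependencies = [
        ("theories", "modifier",   "linguistic"),
        ("should",   "subject",    "theories"),
        ("should",   "complement", "work"),
    ]

    # Direct word-word relations, with no phrase nodes in between:
    def relation(head, dependent):
        return [label for h, label, d in dependencies
                if h == head and d == dependent]

    print(relation("should", "theories"))  # ['subject']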

The network notion, however, takes us further than this, to the idea that such relations need not be formally equivalent to a tree. In the example, theories is the subject not only of should, but also of work – a pattern that goes well beyond the formal limits of trees. This example illustrates the enriched dependency structure of one particular modern theory of grammar, Word Grammar (Duran-Eppler 2011; Gisborne 2010; Hudson 2007; 2010). In this theory, syntactic structure is so rich that it can even recognise mutual dependency in cases such as Who came?, in which who depends (as subject) on came and came depends (as complement) on who. Mutual dependency is absolutely impossible in any tree-based theory, but of course, it is commonplace in ordinary cognition (e.g., in social structures).
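
Once the structure is a general graph rather than a tree, these enrichments cost nothing to represent. The sketch below extends the triples above with the raising arc and adds the mutual dependency of Who came?, together with a check that the results are genuinely not trees (labels again illustrative).

    # Enriched dependency structure: "theories" gains a second head
    # (raising), and "Who came?" contains a mutual dependency.
    enriched = [
        ("theories", "modifier",   "linguistic"),
        ("should",   "subject",    "theories"),
        ("should",   "complement", "work"),
        ("work",     "subject",    "theories"),   # second head: raising
    ]

    who_came = [
        ("came", "subject",    "who"),   # who depends on came ...
        ("who",  "complement", "came"),  # ... and came depends on who
    ]

    def is_tree_shaped(deps):
        """A tree allows at most one head per word and no mutual arcs."""
        arcs = [(h, d) for h, _, d in deps]
        dependents = [d for _, d in arcs]
        if len(dependents) != len(set(dependents)):
            return False                 # some word has two heads
        if any((d, h) in arcs for h, d in arcs):
            return False                 # mutual dependency
        return True

    print(is_tree_shaped(enriched))  # False: two subject arcs to "theories"
    print(is_tree_shaped(who_came))  # False: a two-word cycle

The point of the check is that nothing in the graph representation itself forbids these patterns; the single-parent and no-cycle restrictions are extra stipulations that only tree-based theories impose.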

In conclusion, structural priming shows not only that a grammar is a network, but also that enriched dependency structure is more plausible than phrase structure as a model of mental syntax.

References

Allan, K. (2006) Lexicon: Structure. In: Encyclopedia of language and linguistics, second edition, ed. Brown, K., pp. 148–51. Elsevier.
Dell, G., Chang, F. & Griffin, Z. (1999) Connectionist models of language production: Lexical access and grammatical encoding. Cognitive Science 23(4):517–42.
Duran-Eppler, E. (2011) Emigranto: The syntax of German-English code-switching. Braumüller.
Elman, J., Bates, E., Johnson, M., Karmiloff-Smith, A., Parisi, D. & Plunkett, K. (1996) Rethinking innateness: A connectionist perspective on development. MIT Press.
Futrell, R., Mahowald, K. & Gibson, E. (2015) Large-scale evidence of dependency length minimization in 37 languages. Proceedings of the National Academy of Sciences of the United States of America 112(33):10336–41.
Gildea, D. & Temperley, D. (2010) Do grammars minimize dependency length? Cognitive Science 34:286–310.
Gisborne, N. (2010) The event structure of perception verbs. Oxford University Press.
Hudson, R. (2007) Language networks: The new Word Grammar. Oxford University Press.
Hudson, R. (2010) An introduction to Word Grammar. Cambridge University Press.
Jiang, J. & Liu, H. (2015) The effects of sentence length on dependency distance, dependency direction and the implications – based on a parallel English-Chinese dependency treebank. Language Sciences 50:93–104.
Kübler, S., McDonald, R. & Nivre, J. (2009) Dependency parsing. Synthesis Lectures on Human Language Technologies 2:1–127.
Mel'čuk, I. (2009) Dependency in natural language. In: Dependency in linguistic description, ed. Polguère, A. & Mel'čuk, I., pp. 1–110. John Benjamins.
Ninio, A. (2006) Language and the learning curve: A new theory of syntactic development. Oxford University Press.
Percival, K. (1990) Reflections on the history of dependency notions in linguistics. Historiographia Linguistica 17:29–47.
Pickering, M. & Barry, G. (1991) Sentence processing without empty categories. Language and Cognitive Processes 6:229–59.
Reisberg, D. (2007) Cognition: Exploring the science of the mind, third media edition. Norton.
Sgall, P., Hajičová, E. & Panevová, J. (1986) The meaning of the sentence in its semantic and pragmatic aspects. Academia.
Tesnière, L. (1959) Éléments de syntaxe structurale. Klincksieck.
Tesnière, L. (2015) Elements of structural syntax, trans. Osborne, T. & Kahane, S. Benjamins.