
Linguistic representations and memory architectures: The devil is in the details

Published online by Cambridge University Press:  02 June 2016

Dustin Alfonso Chacón
Affiliation:
Department of Linguistics, University of Maryland, College Park, MD 20742. dustin.alfonso@gmail.com
Shota Momma
Affiliation:
Department of Linguistics, University of Maryland, College Park, MD 20742. shotam2@gmail.com
Colin Phillips
Affiliation:
Department of Linguistics, University of Maryland, College Park, MD 20742. colin@umd.edu ling.umd.edu/colin

Abstract

Attempts to explain linguistic phenomena as consequences of memory constraints require detailed specification of linguistic representations and memory architectures alike. We discuss examples of supposed locality biases in language comprehension and production, and their link to memory constraints. Findings do not generally favor Christiansen & Chater's (C&C's) approach. We discuss connections to debates that stretch back to the nineteenth century.

Type: Open Peer Commentary
Copyright: © Cambridge University Press 2016

It is important to understand how language is shaped by cognitive constraints, and limits on memory are natural culprits. In this regard, Christiansen & Chater (C&C) join a tradition in language research that has a long pedigree (Frazier & Fodor 1978; Wundt 1904) and to which we are sympathetic. C&C's model aims to integrate an impressive range of phenomena, but the authors play fast and loose with the details; they mischaracterize a number of phenomena; and key predictions depend on auxiliary assumptions that are independent of their model. An approach that takes the details of linguistic representations and memory architectures more seriously will ultimately be more fruitful. We illustrate using examples from comprehension and production.

C&C propose that comprehenders can maintain only a few low-level percepts at once and must therefore quickly encode higher-order, abstract representations. They argue that this explains the pervasive bias for shorter dependencies. However, memory representations are more than simple strings of words that quickly vanish. Sentences are encoded as richly articulated, connected representations that persist in memory, perhaps without explicit encoding of order, and memory access is similarly articulated (Lewis et al. 2006). As evidence for their model, C&C cite agreement attraction in sentences like The key to the cabinets are on the table. These errors are common in production and often go unnoticed in comprehension, and it is tempting to describe them in terms of “proximity concord” (Quirk et al. 1972). But this is inaccurate. Agreement attraction is widespread in cases where the distractor is further from the verb than the true subject, as in The musicians who the reviewer praise so highly will win (Bock & Miller 1991). Attraction is asymmetrical, yielding “illusions of grammaticality” but not “illusions of ungrammaticality” (Wagers et al. 2009), and depends on whether the distractor is syntactically “active” (Franck et al. 2010). These facts are surprising if attraction reflects simple recency, but they can be captured in a model that combines articulated linguistic representations with a content-addressable memory architecture (Dillon et al. 2013; McElree et al. 2003). Hence, agreement attraction fits C&C's broadest objective, deriving attraction from memory constraints, but only if suitably detailed commitments are made.
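To make the retrieval mechanism concrete, the following minimal sketch (in Python) illustrates cue-based, content-addressable retrieval at an agreement dependency. It is a toy illustration, not the implementation of Dillon et al. (2013) or Lewis et al. (2006); the feature names and scoring scheme are hypothetical simplifications chosen for exposition.

```python
# Toy sketch of cue-based (content-addressable) retrieval at an agreement
# dependency. Feature names and scoring are hypothetical simplifications,
# not the cited models' actual implementations.

def match_scores(cues, memory):
    """Score each encoded chunk by how many retrieval cues it matches."""
    return {
        chunk["word"]: sum(1 for feat, val in cues.items() if chunk.get(feat) == val)
        for chunk in memory
    }

# Encoded noun phrases from "The musicians who the reviewer praise(s) ..."
memory = [
    {"word": "reviewer",  "subject": True,  "number": "sg"},   # true subject
    {"word": "musicians", "subject": False, "number": "pl"},   # distractor
]

# Ungrammatical verb "praise": the cues ask for a plural subject. Each chunk
# matches only one cue, so the distractor is a genuine retrieval competitor
# and is sometimes misretrieved -- an "illusion of grammaticality".
print(match_scores({"subject": True, "number": "pl"}, memory))
# {'reviewer': 1, 'musicians': 1}

# Grammatical verb "praises": the true subject matches both cues and the
# distractor matches none, so there is no competition and hence no
# "illusion of ungrammaticality".
print(match_scores({"subject": True, "number": "sg"}, memory))
# {'reviewer': 2, 'musicians': 0}
```

Under these assumptions, partial feature matching produces interference only when the verb's cues cannot be fully satisfied by the true subject, which is one way to capture the asymmetry noted above.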

C&C also endorse the appealing view that locality constraints in syntax (“island effects”: Ross 1967) can be reduced to memory-driven locality biases in the processing of filler-gap dependencies (Kluender & Kutas 1993). Details matter here, too, and they suggest a different conclusion. When linear and structural locality diverge, as in head-final languages such as Japanese, it becomes clear that the bias for shorter filler-gap dependencies in processing is linear, whereas grammatical locality constraints are structural (Aoshima et al. 2004; Chacón et al., submitted; Omaki et al. 2014).

The moral that we draw from these examples is that each reductionist claim about language must be evaluated on its own merits (Phillips 2013).

Turning to production, C&C argue that incrementality and locality biases reflect severe memory constraints, suggesting that we speak “into the void.” This amounts to what is sometimes called radical incrementality (Ferreira & Swets 2002). It implies that sentence production involves word-by-word planning that is tightly synchronized with articulation: planning is just-in-time, leading to a bias for local dependencies between words. However, this view of production does not reflect memory constraints alone, and it is empirically unwarranted.

Radical incrementality carries a strong representational assumption whose problems were pointed out in the late nineteenth century. The philologist Hermann Paul, an opponent of Wilhelm Wundt, argued that a sentence is essentially an associative sum of clearly segmentable concepts, each of which can trigger articulation in isolation. Radical incrementality requires this assumption, as it presupposes the isolability of each word or phrase in a sentence at all levels of representation. Memory constraints alone do not require this assumption, and so there is a gap in C&C's argument that memory constraints entail radical incrementality. Indeed, Wundt was already aware of memory limitations, and yet he adopted the contrasting view that sentence planning involves a successive scanning (apperception) of a sentence that is simultaneously present in the background of consciousness during speech (Wundt 1904). The historical debate illustrates that radical incrementality turns on representational assumptions rather than directly following from memory limitations.

Empirically, radical incrementality has had limited success in accounting for production data. Three bodies of data that C&C cite turn out not to support their view. First, the scope of planning at higher levels (e.g., conceptual) can span a clause (Meyer 1996; Smith & Wheeldon 1999). Also, recent evidence suggests that linguistic dependencies can modulate the scope of planning (Lee et al. 2013; Momma et al. 2015, in press). Second, since Wundt's time, availability effects on word order have not led researchers to assume radical incrementality (see Levelt 2012 for an accessible introduction to Wundt's views). Bock (1987) emphasized that availability effects on order result from the tendency for accessible words to be assigned a higher grammatical function (e.g., subject). In languages where word order and the grammatical function hierarchy dissociate, availability effects support the grammatical function explanation rather than radical incrementality (Christianson & Ferreira 2005). Third, contrary to C&C's claim, early observations about speech errors indicated that exchange errors readily cross phrasal and clausal boundaries (Garrett 1980).

C&C could argue that their view is compatible with many of these data; memory capacity at higher levels of representation is left as a free parameter. But this is precisely the limitation of their model: Specific predictions depend on specific commitments. Radical incrementality is certainly possible in some circumstances, but it is not required, and this is unexpected under C&C's view that speaking reduces to a chain of word productions that are constrained by severe memory limitations.

To conclude, we affirm the need to closely link language processes and cognitive constraints, and we suspect the rest of the field does too. However, the specifics of the memory system and linguistic representations are essential for an empirically informative theory, and they are often validated by the counterintuitive facts that they explain.

References

Aoshima, S., Phillips, C. & Weinberg, A. (2004) Processing filler-gap dependencies in a head-final language. Journal of Memory and Language 51:23–54.
Bock, K. (1987) Exploring levels of processing in sentence production. In: Natural language generation, ed. Kempen, G., pp. 351–63. Springer.
Bock, K. & Miller, C. A. (1991) Broken agreement. Cognitive Psychology 23:45–93.
Chacón, D., Imtiaz, M., Dasgupta, S., Murshed, S., Dan, M. & Phillips, C. (submitted) Locality in the processing of filler-gap dependencies in Bangla.
Christianson, K. & Ferreira, F. (2005) Conceptual accessibility and sentence production in a free word order language (Odawa). Cognition 98:105–35.
Dillon, B., Mishler, A., Sloggett, S. & Phillips, C. (2013) Contrasting intrusion profiles for agreement and anaphora: Experimental and modeling evidence. Journal of Memory and Language 69:85–103.
Ferreira, F. & Swets, B. (2002) How incremental is language production? Evidence from the production of utterances requiring the computation of arithmetic sums. Journal of Memory and Language 46(1):57–84.
Franck, J., Soare, G., Frauenfelder, U. H. & Rizzi, L. (2010) Object interference: The role of intermediate traces of movement. Journal of Memory and Language 62:166–82.
Frazier, L. & Fodor, J. D. (1978) The sausage machine: A new two-stage parsing model. Cognition 6:291–325.
Garrett, M. F. (1980) Levels of processing in sentence production. In: Language production: Vol. 1. Speech and talk, ed. Butterworth, B., pp. 177–221. Academic Press.
Kluender, R. & Kutas, M. (1993) Subjacency as a processing phenomenon. Language and Cognitive Processes 8:573–633.
Lee, E. K., Brown-Schmidt, S. & Watson, D. G. (2013) Ways of looking ahead: Hierarchical planning in language production. Cognition 129:544–62.
Levelt, W. (2012) A history of psycholinguistics: The pre-Chomskyan era. Oxford University Press.
Lewis, R. L., Vasishth, S. & Van Dyke, J. A. (2006) Computational principles of working memory in sentence comprehension. Trends in Cognitive Sciences 10:447–54.
McElree, B., Foraker, S. & Dyer, L. (2003) Memory structures that subserve sentence comprehension. Journal of Memory and Language 48:67–91.
Meyer, A. S. (1996) Lexical access in phrase and sentence production: Results from picture-word interference experiments. Journal of Memory and Language 35:477–96.
Momma, S., Slevc, L. R. & Phillips, C. (2015) The timing of verb planning in active and passive sentence production. Poster presented at the 28th Annual CUNY Conference on Human Sentence Processing, Los Angeles, CA, March 19–21, 2015.
Momma, S., Slevc, L. R. & Phillips, C. (in press) The timing of verb planning in Japanese sentence production. Journal of Experimental Psychology: Learning, Memory, and Cognition.
Omaki, A., Davidson-White, I., Goro, T., Lidz, J. & Phillips, C. (2014) No fear of commitment: Children's incremental interpretation in English and Japanese. Language Learning and Development 10:206–33.
Phillips, C. (2013) Some arguments and nonarguments for reductionist accounts of syntactic phenomena. Language and Cognitive Processes 28:156–87.
Quirk, R., Greenbaum, S., Leech, G. N. & Svartvik, J. (1972) A grammar of contemporary English. Longman.
Smith, M. & Wheeldon, L. (1999) High level processing scope in spoken sentence production. Cognition 73:205–46.
Wagers, M., Lau, E. & Phillips, C. (2009) Agreement attraction in comprehension: Representations and processes. Journal of Memory and Language 61:206–37.
Wundt, W. (1904) The psychology of the sentence. In: Language and psychology: Historical aspects of psycholinguistics, ed. Blumenthal, A. L., pp. 9–32. Wiley.