Frost provides compelling arguments that efforts to understand orthographic representation and processing should not treat orthography as an isolated domain but must consider what orthography is for. That is, orthography is fundamentally a visual code for conveying meaning, and as such takes advantage of, and is thus shaped by, the phonological and morphological structure of a given language. Because the world's languages vary so much in these properties, their orthographies do as well, and in this sense “languages get the writing systems they deserve” (Seidenberg 2011; cf. Frost's Note 2 in the target article). Thus, understanding orthography must be done in the context of a broader theory of the full reading system, encompassing orthography, phonology, morphology, semantics, and syntax.
While I fully concur with Frost on the nature of the problem, I disagree with him on the nature of the solution. Frost advocates the development of a universal theory of reading which “should focus on what is invariant in orthographic processing across writing systems” (sect. 9, para. 1, emphasis in the original). Harking back to Chomsky's (1965) notion of a Universal Grammar (despite its drawbacks; Sampson 2005), Frost anticipates that “the set of reading universals ought to be quite small, general, and abstract, to fit all writing systems and their significant inter-differences” (sect. 2.1, para. 1).
Therein lies the problem. …
Suppose a large group of researchers spent decades running studies and developing theories about how people play soccer. After all, most of the world plays soccer, it is a very socially important skill, and children are taught to play from a very young age. In fact, given that soccer is a relatively recent cultural invention and we are unlikely to have evolved dedicated brain mechanisms for it, someone might propose a “neural recycling hypothesis” (cf. Dehaene & Cohen 2007) about how brain regions that would otherwise be doing other things become recruited for soccer.
Now, of course, in some parts of the world, instead of soccer, people play baseball, or table tennis, or do cross-country skiing. Given that any child might have been born into a completely different sporting environment, we can't have a theory of soccer playing that doesn't also explain how people play all of these other sports. Apparently what we need is a universal theory of sports that identifies and explains just those “fundamental and invariant phenomena” (target article, Abstract) that are true of all sports.
As it turns out, it is quite straightforward to formulate general principles that are important across sports, such as being fast, strong, coordinated, and fit (although each admits exceptions – e.g., strength is irrelevant in table tennis). Such generalities are no doubt relevant to understanding sporting activities. The problem is, of course, that by themselves they tell us almost nothing about how a given sport like soccer is actually played (and why someone like Lionel Messi is so much better at it than the rest of us). The real explanation comes from working out how the general principles manifest in, and interact with, the specifics of a particular sport to give rise to both its general and idiosyncratic aspects (e.g., the importance of eye–foot coordination). Put bluntly, the things that are universal are so general that they are the starting point for theory building, not the theory itself.
The same holds for reading. The relevant “universals” probably aren't about reading at all – they're more likely to be very general principles of neural representation, processing, and learning that interact with specific reading/linguistic environments to give rise to the observed range of behaviors (across both individuals and scripts). If so, there's no such thing as a universal theory of reading – instead, maybe there's something like a universal theory of neural computation that can, among other things, learn to read.
Frost is right to emphasize learning as fundamental to theories of reading (although why he would consider this a “new approach to modeling visual word recognition” [sect. 6, para. 7] is anyone's guess). However, one gets the sense that he hasn't quite embraced the depth of the implications of this commitment.
For example, although the general idea behind Frost's “universality constraint” is important, I disagree with his specific formulation. It's not that “models of reading should … aim to reflect the common cognitive operations involved in treating printed language across different writing systems” (sect. 2.1, para. 1, emphasis his). Rather, models should be grounded in computational principles that can apply to any writing system. The actual “cognitive operations” within the model are the result of complex interactions between these principles and learning in a particular linguistic environment, and would be expected to become tailored to that environment (and hence highly idiosyncratic).
Similarly, his treatment of the issue of sensitivity to letter position has the feel of a “principles and parameters” perspective (Chomsky 1981) in which all that is required of learning is the binary determination of “whether or not to be flexible about letter-position coding” (sect. 4.2, para. 10). Frost suggests that this determination might be made on the basis of “whether the distribution of letter frequency is skewed or not” (sect. 5, para. 4) but admits that this doesn't explain why non-derived Hebrew words show transposed-letter priming but most (derived) Hebrew words do not (Velan & Frost 2011). Without a fully developed learning theory, all that is left to fall back on is the implausible suggestion that subjects adapt their coding strategy on a word-by-word basis: “it is not the coding of letter position that is flexible, but the reader's strategy in processing them” (sect. 4.2, para. 9, italics in the original). Perhaps, instead, the system has simply learned to adapt its orthographic coding of each item in a way that takes into account its relationship with other items, in much the same way that networks can learn to treat exception words differently than orthographically similar regular words and nonwords (Plaut et al. 1996).
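To make that last point concrete, consider the following minimal sketch. It is not Plaut et al.'s (1996) actual simulation: the toy vocabulary, the slot-based letter coding, and the network dimensions are all invented for illustration. A single feedforward network is trained by ordinary backpropagation on an orthography-to-phonology mapping that is regular for every item except one, and the same set of learned weights ends up handling the exception item differently from its orthographically similar regular neighbors, with no word-by-word strategy involved.

```python
# Minimal sketch (illustrative only; NOT Plaut et al.'s 1996 simulation).
# One feedforward network learns a toy orthography -> phonology mapping that
# is regular for every word except "cab", showing how a single set of learned
# weights can treat an exception item differently from similar regular items.
import numpy as np

rng = np.random.default_rng(0)

# Invented toy vocabulary: 3-letter "words" over a 4-letter alphabet,
# coded as one one-hot vector per letter position (slot-based coding).
LETTERS = "abcd"

def encode(word):
    vec = np.zeros(len(word) * len(LETTERS))
    for i, ch in enumerate(word):
        vec[i * len(LETTERS) + LETTERS.index(ch)] = 1.0
    return vec

def decode(vec):
    return "".join(LETTERS[int(np.argmax(vec[i * 4:(i + 1) * 4]))]
                   for i in range(3))

# Regular mapping: each letter is pronounced as itself.
# Exception: "cab" is pronounced "cad", unlike its regular neighbors.
words = ["bad", "cad", "dab", "cab", "bac"]
pronunciations = {w: w for w in words}
pronunciations["cab"] = "cad"  # the lone exception item

X = np.array([encode(w) for w in words])
Y = np.array([encode(pronunciations[w]) for w in words])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer, trained with plain batch backpropagation on squared error.
n_in, n_hid, n_out = X.shape[1], 20, Y.shape[1]
W1 = rng.normal(0.0, 0.5, (n_in, n_hid))
W2 = rng.normal(0.0, 0.5, (n_hid, n_out))

for _ in range(5000):
    H = sigmoid(X @ W1)               # hidden activations
    O = sigmoid(H @ W2)               # output (phonology) activations
    dO = (O - Y) * O * (1.0 - O)      # output-layer error signal
    dH = (dO @ W2.T) * H * (1.0 - H)  # backpropagated hidden error signal
    W2 -= 0.5 * H.T @ dO
    W1 -= 0.5 * X.T @ dH

for w in words:
    out = decode(sigmoid(sigmoid(encode(w) @ W1) @ W2))
    print(f"{w} -> {out}")  # "cab" -> "cad"; "dab" and "cad" stay regular
```

Running the sketch prints the network's pronunciation of each trained item; the point is simply that both regular and item-specific behavior emerge from one learning mechanism exposed to one environment, rather than from a reader switching strategies item by item.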
In summary, while applauding the focus on learning-based theories of reading that generalize across languages, I would discourage the search for a universal theory that captures all and only those aspects of reading shared by all languages. Instead, it would be more fruitful to formulate general computational principles that combine with language-specific learning environments to yield the full complexity and diversity of reading-related phenomena observed across the world's languages.