Frost has sounded a timely wake-up call to reading researchers and other cognitive scientists who are wont to draw universal generalizations on the basis of data collected from a specific culture, language, or orthography. He then asserts that the main goal of reading research is to develop theories that describe the fundamental and invariant phenomena of reading across orthographies. Among experimental psychologists, elucidation of the cognitive operations common to all readers, and, more generally, to human cognition, has always headed the agenda; “variant and idiosyncratic” (target article, Abstract, emphasis in original) factors are considered less important. But should invariance be our overriding concern? For biologically primary abilities such as depth perception or auditory localization, which are acquired early, rapidly, and universally, invariance is unquestionably the rule; variability and individual differences are of lesser concern, often dismissed as “noise” in the system. However, because learned skills such as reading and writing are recent cultural innovations that are not part of humans' evolutionary heritage, variability rather than invariance is fundamental. Even in the field of spoken-language processing, which reading researchers widely regard as biologically primary (in contrast to written-language processing), it has been argued that there are few, if any, language universals once we consider the full compass of spoken-language variety (Evans & Levinson 2009; see also the discussion of WEIRD psychology in Henrich et al. 2010; both appeared in previous issues of BBS). If universals exist in reading (and this is a hypothesis, not an axiom), they are likely to be overshadowed by culture-specific, language-specific, and script-specific differences, as well as by massive inter-individual variance. As Evans and Levinson (2009, p. 429) argue, “Linguistic diversity then becomes the crucial datum for cognitive science.”
Does every language get the orthography it deserves? Frost makes the strong claim that orthographies optimally represent speech and meaning, and that the evolution of writing systems is the culmination of a process of optimization. I suggest this note of finality and “optimality” is unwarranted. Every writing system, like spoken language, is a living, breathing organism that must adapt to the ever-changing needs of its users, their culture, and the technology of communication. Written language, like spoken language, ceases to change only when it dies. Frost's “optimalism” may hold for a few languages in societies with a long-standing literacy tradition, but it is highly doubtful when it comes to the many developing societies that are relative newcomers to writing and literacy. For example, approximately one third of the world's languages are spoken in Africa (Bendor-Samuel 1996), yet only some 500 have a written form, the vast majority using a European Roman-based alphabetic orthography disseminated by missionaries who took it for granted that their own writing systems would be optimal for non-European languages. Indeed, many Western scholars still presume that European alphabets are inherently superior to non-alphabetic systems (see, e.g., Gelb 1952; Havelock 1982). But are they? The answer is that we do not yet know, but as the following three illustrations suggest, Europe's “orthographic elitism,” or rather “alphabetism,” may be unfounded.
A study by Asfaha et al. (2009) investigated reading acquisition in four African languages in Eritrea that use either the alphasyllabic (consonant-vowel [CV]) Ge'ez script (Tigrinya and Tigre) or alphabetic Roman-based scripts (Kunama and Saho). All four languages are said to share a simple syllabic structure and a rich morphology, and all are taught under a common national curriculum. All the scripts, furthermore, are highly regular in either their phoneme correspondences or their CV (fidel) correspondences. The teaching of the alphabetic Saho script focuses on CV units, whereas alphabetic Kunama is taught phonemically. In a sample of 385 first-grade children, those who learned to read the alphasyllabic Ge'ez far outperformed those who learned the alphabetic scripts, in spite of the larger number of signs/graphemes. Moreover, the CV-level teaching of alphabetic Saho produced superior results compared to the phonemic teaching of Kunama.
A second case in point comes from the Philippines, where the arrival of the Spanish in the 16th century led to the marginalization of the indigenous Indic (alphasyllabic) scripts in all but the most remote regions. Kuipers and McDermott (1996) cite reports of unusually high literacy rates among the Hanunoo in the mountains of Mindoro.
A third example comes from Southern Sudan, where the Dinka language is written in a European alphabetic orthography that, according to some observers (John Myhill, personal communication, 2011), is almost impossible to read fluently. Myhill suggests this may be due to complex interactions among linguistic features not found in European languages, including voice quality and tone, the latter of which can be both lexical and grammatical.
These few illustrations may not be isolated exceptions. There are documented cases of indigenous scripts invented ex nihilo by illiterate individuals aware only of the existence of writing systems among neighboring peoples or missionaries. Daniels (1996a) cites numerous examples (including the Cree and Vai syllabaries) and notes that almost all share a common design: signs for CV syllables alone (see also Chen 2011, on Chinese).
A final comment relates to Frost's argument that an efficient writing system must represent both sound and meaning. I have termed these two dimensions of orthography decipherability and automatizability. Orthographies can be regarded as dual-purpose devices serving the distinct needs of novices and experts (see Share 2008a). Because all words are initially unfamiliar, the reader needs a means of deciphering new letter strings independently (see Share 1995; 2008b for more detailed discussion). Here, phonology and decipherability are paramount. To attain fluent, automatized reading, on the other hand, the reader needs unique morpheme-specific letter configurations that can be “unitized” and automatized for instant access to word meaning. Here, morpheme-level representation takes precedence. (It may be morpheme distinctiveness [know versus no] rather than morpheme constancy [know/acknowledge] that is crucial for rapid, silent reading.)
This “unfamiliar-to-familiar” or “novice-to-expert” dualism highlights the developmental transition (common to all human skill learning) from slow, deliberate, step-by-step, unskilled performance to rapid, automatized, one-step skilled processing. Without morpheme-level automatizability, the skill of reading might never have transformed modern cultures so profoundly (or at least those few with near-optimal writing systems).