
Data structures in speech production

Published online by Cambridge University Press:  25 October 2016

Mark Tatham
Affiliation:
University of Essex, Colchester, UK
mark.tatham@btconnect.com
Katherine Morton
Affiliation:
University of Essex, Colchester, UK
katherine.morton@btconnect.com

Abstract


Computationally testable models in linguistics focus on declaring data structures and providing exemplar derivations. This paper outlines a comprehensive model of speech production which goes beyond derivations to show how actual instances of utterances can be formally characterised. Utterances contain a wealth of detail beyond the underlying utterance plan: some of this is a function of the mechanism itself (e.g. coarticulation) and some is the result of carefully supervised control. We develop the notion of managed or supervised speech production to enable the inclusion of EXPRESSIVE content in speech. Building on earlier work, the Cognitive Phonetics Agent bridges the gap between the physical and cognitive processes in phonetics by controlling the way phonologically determined utterance plans are phonetically rendered in detail. The model is illustrated using different types of data structure which occur in speech, concentrating in particular on an XML characterisation of appropriate structures. We trace a simple utterance from its phonological plan to a detailed intrinsic allophonic representation to show how the stages of the model work.
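
As a rough illustration of the kind of layered data structure the abstract refers to, the sketch below shows a single-word utterance plan wrapped with phonetic rendering detail. It is a minimal, hypothetical example only: the element and attribute names (utterance, plan, segment, rendering, and the feature values) are invented for illustration and do not reproduce the authors' actual XML characterisation.

<!-- Hypothetical sketch only: names and values are illustrative assumptions,
     not the markup defined in the paper. -->
<utterance text="pin">
  <!-- Phonologically determined utterance plan (extrinsic allophonic level) -->
  <plan>
    <segment symbol="p" features="voiceless plosive bilabial"/>
    <segment symbol="I" features="high front lax vowel"/>
    <segment symbol="n" features="voiced nasal alveolar"/>
  </plan>
  <!-- Supervised phonetic rendering adds intrinsic allophonic detail -->
  <rendering supervisor="CognitivePhoneticsAgent" expression="neutral">
    <segment symbol="p" aspiration="long"/>
    <!-- coarticulatory nasalisation from the following nasal -->
    <segment symbol="I" nasalisation="partial"/>
    <segment symbol="n" duration="short"/>
  </rendering>
</utterance>

The intent of such a structure is that the plan element holds only what phonology specifies, while the rendering element records the additional detail contributed by the mechanism and by supervised control, so that a single utterance instance can be read off the complete document.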

Type
Research Article
Copyright
Journal of the International Phonetic Association 2003