
ADJUSTED VITERBI TRAINING

Published online by Cambridge University Press:  08 August 2007

Jüri Lember
Affiliation:
Tartu University Tartu 50409, Estonia E-mail: jyril@ut.ee
Alexey Koloydenko
Affiliation:
School of Mathematical Sciences University of Nottingham Nottingham, NG7 2RD, UK E-mail: alexey.koloydenko@maths.nottingham.ac.uk

Abstract


Viterbi training (VT) provides a fast but inconsistent estimator of hidden Markov models (HMMs). The inconsistency can be alleviated with a little extra computation by enabling VT to asymptotically recover the true values of the parameters. This relies on infinite Viterbi alignments and the limiting probability distributions associated with them. As the first article in a sequel, this one is a proof of concept; it focuses on mixture models, an important but special case of HMMs in which the limiting distributions can be calculated exactly. A simulated Gaussian mixture shows that our central algorithm (VA1) can significantly improve the accuracy of VT at little extra cost. Next in the sequel, we present elsewhere a theory of adjusted VT for general HMMs, where the limiting distributions are more challenging to find. Here, we also present another, more advanced correction to VT and verify its fast convergence and high accuracy; its computational feasibility requires additional investigation.
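For readers unfamiliar with the baseline being adjusted, the following is a minimal sketch of plain Viterbi training (hard-assignment EM) for a one-dimensional Gaussian mixture. The function name, interface, and initialization are illustrative assumptions, and this is only the uncorrected VT estimator whose inconsistency the article addresses, not the adjusted VA1 algorithm.

```python
import numpy as np

def viterbi_training(x, k, n_iter=50, seed=0):
    """Hard-assignment (Viterbi) training of a 1-D Gaussian mixture.

    Illustrative sketch only: this is plain VT, which is fast but
    yields biased parameter estimates; it is NOT the paper's
    adjusted VA1 algorithm.
    """
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, size=k, replace=False)   # initial means
    sigma = np.full(k, x.std())                 # initial std devs
    w = np.full(k, 1.0 / k)                     # initial weights
    for _ in range(n_iter):
        # Viterbi step: assign each point to its most likely component
        # (hard assignment, unlike the soft posteriors of EM).
        logp = (np.log(w) - np.log(sigma)
                - 0.5 * ((x[:, None] - mu) / sigma) ** 2)
        z = logp.argmax(axis=1)
        # M step: re-estimate parameters from the hard assignments.
        for j in range(k):
            pts = x[z == j]
            if pts.size:
                mu[j] = pts.mean()
                sigma[j] = max(pts.std(), 1e-6)
                w[j] = pts.size / x.size
    return mu, sigma, w
```

The hard assignment is what makes VT inconsistent: each component's parameters are re-estimated from a truncated sample, so even with infinitely many observations the estimates do not converge to the true values, which is the bias the adjusted algorithm corrects using the limiting distributions of the alignments.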

Type
Research Article
Copyright
2007 Cambridge University Press