Baldi, Pierre and Chauvin, Yves (1994) Smooth on-line learning algorithms for hidden Markov models. Neural Computation, 6 (2). pp. 307-318. ISSN 0899-7667 http://resolver.caltech.edu/CaltechAUTHORS:BALnc94
A simple learning algorithm for Hidden Markov Models (HMMs) is presented together with a number of variations. Unlike classical algorithms such as the Baum-Welch algorithm, the algorithms described are smooth and can be used on-line (after each example presentation) or in batch mode, with or without the usual Viterbi most-likely-path approximation. The algorithms have simple expressions that result from using a normalized-exponential representation for the HMM parameters. All the algorithms presented are proved to be exact or approximate gradient optimization algorithms with respect to likelihood, log-likelihood, or cross-entropy functions, and as such are usually convergent. These algorithms can also be cast in the more general EM (Expectation-Maximization) framework, where they can be viewed as exact or approximate GEM (Generalized Expectation-Maximization) algorithms. The mathematical properties of the algorithms are derived in the appendix.
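To illustrate the idea of the normalized-exponential representation described in the abstract, the following is a minimal sketch (not the paper's exact algorithm): transition probabilities are obtained from unconstrained weights via a softmax, so a plain gradient step on the log-likelihood automatically keeps the parameters stochastic. The sketch assumes the on-line update is driven by transition counts along a given state path (a Viterbi-style approximation); the function names, the step size `eta`, and the column-stochastic convention `T[i, j] = P(next = i | current = j)` are illustrative choices, not taken from the paper.

```python
import numpy as np

def softmax(w, axis=0):
    # Normalized-exponential representation: each column of W maps to
    # a stochastic column of the transition matrix T.
    e = np.exp(w - w.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def online_update(W, path, eta=0.1):
    """One smooth on-line gradient step on the transition weights W.

    `path` is a state sequence (Viterbi-style most-likely-path
    approximation).  For the log-likelihood of the transitions along
    the path, the softmax chain rule gives the gradient
        d loglik / d W[i, j] = counts[i, j] - T[i, j] * visits[j],
    i.e. observed minus expected transition counts out of each state.
    """
    T = softmax(W, axis=0)
    counts = np.zeros_like(W)        # counts[i, j]: transitions j -> i
    visits = np.zeros(W.shape[1])    # visits[j]: times state j was left
    for j, i in zip(path[:-1], path[1:]):
        counts[i, j] += 1.0
        visits[j] += 1.0
    grad = counts - T * visits       # exact softmax gradient
    return W + eta * grad            # smooth on-line ascent step
```

Because the constraint (columns summing to one) lives in the parameterization rather than the update, each step is an unconstrained gradient move, which is what makes the algorithm "smooth" compared with the multiplicative Baum-Welch reestimation.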
Additional Information: © 1994 Massachusetts Institute of Technology. Received January 8, 1993; accepted July 9, 1993. Posted online April 10, 2008. We would like to thank David Haussler, Anders Krogh, Yosi Rinott, and Esther Levin for useful discussions. The work of P. B. is supported by grants from the AFOSR and the ONR.
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.