Computer Science – Learning
Scientific paper
2008-11-26
Correction to sample complexity bounds; additional details in proofs of main theorems
Hidden Markov Models (HMMs) are one of the most fundamental and widely used statistical tools for modeling discrete time series. In general, learning HMMs from data is computationally hard (under cryptographic assumptions), and practitioners typically resort to search heuristics, which suffer from the usual local-optima issues. We prove that under a natural separation condition (bounds on the smallest singular value of the HMM parameters), there is an efficient and provably correct algorithm for learning HMMs. The sample complexity of the algorithm does not explicitly depend on the number of distinct (discrete) observations; it depends on this quantity only implicitly, through spectral properties of the underlying HMM. This makes the algorithm particularly applicable to settings with a large number of observations, such as natural language processing, where the observation space may be the set of words in a language. The algorithm is also simple: it employs only a singular value decomposition and matrix multiplications.
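As a rough illustration of the abstract's claim that the method uses only an SVD and matrix multiplications, here is a minimal sketch of the spectral idea on a toy HMM. For simplicity it plugs in the *exact* low-order observation moments (in the actual algorithm these would be empirical estimates from observed triples); all variable names are illustrative, not taken from the paper's notation or any released code.

```python
import numpy as np

m, n = 2, 3                      # hidden states, observation symbols (toy sizes)
pi = np.array([0.6, 0.4])        # initial state distribution
T = np.array([[0.7, 0.2],        # T[i, j] = Pr(next state i | state j); columns sum to 1
              [0.3, 0.8]])
O = np.array([[0.5, 0.1],        # O[x, j] = Pr(observe x | state j); columns sum to 1
              [0.3, 0.3],
              [0.2, 0.6]])

# Exact joint probabilities of the first one, two, and three observations.
# In practice these matrices are estimated from data.
P1 = O @ pi                                      # P1[x]    = Pr(x1 = x)
P21 = O @ T @ np.diag(pi) @ O.T                  # P21[i,j] = Pr(x2 = i, x1 = j)
P3x1 = [O @ T @ np.diag(O[x]) @ T @ np.diag(pi) @ O.T
        for x in range(n)]                       # P3x1[x][i,j] = Pr(x3=i, x2=x, x1=j)

# Spectral step: top-m left singular vectors of P21, then "observable operators"
# built from pseudoinverses and matrix products only -- no local search.
U = np.linalg.svd(P21)[0][:, :m]                 # n x m
b1 = U.T @ P1                                    # initial vector
binf = np.linalg.pinv(P21.T @ U) @ P1            # normalization vector
B = [U.T @ P3x1[x] @ np.linalg.pinv(U.T @ P21)   # one m x m operator per symbol
     for x in range(n)]

def spectral_prob(seq):
    """Pr(x1..xt) = binf^T B_{xt} ... B_{x1} b1."""
    b = b1
    for x in seq:
        b = B[x] @ b
    return float(binf @ b)

def forward_prob(seq):
    """Ground-truth probability via the standard forward recursion."""
    alpha = pi.copy()
    for x in seq:
        alpha = T @ (O[x] * alpha)
    return float(alpha.sum())
```

With exact moments the spectral prediction matches the forward-algorithm probability to machine precision; with empirical moments, the separation condition (smallest singular value of P21 bounded away from zero) controls how fast the estimates converge.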
Daniel Hsu
Sham M. Kakade
Tong Zhang
A Spectral Algorithm for Learning Hidden Markov Models