| A '''hidden Markov model''' ('''HMM''') is a [[Statistical model|statistical]] [[Markov model]] in which the system being modeled is assumed to be a [[Markov process]] with unobserved (''hidden'') states. An HMM can be considered the simplest [[dynamic Bayesian network]]. The mathematics behind the HMM was developed by [[Leonard E. Baum|L. E. Baum]] and coworkers.<ref>{{cite journal|last=Baum|first=L. E.|coauthors=Petrie, T.|title=Statistical Inference for Probabilistic Functions of Finite State Markov Chains|journal=The Annals of Mathematical Statistics|year=1966|volume=37|issue=6|pages=1554–1563|url=http://projecteuclid.org/DPubS/Repository/1.0/Disseminate?handle=euclid.aoms/1177699147&view=body&content-type=pdf_1|accessdate=28 November 2011|doi=10.1214/aoms/1177699147}}</ref><ref>{{cite doi|10.1090/S0002-9904-1967-11751-8}}</ref><ref>{{cite journal|last=Baum|first=L. E.|coauthors=Sell, G. R.|title=Growth transformations for functions on manifolds|journal=Pacific Journal of Mathematics|year=1968|volume=27|issue=2|pages=211–227|url=http://www.scribd.com/doc/6369908/Growth-Functions-for-Transformations-on-Manifolds|accessdate=28 November 2011}}</ref><ref>{{cite doi|10.1214/aoms/1177697196}}</ref><ref>{{cite journal|last=Baum|first=L.E.|title=An Inequality and Associated Maximization Technique in Statistical Estimation of Probabilistic Functions of a Markov Process|journal=Inequalities|year=1972|volume=3|pages=1–8}}</ref> It is closely related to earlier work on the optimal nonlinear [[filtering problem (stochastic processes)|filtering problem]] by [[Ruslan L. Stratonovich]],<ref name=Stratonovich1960>{{cite journal|author=Stratonovich, R.L.|year=1960|title=Conditional Markov Processes|journal=Theory of Probability and its Applications|volume=5|pages=156–178}}</ref> who was the first to describe the [[Forward–backward algorithm|forward-backward procedure]].
| | |
| In simpler [[Markov model]]s (like a [[Markov chain]]), the state is directly visible to the observer, and therefore the state transition probabilities are the only parameters. In a ''hidden'' Markov model, the state is not directly visible, but output, dependent on the state, is visible. Each state has a probability distribution over the possible output tokens. Therefore the sequence of tokens generated by an HMM gives some information about the sequence of states. Note that the adjective 'hidden' refers to the state sequence through which the model passes, not to the parameters of the model; the model is still referred to as a 'hidden' Markov model even if these parameters are known exactly.
| |
| | |
| Hidden Markov models are especially known for their application in [[time|temporal]] pattern recognition such as [[speech recognition|speech]], [[handwriting recognition|handwriting]], [[gesture recognition]],<ref>Thad Starner, Alex Pentland. [http://www.cc.gatech.edu/~thad/p/031_10_SL/real-time-asl-recognition-from%20video-using-hmm-ISCV95.pdf Real-Time American Sign Language Visual Recognition From Video Using Hidden Markov Models]. Master's Thesis, MIT, Feb 1995, Program in Media Arts
| |
| </ref> [[part-of-speech tagging]], musical score following,<ref>B. Pardo and W. Birmingham. [http://www.cs.northwestern.edu/~pardo/publications/pardo-birmingham-aaai-05.pdf Modeling Form for On-line Following of Musical Performances]. AAAI-05 Proc., July 2005. | |
| </ref> [[partial discharge]]s<ref>Satish L, Gururaj BI (April 2003). "[http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=212242 Use of hidden Markov models for partial discharge pattern classification]". ''IEEE Transactions on Dielectrics and Electrical Insulation''.
| |
| </ref> and [[bioinformatics]].
| |
| | |
| A hidden Markov model can be considered a generalization of a [[mixture model]] where the hidden variables (or [[latent variables]]), which control the mixture component to be selected for each observation, are related through a Markov process rather than independent of each other.
| |
| | |
| == Description in terms of urns ==
| |
| | |
| [[Image:HiddenMarkovModel.svg|right|thumb|300px|
| |
| Figure 1. Probabilistic parameters of a hidden Markov model (example)<br>
| |
| ''x'' — states<br>
| |
| ''y'' — possible observations<br>
| |
| ''a'' — state transition probabilities<br>
| |
| ''b'' — output probabilities]]
| |
| In its discrete form, a hidden Markov process can be visualized as a generalization of the [[Urn problem]] (where each item from the urn is returned to the original urn before the next step).<ref>{{cite journal |author=[[Lawrence Rabiner|Lawrence R. Rabiner]] |title=A tutorial on Hidden Markov Models and selected applications in speech recognition |journal=Proceedings of the [[IEEE]] |volume=77 |issue=2 |pages=257–286 |date=February 1989 |url=http://www.ece.ucsb.edu/Faculty/Rabiner/ece259/Reprints/tutorial%20on%20hmm%20and%20applications.pdf |doi=10.1109/5.18626}} [http://www.cs.cornell.edu/courses/cs481/2004fa/rabiner.pdf]
| |
| </ref> Consider this example: in a room that is not visible to an observer, there is a genie. The room contains urns X1, X2, X3, ... each of which contains a known mix of balls, each ball labeled y1, y2, y3, ... . The genie chooses an urn in that room and randomly draws a ball from that urn. It then puts the ball onto a conveyor belt, where the observer can observe the sequence of the balls but not the sequence of urns from which they were drawn. The genie has some procedure to choose urns; the choice of the urn for the ''n''-th ball depends only upon a random number and the choice of the urn for the (''n'' − 1)-th ball. The choice of urn does not directly depend on the urns chosen before this single previous urn; therefore, this is called a [[Markov process]]. It can be described by the upper part of Figure 1.
| |
| | |
| The Markov process itself cannot be observed, and only the sequence of labeled balls can be observed, thus this arrangement is called a "hidden Markov process". This is illustrated by the lower part of the diagram shown in Figure 1, where one can see that balls y1, y2, y3, y4 can be drawn at each state. Even if the observer knows the composition of the urns and has just observed a sequence of three balls, ''e.g.'' y1, y2 and y3 on the conveyor belt, the observer still cannot be ''sure'' which urn (''i.e.'', at which state) the genie has drawn the third ball from. However, the observer can work out other information, such as the likelihood that the third ball came from each of the urns.
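The genie's procedure can be sketched as a short simulation. The urns, ball labels, and probabilities below are illustrative assumptions, not the values from Figure 1; only the list <code>drawn</code> would be visible to the observer.

```python
import random

# Illustrative parameters (assumed for demonstration): two urns, three ball labels.
# Each row of `trans` sums to 1, as does each row of `emit`.
states = ["Urn 1", "Urn 2"]
balls = ["y1", "y2", "y3"]
trans = {"Urn 1": [0.7, 0.3], "Urn 2": [0.4, 0.6]}           # P(next urn | current urn)
emit = {"Urn 1": [0.5, 0.4, 0.1], "Urn 2": [0.1, 0.3, 0.6]}  # P(ball | urn)

def simulate(n, start="Urn 1", seed=0):
    """Draw n balls; return the hidden urn sequence and the visible ball sequence."""
    rng = random.Random(seed)
    urns, drawn = [], []
    urn = start
    for _ in range(n):
        urns.append(urn)
        drawn.append(rng.choices(balls, weights=emit[urn])[0])  # observable output
        urn = rng.choices(states, weights=trans[urn])[0]        # hidden Markov step
    return urns, drawn

urns, drawn = simulate(10)
```

The observer sees only <code>drawn</code>; the sequence <code>urns</code> remains hidden, which is exactly the situation described above.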
| |
| | |
| == Architecture ==
| |
| The diagram below shows the general architecture of an instantiated HMM. Each oval shape represents a random variable that can adopt any of a number of values. The random variable ''x''(''t'') is the hidden state at time ''t'' (with the model from the above diagram, ''x''(''t'') ∈ { ''x''<sub>1</sub>, ''x''<sub>2</sub>, ''x''<sub>3</sub> }). The random variable ''y''(''t'') is the observation at time ''t'' (with ''y''(''t'') ∈ { ''y''<sub>1</sub>, ''y''<sub>2</sub>, ''y''<sub>3</sub>, ''y''<sub>4</sub> }). The arrows in the diagram (often called a [[Trellis (graph)|trellis diagram]]) denote conditional dependencies.
| |
| | |
| From the diagram, it is clear that the [[conditional probability distribution]] of the hidden variable ''x''(''t'') at time ''t'', given the values of the hidden variable ''x'' at all times, depends ''only'' on the value of the hidden variable ''x''(''t'' − 1): the values at time ''t'' − 2 and before have no influence. This is called the [[Markov property]]. Similarly, the value of the observed variable ''y''(''t'') only depends on the value of the hidden variable ''x''(''t'') (both at time ''t'').
| |
| | |
| In the standard type of hidden Markov model considered here, the state space of the hidden variables is discrete, while the observations themselves can either be discrete (typically generated from a [[categorical distribution]]) or continuous (typically from a [[Gaussian distribution]]). The parameters of a hidden Markov model are of two types, ''transition probabilities'' and ''emission probabilities'' (also known as ''output probabilities''). The transition probabilities control the way the hidden state at time <math>t</math> is chosen given the hidden state at time <math>t-1</math>.
| |
| | |
| The hidden state space is assumed to consist of <math>N</math> possible values, with the hidden variable at each time modeled as a categorical distribution. (See the section below on extensions for other possibilities.) This means that for each of the <math>N</math> possible states that a hidden variable at time <math>t</math> can be in, there is a transition probability from this state to each of the <math>N</math> possible states of the hidden variable at time <math>t+1</math>, for a total of <math>N^2</math> transition probabilities. Note that the set of transition probabilities for transitions from any given state must sum to 1. Thus, the <math>N \times N</math> matrix of transition probabilities is a [[Stochastic matrix|Markov matrix]]. Because any one transition probability can be determined once the others are known, there are a total of <math>N(N-1)</math> transition parameters.
| |
| | |
| In addition, for each of the <math>N</math> possible states, there is a set of emission probabilities governing the distribution of the observed variable at a particular time given the state of the hidden variable at that time. The size of this set depends on the nature of the observed variable. For example, if the observed variable is discrete with <math>M</math> possible values, governed by a [[categorical distribution]], there will be <math>M-1</math> separate parameters, for a total of <math>N(M-1)</math> emission parameters over all hidden states. On the other hand, if the observed variable is an <math>M</math>-dimensional vector distributed according to an arbitrary [[multivariate Gaussian distribution]], there will be <math>M</math> parameters controlling the [[mean]]s and <math>M(M+1)/2</math> parameters controlling the [[covariance matrix]], for a total of <math>N(M + \frac{M(M+1)}{2}) = NM(M+3)/2 = O(NM^2)</math> emission parameters. (In such a case, unless the value of <math>M</math> is small, it may be more practical to restrict the nature of the covariances between individual elements of the observation vector, e.g. by assuming that the elements are independent of each other, or less restrictively, are independent of all but a fixed number of adjacent elements.)
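The parameter counts above can be summarized in a small helper (a sketch of the arithmetic only; the function and its name are ours, not standard notation):

```python
def hmm_parameter_counts(N, M):
    """Free-parameter counts stated in the text.

    N : number of hidden states
    M : number of discrete output values, or the Gaussian dimension
    """
    transition = N * (N - 1)                         # each row of the matrix sums to 1
    categorical_emission = N * (M - 1)               # one categorical distribution per state
    gaussian_emission = N * (M + M * (M + 1) // 2)   # mean vector + full covariance per state
    return transition, categorical_emission, gaussian_emission
```

For example, <code>hmm_parameter_counts(3, 2)</code> gives 6 transition parameters, 3 categorical emission parameters, and 15 full-covariance Gaussian emission parameters.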
| |
| | |
| [[Image:hmm temporal bayesian net.svg|500px|center|Temporal evolution of a hidden Markov model]]
| |
| | |
| ==Inference==
| |
| [[Image:HMMsequence.svg|thumb|400px|The state transition and output probabilities of an HMM are indicated by the line opacity in the upper part of the diagram. Given that we have observed the output sequence in the lower part of the diagram, we may be interested in the most likely sequence of states that could have produced it. Based on the arrows that are present in the diagram, the following state sequences are candidates:<br>
| |
| 5 3 2 5 3 2<br>
| |
| 4 3 2 5 3 2<br>
| |
| 3 1 2 5 3 2<br>
| |
| We can find the most likely sequence by evaluating the joint probability of both the state sequence and the observations for each case (simply by multiplying the probability values, which here correspond to the opacities of the arrows involved). In general, this type of problem (i.e. finding the most likely explanation for an observation sequence) can be solved efficiently using the [[Viterbi algorithm]].]]
| |
| | |
| Several [[inference]] problems are associated with hidden Markov models, as outlined below.
| |
| | |
| === Probability of an observed sequence ===
| |
| The task is to compute, given the parameters of the model, the probability of a particular output sequence. This requires summation over all possible state sequences:
| |
| | |
| The probability of observing a sequence
| |
| : <math>Y=y(0), y(1),\dots,y(L-1)\,</math>
| |
| of length ''L'' is given by
| |
| :<math>P(Y)=\sum_{X}P(Y\mid X)P(X),\,</math>
| |
| where the sum runs over all possible hidden-node sequences
| |
| : <math>X=x(0), x(1), \dots, x(L-1).\,</math>
| |
| | |
| Applying the principle of [[dynamic programming]], this problem, too, can be handled efficiently using the [[forward algorithm]].
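A minimal sketch of the forward algorithm for a discrete-output HMM follows; the two-state, three-symbol parameter values are assumptions chosen for illustration. It computes the same quantity as the sum over all hidden sequences above, but in <math>O(LN^2)</math> time rather than <math>O(N^L)</math>:

```python
def forward_probability(obs, pi, A, B):
    """P(Y) via the forward algorithm.

    pi[i]   : P(x(0) = i)                (initial state distribution)
    A[i][j] : P(x(t) = j | x(t-1) = i)   (transition probabilities)
    B[i][o] : P(y(t) = o | x(t) = i)     (emission probabilities)
    """
    N = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(N)]  # alpha(i) = P(y(0), x(0)=i)
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(N)) * B[j][o]
                 for j in range(N)]                    # recursion over time
    return sum(alpha)

# Illustrative two-state, three-symbol model (assumed values)
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]]
p = forward_probability([0, 1, 2], pi, A, B)
```

For a sequence of length 3 the result agrees with the brute-force sum over all <math>2^3</math> hidden state sequences.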
| |
| | |
| === Probability of the latent variables ===
| |
| A number of related tasks ask about the probability of one or more of the latent variables, given the model's parameters and a sequence of observations <math>y(1),\dots,y(t).</math>
| |
| | |
| ==== Filtering ====
| |
| The task is to compute, given the model's parameters and a sequence of observations, the distribution over hidden states of the last latent variable at the end of the sequence, i.e. to compute <math>P(x(t)\ |\ y(1),\dots,y(t))</math>. This task is normally used when the sequence of latent variables is thought of as the underlying states that a process moves through at a sequence of points of time, with corresponding observations at each point in time. Then, it is natural to ask about the state of the process at the end.
| |
| | |
| This problem can be handled efficiently using the [[forward algorithm]].
| |
| | |
| ==== Smoothing ====
| |
| This is similar to filtering but asks about the distribution of a latent variable somewhere in the middle of a sequence, i.e. to compute <math>P(x(k)\ |\ y(1), \dots, y(t))</math> for some <math>k < t</math>. From the perspective described above, this can be thought of as the probability distribution over hidden states for a point in time ''k'' in the past, relative to time ''t''.
| |
| | |
| The [[forward-backward algorithm]] is an efficient method for computing the smoothed values for all hidden state variables.
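The forward-backward computation can be sketched as follows; the parameter values are illustrative assumptions. The function returns the smoothed posterior over hidden states at every position, so both filtering (last position) and smoothing (interior positions) can be read off its output:

```python
def forward_backward(obs, pi, A, B):
    """Smoothed posteriors P(x(k) | y(0..L-1)) for every position k."""
    N = len(pi)
    # forward pass: f[k][i] = P(y(0..k), x(k)=i)
    f = [[pi[i] * B[i][obs[0]] for i in range(N)]]
    for o in obs[1:]:
        f.append([sum(f[-1][i] * A[i][j] for i in range(N)) * B[j][o]
                  for j in range(N)])
    # backward pass: b[k][i] = P(y(k+1..L-1) | x(k)=i)
    b = [[1.0] * N]
    for o in reversed(obs[1:]):
        b.insert(0, [sum(A[i][j] * B[j][o] * b[0][j] for j in range(N))
                     for i in range(N)])
    # combine and normalize at each position
    posteriors = []
    for fk, bk in zip(f, b):
        w = [x * y for x, y in zip(fk, bk)]
        z = sum(w)
        posteriors.append([x / z for x in w])
    return posteriors

# Illustrative two-state model (assumed values)
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]]
post = forward_backward([0, 1, 2], pi, A, B)
```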
| |
| | |
| ==== Most likely explanation ====
| |
| The task, unlike the previous two, asks about the [[joint probability]] of the ''entire'' sequence of hidden states that generated a particular sequence of observations (see illustration on the right). This task is generally applicable when HMMs are applied to different sorts of problems from those for which the tasks of filtering and smoothing are applicable. An example is [[part-of-speech tagging]], where the hidden states represent the underlying [[part of speech|parts of speech]] corresponding to an observed sequence of words. In this case, what is of interest is the entire sequence of parts of speech, rather than simply the part of speech for a single word, as filtering or smoothing would compute.
| |
| | |
| This task requires finding a maximum over all possible state sequences, and can be solved efficiently by the [[Viterbi algorithm]].
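A sketch of the Viterbi algorithm for a discrete-output HMM; the two-state parameter values are illustrative assumptions. Dynamic programming over best-path probabilities with backpointers replaces the explicit maximum over all state sequences:

```python
def viterbi(obs, pi, A, B):
    """Most likely hidden state sequence and its joint probability with obs."""
    N = len(pi)
    delta = [pi[j] * B[j][obs[0]] for j in range(N)]  # best-path prob ending in j
    backpointers = []
    for o in obs[1:]:
        new_delta, back = [], []
        for j in range(N):
            scores = [delta[i] * A[i][j] for i in range(N)]
            i_best = scores.index(max(scores))
            back.append(i_best)                        # best predecessor of state j
            new_delta.append(scores[i_best] * B[j][o])
        backpointers.append(back)
        delta = new_delta
    # backtrack from the best final state
    path = [delta.index(max(delta))]
    for back in reversed(backpointers):
        path.insert(0, back[path[0]])
    return path, max(delta)

# Illustrative two-state model (assumed values)
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]]
path, p = viterbi([0, 1, 2], pi, A, B)
```

The returned joint probability equals the maximum over all hidden sequences, which for short sequences can be checked by enumeration.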
| |
| | |
| ===Statistical significance===
| |
| For some of the above problems, it may also be interesting to ask about [[statistical significance]]. What is the probability that a sequence drawn from some [[null distribution]] will have an HMM probability (in the case of the forward algorithm) or a maximum state sequence probability (in the case of the Viterbi algorithm) at least as large as that of a particular output sequence?<ref>{{cite pmid|19589158}}</ref> When an HMM is used to evaluate the relevance of a hypothesis for a particular output sequence, the statistical significance indicates the [[false positive rate]] associated with failing to reject the hypothesis for the output sequence.
| |
| | |
| == A concrete example ==
| |
| {{HMM example}}
| |
| | |
| ''A similar example is further elaborated in the [[Viterbi algorithm#Example|Viterbi algorithm]] page.''
| |
| | |
| ==Learning==
| |
| | |
| The parameter learning task in HMMs is to find, given an output sequence or a set of such sequences, the best set of state transition and output probabilities. The task is usually to derive the [[maximum likelihood]] estimate of the parameters of the HMM given the set of output sequences. No tractable algorithm is known for solving this problem exactly, but a local maximum likelihood can be derived efficiently using the [[Baum–Welch algorithm]] or the Baldi–Chauvin algorithm. The [[Baum–Welch algorithm]] is a special case of the [[expectation-maximization algorithm]].
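A single Baum–Welch (EM) update for a discrete-output HMM can be sketched as below. The starting parameters and observation sequence are illustrative assumptions, and the forward/backward passes are left unscaled, which is fine for short sequences but underflows on long ones (practical implementations rescale or work in log space). Repeating the step converges to a local maximum of the likelihood:

```python
def _forward_backward(obs, pi, A, B):
    """Unscaled forward and backward probability tables."""
    N = len(pi)
    f = [[pi[i] * B[i][obs[0]] for i in range(N)]]
    for o in obs[1:]:
        f.append([sum(f[-1][i] * A[i][j] for i in range(N)) * B[j][o]
                  for j in range(N)])
    b = [[1.0] * N]
    for o in reversed(obs[1:]):
        b.insert(0, [sum(A[i][j] * B[j][o] * b[0][j] for j in range(N))
                     for i in range(N)])
    return f, b

def likelihood(obs, pi, A, B):
    f, _ = _forward_backward(obs, pi, A, B)
    return sum(f[-1])

def baum_welch_step(obs, pi, A, B):
    """One EM update of (pi, A, B); never decreases the likelihood."""
    N, L, M = len(pi), len(obs), len(B[0])
    f, b = _forward_backward(obs, pi, A, B)
    Z = sum(f[-1])  # P(Y) under the current parameters
    # E step: state posteriors gamma and transition posteriors xi
    gamma = [[f[k][i] * b[k][i] / Z for i in range(N)] for k in range(L)]
    xi = [[[f[k][i] * A[i][j] * B[j][obs[k + 1]] * b[k + 1][j] / Z
            for j in range(N)] for i in range(N)] for k in range(L - 1)]
    # M step: re-estimate parameters from expected counts
    new_pi = gamma[0][:]
    new_A = [[sum(xi[k][i][j] for k in range(L - 1)) /
              sum(gamma[k][i] for k in range(L - 1))
              for j in range(N)] for i in range(N)]
    new_B = [[sum(gamma[k][i] for k in range(L) if obs[k] == v) /
              sum(gamma[k][i] for k in range(L))
              for v in range(M)] for i in range(N)]
    return new_pi, new_A, new_B

# Illustrative starting parameters and data (assumed values)
pi0 = [0.5, 0.5]
A0 = [[0.7, 0.3], [0.4, 0.6]]
B0 = [[0.6, 0.3, 0.1], [0.1, 0.3, 0.6]]
obs0 = [0, 0, 1, 2, 2, 1, 0, 2]
pi1, A1, B1 = baum_welch_step(obs0, pi0, A0, B0)
```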
| |
| | |
| ==Mathematical description==
| |
| | |
| ===General description===
| |
| A basic, non-Bayesian hidden Markov model can be described as follows:
| |
| | |
| {|
| |
| | || ||<math>N</math>|| ||<math>=</math>|| ||number of states
| |
| |-
| |
| | || ||<math>T</math>|| ||<math>=</math>|| ||number of observations
| |
| |-
| |
| | || ||<math>\theta_{i=1 \dots N}</math>|| ||<math>=</math>|| ||emission parameter for an observation associated with state <math>i</math>
| |
| |-
| |
| | || ||<math>\phi_{i=1 \dots N, j=1 \dots N}</math>|| ||<math>=</math>|| ||probability of transition from state <math>i</math> to state <math>j</math>
| |
| |-
| |
| | || ||<math>\boldsymbol\phi_{i=1 \dots N}</math>|| ||<math>=</math>|| ||<math>N</math>-dimensional vector, composed of <math>\phi_{i,1 \dots N}</math>; must sum to <math>1</math>
| |
| |-
| |
| | || ||<math>x_{t=1 \dots T}</math>|| ||<math>=</math>|| ||state of observation at time <math>t</math>
| |
| |-
| |
| | || ||<math>y_{t=1 \dots T}</math>|| ||<math>=</math>|| ||observation at time <math>t</math>
| |
| |-
| |
| | || ||<math>F(y|\theta)</math>|| ||<math>=</math>|| ||probability distribution of an observation, parametrized on <math>\theta</math>
| |
| |-
| |
| | || ||<math>x_{t=2 \dots T}</math>|| ||<math>\sim</math>|| ||<math>\operatorname{Categorical}(\boldsymbol\phi_{x_{t-1}})</math>
| |
| |-
| |
| | || ||<math>y_{t=1 \dots T}</math>|| ||<math>\sim</math>|| ||<math>F(\theta_{x_t})</math>
| |
| |}
| |
| | |
| Note that, in the above model (and also the one below), the prior distribution of the initial state <math>x_1</math> is not specified. Typical learning models correspond to assuming a discrete uniform distribution over possible states (i.e. no particular prior distribution is assumed).
| |
| | |
| In a Bayesian setting, all parameters are associated with random variables, as follows:
| |
| | |
| {|
| |
| | || ||<math>N,T</math>|| ||<math>=</math>|| ||as above
| |
| |-
| |
| | || ||<math>\theta_{i=1 \dots N}, \phi_{i=1 \dots N, j=1 \dots N}, \boldsymbol\phi_{i=1 \dots N}</math>|| ||<math>=</math>|| ||as above
| |
| |-
| |
| | || ||<math>x_{t=1 \dots T}, y_{t=1 \dots T}, F(y|\theta)</math>|| ||<math>=</math>|| ||as above
| |
| |-
| |
| | || ||<math>\alpha</math>|| ||<math>=</math>|| ||shared hyperparameter for emission parameters
| |
| |-
| |
| | || ||<math>\beta</math>|| ||<math>=</math>|| ||shared hyperparameter for transition parameters
| |
| |-
| |
| | || ||<math>H(\theta|\alpha)</math>|| ||<math>=</math>|| ||prior probability distribution of emission parameters, parametrized on <math>\alpha</math>
| |
| |-
| |
| | || ||<math>\theta_{i=1 \dots N}</math>|| ||<math>\sim</math>|| ||<math>H(\alpha)</math>
| |
| |-
| |
| | || ||<math>\boldsymbol\phi_{i=1 \dots N}</math>|| ||<math>\sim</math>|| ||<math>\operatorname{Symmetric-Dirichlet}_N(\beta)</math>
| |
| |-
| |
| | || ||<math>x_{t=2 \dots T}</math>|| ||<math>\sim</math>|| ||<math>\operatorname{Categorical}(\boldsymbol\phi_{x_{t-1}})</math>
| |
| |-
| |
| | || ||<math>y_{t=1 \dots T}</math>|| ||<math>\sim</math>|| ||<math>F(\theta_{x_t})</math>
| |
| |}
| |
| | |
| These characterizations use <math>F</math> and <math>H</math> to describe arbitrary distributions over observations and parameters, respectively. Typically <math>H</math> will be the [[conjugate prior]] of <math>F</math>. The two most common choices of <math>F</math> are [[Gaussian distribution|Gaussian]] and [[categorical distribution|categorical]]; see below.
| |
| | |
| ===Compared with a simple mixture model===
| |
| As mentioned above, the distribution of each observation in a hidden Markov model is a [[mixture density]], with the states of the HMM corresponding to mixture components. It is useful to compare the above characterizations for an HMM with the corresponding characterizations of a [[mixture model]], using the same notation.
| |
| | |
| A non-Bayesian mixture model:
| |
| | |
| {|
| |
| | || ||<math>N</math>|| ||<math>=</math>|| ||number of mixture components
| |
| |-
| |
| | || ||<math>T</math>|| ||<math>=</math>|| ||number of observations
| |
| |-
| |
| | || ||<math>\theta_{i=1 \dots N}</math>|| ||<math>=</math>|| ||parameter of distribution of observation associated with component <math>i</math>
| |
| |-
| |
| | || ||<math>\phi_{i=1 \dots N}</math>|| ||<math>=</math>|| ||mixture weight, i.e. prior probability of component <math>i</math>
| |
| |-
| |
| | || ||<math>\boldsymbol\phi</math>|| ||<math>=</math>|| ||<math>N</math>-dimensional vector, composed of <math>\phi_{1 \dots N}</math>; must sum to <math>1</math>
| |
| |-
| |
| | || ||<math>x_{t=1 \dots T}</math>|| ||<math>=</math>|| ||component of observation <math>t</math>
| |
| |-
| |
| | || ||<math>y_{t=1 \dots T}</math>|| ||<math>=</math>|| ||observation <math>t</math>
| |
| |-
| |
| | || ||<math>F(y|\theta)</math>|| ||<math>=</math>|| ||probability distribution of an observation, parametrized on <math>\theta</math>
| |
| |-
| |
| | || ||<math>x_{t=1 \dots T}</math>|| ||<math>\sim</math>|| ||<math>\operatorname{Categorical}(\boldsymbol\phi)</math>
| |
| |-
| |
| | || ||<math>y_{t=1 \dots T}</math>|| ||<math>\sim</math>|| ||<math>F(\theta_{x_t})</math>
| |
| |}
| |
| | |
| A Bayesian mixture model:
| |
| | |
| {|
| |
| | || ||<math>N,T</math>|| ||<math>=</math>|| ||as above
| |
| |-
| |
| | || ||<math>\theta_{i=1 \dots N}, \phi_{i=1 \dots N}, \boldsymbol\phi</math>|| ||<math>=</math>|| ||as above
| |
| |-
| |
| | || ||<math>x_{t=1 \dots T}, y_{t=1 \dots T}, F(y|\theta)</math>|| ||<math>=</math>|| ||as above
| |
| |-
| |
| | || ||<math>\alpha</math>|| ||<math>=</math>|| ||shared hyperparameter for component parameters
| |
| |-
| |
| | || ||<math>\beta</math>|| ||<math>=</math>|| ||shared hyperparameter for mixture weights
| |
| |-
| |
| | || ||<math>H(\theta|\alpha)</math>|| ||<math>=</math>|| ||prior probability distribution of component parameters, parametrized on <math>\alpha</math>
| |
| |-
| |
| | || ||<math>\theta_{i=1 \dots N}</math>|| ||<math>\sim</math>|| ||<math>H(\alpha)</math>
| |
| |-
| |
| | || ||<math>\boldsymbol\phi</math>|| ||<math>\sim</math>|| ||<math>\operatorname{Symmetric-Dirichlet}_N(\beta)</math>
| |
| |-
| |
| | || ||<math>x_{t=1 \dots T}</math>|| ||<math>\sim</math>|| ||<math>\operatorname{Categorical}(\boldsymbol\phi)</math>
| |
| |-
| |
| | || ||<math>y_{t=1 \dots T}</math>|| ||<math>\sim</math>|| ||<math>F(\theta_{x_t})</math>
| |
| |}
| |
| | |
| ===Examples===
| |
| The following mathematical descriptions are fully written out and explained, for ease of implementation.
| |
| | |
| A typical non-Bayesian HMM with Gaussian observations looks like this:
| |
| | |
| {|
| |
| | || ||<math>N</math>|| ||<math>=</math>|| ||number of states
| |
| |-
| |
| | || ||<math>T</math>|| ||<math>=</math>|| ||number of observations
| |
| |-
| |
| | || ||<math>\phi_{i=1 \dots N, j=1 \dots N}</math>|| ||<math>=</math>|| ||probability of transition from state <math>i</math> to state <math>j</math>
| |
| |-
| |
| | || ||<math>\boldsymbol\phi_{i=1 \dots N}</math>|| ||<math>=</math>|| ||<math>N</math>-dimensional vector, composed of <math>\phi_{i,1 \dots N}</math>; must sum to <math>1</math>
| |
| |-
| |
| | || ||<math>\mu_{i=1 \dots N}</math>|| ||<math>=</math>|| ||mean of observations associated with state <math>i</math>
| |
| |-
| |
| | || ||<math>\sigma^2_{i=1 \dots N}</math>|| ||<math>=</math>|| ||variance of observations associated with state <math>i</math>
| |
| |-
| |
| | || ||<math>x_{t=1 \dots T}</math>|| ||<math>=</math>|| ||state of observation at time <math>t</math>
| |
| |-
| |
| | || ||<math>y_{t=1 \dots T}</math>|| ||<math>=</math>|| ||observation at time <math>t</math>
| |
| |-
| |
| | || ||<math>x_{t=2 \dots T}</math>|| ||<math>\sim</math>|| ||<math>\operatorname{Categorical}(\boldsymbol\phi_{x_{t-1}})</math>
| |
| |-
| |
| | || ||<math>y_{t=1 \dots T}</math>|| ||<math>\sim</math>|| ||<math>\mathcal{N}(\mu_{x_t}, \sigma_{x_t}^2)</math>
| |
| |}
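The generative process just written out can be sketched directly; the two-state transition matrix, means, and variances below are illustrative assumptions:

```python
import random

# Illustrative two-state parameters (assumed values)
phi = [[0.9, 0.1], [0.2, 0.8]]   # transition matrix; each row sums to 1
mu = [0.0, 5.0]                  # per-state means
sigma2 = [1.0, 0.5]              # per-state variances

def sample(T, x1=0, seed=0):
    """Generate T observations from the model above, starting in state x1."""
    rng = random.Random(seed)
    xs, ys = [x1], []
    for t in range(T):
        x = xs[-1]
        ys.append(rng.gauss(mu[x], sigma2[x] ** 0.5))          # y_t ~ N(mu_{x_t}, sigma_{x_t}^2)
        if t < T - 1:
            xs.append(rng.choices([0, 1], weights=phi[x])[0])  # x_{t+1} ~ Categorical(phi_{x_t})
    return xs, ys

xs, ys = sample(20)
```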
| |
| | |
| A typical Bayesian HMM with Gaussian observations looks like this:
| |
| | |
| {|
| |
| | || ||<math>N</math>|| ||<math>=</math>|| ||number of states
| |
| |-
| |
| | || ||<math>T</math>|| ||<math>=</math>|| ||number of observations
| |
| |-
| |
| | || ||<math>\phi_{i=1 \dots N, j=1 \dots N}</math>|| ||<math>=</math>|| ||probability of transition from state <math>i</math> to state <math>j</math>
| |
| |-
| |
| | || ||<math>\boldsymbol\phi_{i=1 \dots N}</math>|| ||<math>=</math>|| ||<math>N</math>-dimensional vector, composed of <math>\phi_{i,1 \dots N}</math>; must sum to <math>1</math>
| |
| |-
| |
| | || ||<math>\mu_{i=1 \dots N}</math>|| ||<math>=</math>|| ||mean of observations associated with state <math>i</math>
| |
| |-
| |
| | || ||<math>\sigma^2_{i=1 \dots N}</math>|| ||<math>=</math>|| ||variance of observations associated with state <math>i</math>
| |
| |-
| |
| | || ||<math>x_{t=1 \dots T}</math>|| ||<math>=</math>|| ||state of observation at time <math>t</math>
| |
| |-
| |
| | || ||<math>y_{t=1 \dots T}</math>|| ||<math>=</math>|| ||observation at time <math>t</math>
| |
| |-
| |
| | || ||<math>\beta</math>|| ||<math>=</math>|| ||concentration hyperparameter controlling the density of the transition matrix
| |
| |-
| |
| | || ||<math>\mu_0, \lambda</math>|| ||<math>=</math>|| ||shared hyperparameters of the means for each state
| |
| |-
| |
| | || ||<math>\nu, \sigma_0^2</math>|| ||<math>=</math>|| ||shared hyperparameters of the variances for each state
| |
| |-
| |
| | || ||<math>\boldsymbol\phi_{i=1 \dots N}</math>|| ||<math>\sim</math>|| ||<math>\operatorname{Symmetric-Dirichlet}_N(\beta)</math>
| |
| |-
| |
| | || ||<math>x_{t=2 \dots T}</math>|| ||<math>\sim</math>|| ||<math>\operatorname{Categorical}(\boldsymbol\phi_{x_{t-1}})</math>
| |
| |-
| |
| | || ||<math>\mu_{i=1 \dots N}</math>|| ||<math>\sim</math>|| ||<math>\mathcal{N}(\mu_0, \lambda\sigma_i^2)</math>
| |
| |-
| |
| | || ||<math>\sigma_{i=1 \dots N}^2</math>|| ||<math>\sim</math>|| ||<math>\operatorname{Inverse-Gamma}(\nu, \sigma_0^2)</math>
| |
| |-
| |
| | || ||<math>y_{t=1 \dots T}</math>|| ||<math>\sim</math>|| ||<math>\mathcal{N}(\mu_{x_t}, \sigma_{x_t}^2)</math>
| |
| |}
| |
| | |
| A typical non-Bayesian HMM with categorical observations looks like this:
| |
| | |
| {|
| |
| | || ||<math>N</math>|| ||<math>=</math>|| ||number of states
| |
| |-
| |
| | || ||<math>T</math>|| ||<math>=</math>|| ||number of observations
| |
| |-
| |
| | || ||<math>\phi_{i=1 \dots N, j=1 \dots N}</math>|| ||<math>=</math>|| ||probability of transition from state <math>i</math> to state <math>j</math>
| |
| |-
| |
| | || ||<math>\boldsymbol\phi_{i=1 \dots N}</math>|| ||<math>=</math>|| ||<math>N</math>-dimensional vector, composed of <math>\phi_{i,1 \dots N}</math>; must sum to <math>1</math>
| |
| |-
| |
| | || ||<math>V</math>|| ||<math>=</math>|| ||dimension of categorical observations, e.g. size of word vocabulary
| |
| |-
| |
| | || ||<math>\theta_{i=1 \dots N, j=1 \dots V}</math>|| ||<math>=</math>|| ||probability for state <math>i</math> of observing the <math>j</math>th item
| |
| |-
| |
| | || ||<math>\boldsymbol\theta_{i=1 \dots N}</math>|| ||<math>=</math>|| ||<math>V</math>-dimensional vector, composed of <math>\theta_{i,1 \dots V}</math>; must sum to <math>1</math>
| |
| |-
| |
| | || ||<math>x_{t=1 \dots T}</math>|| ||<math>=</math>|| ||state of observation at time <math>t</math>
| |
| |-
| |
| | || ||<math>y_{t=1 \dots T}</math>|| ||<math>=</math>|| ||observation at time <math>t</math>
| |
| |-
| |
| | || ||<math>x_{t=2 \dots T}</math>|| ||<math>\sim</math>|| ||<math>\operatorname{Categorical}(\boldsymbol\phi_{x_{t-1}})</math>
| |
| |-
| |
| | || ||<math>y_{t=1 \dots T}</math>|| ||<math>\sim</math>|| ||<math>\text{Categorical}(\boldsymbol\theta_{x_t})</math>
| |
| |}

A typical Bayesian HMM with categorical observations looks like this:

{|
| || ||<math>N</math>|| ||<math>=</math>|| ||number of states
|-
| || ||<math>T</math>|| ||<math>=</math>|| ||number of observations
|-
| || ||<math>\phi_{i=1 \dots N, j=1 \dots N}</math>|| ||<math>=</math>|| ||probability of transition from state <math>i</math> to state <math>j</math>
|-
| || ||<math>\boldsymbol\phi_{i=1 \dots N}</math>|| ||<math>=</math>|| ||<math>N</math>-dimensional vector, composed of <math>\phi_{i,1 \dots N}</math>; must sum to <math>1</math>
|-
| || ||<math>V</math>|| ||<math>=</math>|| ||dimension of categorical observations, e.g. size of word vocabulary
|-
| || ||<math>\theta_{i=1 \dots N, j=1 \dots V}</math>|| ||<math>=</math>|| ||probability for state <math>i</math> of observing the <math>j</math>th item
|-
| || ||<math>\boldsymbol\theta_{i=1 \dots N}</math>|| ||<math>=</math>|| ||<math>V</math>-dimensional vector, composed of <math>\theta_{i,1 \dots V}</math>; must sum to <math>1</math>
|-
| || ||<math>x_{t=1 \dots T}</math>|| ||<math>=</math>|| ||state of observation at time <math>t</math>
|-
| || ||<math>y_{t=1 \dots T}</math>|| ||<math>=</math>|| ||observation at time <math>t</math>
|-
| || ||<math>\alpha</math>|| ||<math>=</math>|| ||shared concentration hyperparameter of <math>\boldsymbol\theta</math> for each state
|-
| || ||<math>\beta</math>|| ||<math>=</math>|| ||concentration hyperparameter controlling the density of the transition matrix
|-
| || ||<math>\boldsymbol\phi_{i=1 \dots N}</math>|| ||<math>\sim</math>|| ||<math>\operatorname{Symmetric-Dirichlet}_N(\beta)</math>
|-
| || ||<math>\boldsymbol\theta_{i=1 \dots N}</math>|| ||<math>\sim</math>|| ||<math>\operatorname{Symmetric-Dirichlet}_V(\alpha)</math>
|-
| || ||<math>x_{t=2 \dots T}</math>|| ||<math>\sim</math>|| ||<math>\operatorname{Categorical}(\boldsymbol\phi_{x_{t-1}})</math>
|-
| || ||<math>y_{t=1 \dots T}</math>|| ||<math>\sim</math>|| ||<math>\operatorname{Categorical}(\boldsymbol\theta_{x_t})</math>
|}

Note that in the above Bayesian characterization, <math>\beta</math> (a [[concentration parameter]]) controls the density of the transition matrix. That is, with a high value of <math>\beta</math> (significantly above 1), the probabilities controlling the transition out of a particular state will all be similar, meaning there will be a significant probability of transitioning to any of the other states. In other words, the path followed by the Markov chain of hidden states will be highly random. With a low value of <math>\beta</math> (significantly below 1), only a small number of the possible transitions out of a given state will have significant probability, meaning that the path followed by the hidden states will be somewhat predictable.
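
The effect described above can be seen by sampling rows of a transition matrix from a symmetric Dirichlet distribution at two different concentrations. This is a sketch using NumPy; the particular values 5.0 and 0.1 are arbitrary illustrations of "significantly above 1" and "significantly below 1":

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5  # number of states

# beta significantly above 1: each transition vector (one row per source
# state) is dense -- mass is spread nearly evenly over all N destinations.
dense_rows = rng.dirichlet(np.full(N, 5.0), size=N)

# beta significantly below 1: each transition vector is sparse -- almost
# all mass falls on one or two destinations, so the hidden-state path
# is more predictable.
sparse_rows = rng.dirichlet(np.full(N, 0.1), size=N)

# The largest entry per row is a rough measure of sparsity.
dense_peak = dense_rows.max(axis=1).mean()
sparse_peak = sparse_rows.max(axis=1).mean()
```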
===A two-level Bayesian HMM===
An alternative to the above Bayesian example would be to add another level of prior parameters for the transition matrix. That is, replace the lines

{|
|-
| || ||<math>\beta</math>|| ||<math>=</math>|| ||concentration hyperparameter controlling the density of the transition matrix
|-
| || ||<math>\boldsymbol\phi_{i=1 \dots N}</math>|| ||<math>\sim</math>|| ||<math>\operatorname{Symmetric-Dirichlet}_N(\beta)</math>
|}

with the following:

{|
|-
| || ||<math>\gamma</math>|| ||<math>=</math>|| ||concentration hyperparameter controlling how many states are intrinsically likely
|-
| || ||<math>\beta</math>|| ||<math>=</math>|| ||concentration hyperparameter controlling the density of the transition matrix
|-
| || ||<math>\boldsymbol\eta</math>|| ||<math>=</math>|| ||<math>N</math>-dimensional vector of probabilities, specifying the intrinsic probability of a given state
|-
| || ||<math>\boldsymbol\eta</math>|| ||<math>\sim</math>|| ||<math>\operatorname{Symmetric-Dirichlet}_N(\gamma)</math>
|-
| || ||<math>\boldsymbol\phi_{i=1 \dots N}</math>|| ||<math>\sim</math>|| ||<math>\operatorname{Dirichlet}_N(\beta N \boldsymbol\eta)</math>
|}

What this means is the following:
# <math>\boldsymbol\eta</math> is a [[probability distribution]] over states, specifying which states are inherently likely. The greater the probability of a given state in this vector, the more likely is a transition to that state (regardless of the starting state).
# <math>\gamma</math> controls the density of <math>\boldsymbol\eta</math>. Values significantly above 1 cause a dense vector where all states will have similar [[prior probability|prior probabilities]]. Values significantly below 1 cause a sparse vector where only a few states are inherently likely (have prior probabilities significantly above 0).
# <math>\beta</math> controls the density of the transition matrix, or more specifically, the density of the ''N'' different probability vectors <math>\boldsymbol\phi_{i=1 \dots N}</math> specifying the probability of transitions out of state ''i'' to any other state.

Imagine that the value of <math>\beta</math> is significantly above 1. Then the different <math>\boldsymbol\phi</math> vectors will be dense, i.e. the probability mass will be spread out fairly evenly over all states. However, to the extent that this mass is unevenly spread, <math>\boldsymbol\eta</math> controls which states are likely to get more mass than others.

Now, imagine instead that <math>\beta</math> is significantly below 1. This will make the <math>\boldsymbol\phi</math> vectors sparse, i.e. almost all the probability mass is distributed over a small number of states, and for the rest, a transition to that state will be very unlikely. Notice that there are different <math>\boldsymbol\phi</math> vectors for each starting state, and so even if all the vectors are sparse, different vectors may distribute the mass to different ending states. However, for all of the vectors, <math>\boldsymbol\eta</math> controls which ending states are likely to get mass assigned to them. For example, if <math>\beta</math> is 0.1, then each <math>\boldsymbol\phi</math> will be sparse and, for any given starting state ''i'', the set of states <math>\mathbf{J}_i</math> to which transitions are likely to occur will be very small, typically having only one or two members. Now, if the probabilities in <math>\boldsymbol\eta</math> are all the same (or equivalently, one of the above models without <math>\boldsymbol\eta</math> is used), then for different ''i'', there will be different states in the corresponding <math>\mathbf{J}_i</math>, so that all states are equally likely to occur in any given <math>\mathbf{J}_i</math>. On the other hand, if the values in <math>\boldsymbol\eta</math> are unbalanced, so that one state has a much higher probability than others, almost all <math>\mathbf{J}_i</math> will contain this state; hence, regardless of the starting state, transitions will nearly always occur to this given state.

Hence, a two-level model such as just described allows independent control over (1) the overall density of the transition matrix, and (2) the density of states to which transitions are likely (i.e. the density of the prior distribution of states in any particular hidden variable <math>x_i</math>). In both cases this is done while still assuming ignorance over which particular states are more likely than others. If it is desired to inject this information into the model, the probability vector <math>\boldsymbol\eta</math> can be directly specified; or, if there is less certainty about these relative probabilities, a non-symmetric [[Dirichlet distribution]] can be used as the prior distribution over <math>\boldsymbol\eta</math>. That is, instead of using a symmetric Dirichlet distribution with a single parameter <math>\gamma</math> (or equivalently, a general Dirichlet with a vector all of whose values are equal to <math>\gamma</math>), use a general Dirichlet with values that are variously greater or less than <math>\gamma</math>, according to which state is more or less preferred.
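
Sampling from this two-level prior can be sketched as follows (NumPy; the particular values of <math>\gamma</math> and <math>\beta</math> are illustrative assumptions only):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5        # number of states
gamma = 1.0  # concentration controlling how many states are intrinsically likely
beta = 0.5   # concentration controlling the density of each transition vector

# eta ~ Symmetric-Dirichlet_N(gamma): intrinsic state probabilities
eta = rng.dirichlet(np.full(N, gamma))

# phi_i ~ Dirichlet_N(beta * N * eta), one transition vector per source state i
phi = np.array([rng.dirichlet(beta * N * eta) for _ in range(N)])

# With beta below 1, each row of phi tends to be sparse, while eta biases
# which destination states receive the mass across all rows.
```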
==Applications==
HMMs can be applied in many fields where the goal is to recover a data sequence that is not immediately observable (but other data that depends on the sequence is). Applications include:
* [[Cryptanalysis]]
* [[Speech recognition]]
* [[Speech synthesis]]
* [[Part-of-speech tagging]]
* [[Machine translation]]
* [[Partial discharge]]
* [[Gene prediction]]
* [[sequence alignment|Alignment of bio-sequences]]
* [[Time series|Time series analysis]]
* [[Human Activity recognition]]<ref>Piyathilaka, L.; Kodagoda, S., "[http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6566433&isnumber=6566328 Gaussian mixture based HMM for human daily activity recognition using 3D skeleton features]," Industrial Electronics and Applications (ICIEA), 2013 8th IEEE Conference on, pp. 567–572, 19–21 June 2013. doi: 10.1109/ICIEA.2013.6566433</ref>
* [[Protein folding]]<ref>{{cite doi|10.1126/science.1207598}}</ref>
* Metamorphic virus detection<ref>{{cite doi|10.1007/s11416-006-0028-7}}</ref>
* DNA motif discovery<ref>{{cite doi|10.1093/nar/gkt574}}</ref>

== History ==
The forward and backward recursions used in HMMs, as well as the computation of marginal smoothing probabilities, were first described by [[Ruslan L. Stratonovich]] in 1960<ref name="Stratonovich1960"/> (pages 160–162) and in the late 1950s in his papers in Russian. Hidden Markov models were later described in a series of statistical papers by Leonard E. Baum and other authors in the second half of the 1960s. One of the first applications of HMMs was [[speech recognition]], starting in the mid-1970s.<ref>{{cite doi|10.1109/TASSP.1975.1162650}}</ref><ref>{{cite doi|10.1109/TIT.1975.1055384}}</ref><ref>{{cite book |author=[[Xuedong Huang]], M. Jack, and Y. Ariki |title=Hidden Markov Models for Speech Recognition |publisher=Edinburgh University Press |year=1990|isbn=0-7486-0162-7 }}</ref><ref>{{cite book |author=[[Xuedong Huang]], Alex Acero, and Hsiao-Wuen Hon |title=Spoken Language Processing |publisher=Prentice Hall |year=2001|isbn=0-13-022616-5}}</ref>

In the second half of the 1980s, HMMs began to be applied to the analysis of biological sequences,<ref>{{cite journal|doi=10.1016/0022-2836(86)90289-5|author=M. Bishop and E. Thompson|title=Maximum Likelihood Alignment of DNA Sequences|journal=Journal of Molecular Biology|volume=190|issue=2|pages=159–165|year=1986|pmid=3641921}}</ref> in particular [[DNA]]. Since then, they have become ubiquitous in the field of [[bioinformatics]].<ref>{{Cite book
| author = Richard Durbin, Sean R. Eddy, [[Anders Krogh]], Graeme Mitchison
| title = Biological Sequence Analysis: Probabilistic Models of Proteins and Nucleic Acids
| publisher = [[Cambridge University Press]]
| year = 1999
| isbn = 0-521-62971-3
}}</ref>
==Types==
Hidden Markov models can model complex [[Markov]] processes where the states emit observations according to some probability distribution. One such example is the [[Gaussian]] distribution; in such a hidden Markov model, the output of each state is represented by a Gaussian distribution.

Moreover, a hidden Markov model can represent even more complex behavior when the output of the states is a mixture of two or more Gaussians, in which case the [[probability]] of generating an observation is the product of the probability of first selecting one of the Gaussians and the probability of generating that observation from that Gaussian.
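
The mixture-emission computation just described can be sketched as follows (a toy illustration; the mixture weights, means, and variances are made-up values):

```python
import math

def gaussian_pdf(y, mean, var):
    """Density of a univariate Gaussian with the given mean and variance."""
    return math.exp(-(y - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def mixture_emission_density(y, weights, means, variances):
    """Emission density of y for a state whose output is a Gaussian mixture:
    a sum, over components, of P(select component) * P(y | component)."""
    return sum(w * gaussian_pdf(y, m, v)
               for w, m, v in zip(weights, means, variances))

# A state whose output is a mixture of two Gaussians (made-up parameters)
density = mixture_emission_density(1.0, weights=[0.6, 0.4],
                                   means=[0.0, 3.0], variances=[1.0, 2.0])
```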
==Extensions==
In the hidden Markov models considered above, the state space of the hidden variables is discrete, while the observations themselves can either be discrete (typically generated from a [[categorical distribution]]) or continuous (typically from a [[Gaussian distribution]]). Hidden Markov models can also be generalized to allow continuous state spaces. Examples of such models are those where the Markov process over hidden variables is a [[linear dynamical system]], with a linear relationship among related variables and where all hidden and observed variables follow a [[Gaussian distribution]]. In simple cases, such as the linear dynamical system just mentioned, exact inference is tractable (in this case, using the [[Kalman filter]]); however, in general, exact inference in HMMs with continuous latent variables is infeasible, and approximate methods must be used, such as the [[extended Kalman filter]] or the [[particle filter]].
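
For the linear dynamical system just mentioned, exact inference reduces to the Kalman filter recursions, the continuous-state analogue of the HMM forward algorithm. A minimal one-dimensional sketch (all model parameters here are illustrative assumptions):

```python
def kalman_filter_1d(ys, a, q, c, r, mu0, p0):
    """Exact filtering for x_t = a*x_{t-1} + noise(var q), y_t = c*x_t + noise(var r).

    Returns the filtered means and variances of P(x_t | y_1..y_t).
    """
    mu, p = mu0, p0
    means, variances = [], []
    for y in ys:
        # Predict: push the previous posterior through the linear dynamics
        mu_pred = a * mu
        p_pred = a * a * p + q
        # Update: correct the prediction using the new observation
        k = p_pred * c / (c * c * p_pred + r)   # Kalman gain
        mu = mu_pred + k * (y - c * mu_pred)
        p = (1 - k * c) * p_pred
        means.append(mu)
        variances.append(p)
    return means, variances

means, variances = kalman_filter_1d([1.1, 0.9, 1.2], a=1.0, q=0.1,
                                    c=1.0, r=0.5, mu0=0.0, p0=1.0)
```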

Hidden Markov models are [[generative model]]s, in which the [[joint distribution]] of observations and hidden states, or equivalently both the [[prior distribution]] of hidden states (the ''transition probabilities'') and [[conditional distribution]] of observations given states (the ''emission probabilities''), is modeled. The above algorithms implicitly assume a [[Uniform distribution (continuous)|uniform]] prior distribution over the transition probabilities. However, it is also possible to create hidden Markov models with other types of prior distributions. An obvious candidate, given the categorical distribution of the transition probabilities, is the [[Dirichlet distribution]], which is the [[conjugate prior]] distribution of the categorical distribution. Typically, a symmetric Dirichlet distribution is chosen, reflecting ignorance about which states are inherently more likely than others. The single parameter of this distribution (termed the ''concentration parameter'') controls the relative density or sparseness of the resulting transition matrix. A choice of 1 yields a uniform distribution. Values greater than 1 produce a dense matrix, in which the transition probabilities between pairs of states are likely to be nearly equal. Values less than 1 result in a sparse matrix in which, for each given source state, only a small number of destination states have non-negligible transition probabilities.

It is also possible to use a two-level prior Dirichlet distribution, in which one Dirichlet distribution (the upper distribution) governs the parameters of another Dirichlet distribution (the lower distribution), which in turn governs the transition probabilities. The upper distribution governs the overall distribution of states, determining how likely each state is to occur; its concentration parameter determines the density or sparseness of states. Such a two-level prior distribution, where both concentration parameters are set to produce sparse distributions, might be useful for example in [[unsupervised learning|unsupervised]] [[part-of-speech tagging]], where some parts of speech occur much more commonly than others; learning algorithms that assume a uniform prior distribution generally perform poorly on this task. The parameters of models of this sort, with non-uniform prior distributions, can be learned using [[Gibbs sampling]] or extended versions of the [[expectation-maximization algorithm]].

An extension of the previously described hidden Markov models with [[Dirichlet distribution|Dirichlet]] priors uses a [[Dirichlet process]] in place of a Dirichlet distribution. This type of model allows for an unknown and potentially infinite number of states. It is common to use a two-level Dirichlet process, similar to the previously described model with two levels of Dirichlet distributions. Such a model is called a ''hierarchical Dirichlet process hidden Markov model'', or ''HDP-HMM'' for short. It was originally described under the name "Infinite Hidden Markov Model"<ref>Beal, Matthew J., Zoubin Ghahramani, and Carl Edward Rasmussen. "The infinite hidden Markov model." Advances in Neural Information Processing Systems 14 (2002): 577–584.</ref> and was further formalized by Teh et al.<ref>Teh, Yee Whye, et al. "Hierarchical Dirichlet processes." Journal of the American Statistical Association 101.476 (2006).</ref>

A different type of extension uses a [[discriminative model]] in place of the [[generative model]] of standard HMMs. This type of model directly models the conditional distribution of the hidden states given the observations, rather than modeling the joint distribution. An example of this model is the so-called ''[[maximum entropy Markov model]]'' (MEMM), which models the conditional distribution of the states using [[logistic regression]] (also known as a "[[Maximum entropy probability distribution|maximum entropy]] model"). The advantage of this type of model is that arbitrary features (i.e. functions) of the observations can be modeled, allowing domain-specific knowledge of the problem at hand to be injected into the model. Models of this sort are not limited to modeling direct dependencies between a hidden state and its associated observation; rather, features of nearby observations, of combinations of the associated observation and nearby observations, or in fact of arbitrary observations at any distance from a given hidden state can be included in the process used to determine the value of a hidden state. Furthermore, there is no need for these features to be [[statistically independent]] of each other, as would be the case if such features were used in a generative model. Finally, arbitrary features over pairs of adjacent hidden states can be used rather than simple transition probabilities. The disadvantages of such models are: (1) The types of prior distributions that can be placed on hidden states are severely limited; (2) It is not possible to predict the probability of seeing an arbitrary observation. This second limitation is often not an issue in practice, since many common uses of HMMs do not require such predictive probabilities.

A variant of the previously described discriminative model is the linear-chain [[conditional random field]]. This uses an undirected graphical model (also known as a [[Markov random field]]) rather than the directed graphical models of MEMMs and similar models. The advantage of this type of model is that it does not suffer from the so-called ''label bias'' problem of MEMMs, and thus may make more accurate predictions. The disadvantage is that training can be slower than for MEMMs.

Yet another variant is the ''factorial hidden Markov model'', which allows for a single observation to be conditioned on the corresponding hidden variables of a set of <math>K</math> independent Markov chains, rather than a single Markov chain. It is equivalent to a single HMM, with <math>N^K</math> states (assuming there are <math>N</math> states for each chain), and therefore, learning in such a model is difficult: for a sequence of length <math>T</math>, a straightforward Viterbi algorithm has complexity <math>O(N^{2K} \, T)</math>. To find an exact solution, a junction tree algorithm could be used, but it results in an <math>O(N^{K+1} \, K \, T)</math> complexity. In practice, approximate techniques, such as variational approaches, could be used.<ref>{{cite journal|last=Ghahramani|first=Zoubin|coauthors=Jordan, Michael I.|title=Factorial Hidden Markov Models|journal=Machine Learning|year=1997|volume=29|issue=2/3|pages=245–273|doi=10.1023/A:1007425814087}}</ref>

All of the above models can be extended to allow for more distant dependencies among hidden states, e.g. allowing for a given state to be dependent on the previous two or three states rather than a single previous state; i.e. the transition probabilities are extended to encompass sets of three or four adjacent states (or in general <math>K</math> adjacent states). The disadvantage of such models is that dynamic-programming algorithms for training them have an <math>O(N^K \, T)</math> running time, for <math>K</math> adjacent states and <math>T</math> total observations (i.e. a length-<math>T</math> Markov chain).
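
Equivalently, such a higher-order model can be flattened into an ordinary first-order HMM whose states are tuples of adjacent original states. A sketch for the second-order case (the toy transition table is an invented example):

```python
from itertools import product

def second_order_to_first_order(states, trans2):
    """Flatten a second-order chain P(x_t | x_{t-2}, x_{t-1}) into a
    first-order chain over pair-states (x_{t-1}, x_t).

    trans2[(i, j)][k] = P(x_t = k | x_{t-2} = i, x_{t-1} = j)
    """
    pair_states = list(product(states, repeat=2))
    trans1 = {}
    for (i, j) in pair_states:
        for k in states:
            # A pair-state (i, j) can only move to pair-states of the form (j, k)
            trans1[((i, j), (j, k))] = trans2[(i, j)][k]
    return pair_states, trans1

# Toy second-order chain over states {0, 1}: N = 2, K = 2, so N^K = 4 pair-states
trans2 = {(i, j): {0: 0.5, 1: 0.5} for i in (0, 1) for j in (0, 1)}
pair_states, trans1 = second_order_to_first_order([0, 1], trans2)
```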

Another recent extension is the ''triplet Markov model'',<ref name="TMM">[http://www.sciencedirect.com/science/article/pii/S1631073X02024627 Triplet Markov Chain], W. Pieczynski, Chaînes de Markov Triplet, Triplet Markov Chains, Comptes Rendus de l’Académie des Sciences – Mathématique, Série I, Vol. 335, No. 3, pp. 275–278, 2002.</ref> in which an auxiliary underlying process is added to model some data specificities. Many variants of this model have been proposed. A link has also been established between the ''theory of evidence'' and triplet Markov models,<ref name="TMMEV">[http://www.sciencedirect.com/science/article/pii/S0888613X06000375 Pr. Pieczynski], W. Pieczynski, Multisensor triplet Markov chains and theory of evidence, International Journal of Approximate Reasoning, Vol. 45, No. 1, pp. 1–16, 2007.</ref> which allows data to be fused in a Markovian context<ref name="JASP">[http://asp.eurasipjournals.com/content/pdf/1687-6180-2012-134.pdf Boudaren et al.], M. Y. Boudaren, E. Monfrini, W. Pieczynski, and A. Aissani, Dempster-Shafer fusion of multisensor signals in nonstationary Markovian context, EURASIP Journal on Advances in Signal Processing, No. 134, 2012.</ref> and nonstationary data to be modeled.<ref name="TSP">[http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=1468502&contentType=Journals+%26+Magazines&searchField%3DSearch_All%26queryText%3Dlanchantin+pieczynski Lanchantin et al.], P. Lanchantin and W. Pieczynski, Unsupervised restoration of hidden non stationary Markov chain using evidential priors, IEEE Trans. on Signal Processing, Vol. 53, No. 8, pp. 3091–3098, 2005.</ref><ref name="SPL">[http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=6244854&contentType=Journals+%26+Magazines&searchField%3DSearch_All%26queryText%3Dboudaren Boudaren et al.], M. Y. Boudaren, E. Monfrini, and W. Pieczynski, Unsupervised segmentation of random discrete data hidden with switching noise distributions, IEEE Signal Processing Letters, Vol. 19, No. 10, pp. 619–622, October 2012.</ref>

==See also==
{{Div col|cols=3}}
* [[Andrey Markov]]
* [[Baum–Welch algorithm]]
* [[Bayesian inference]]
* [[Bayesian programming]]
* [[Conditional random field]]
* [[Estimation theory]]
* [[HHpred / HHsearch]] free server and software for protein sequence searching
* [[HMMER]], a free hidden Markov model program for protein sequence analysis
* [[Hidden Bernoulli model]]
* [[Hidden semi-Markov model]]
* [[Hierarchical hidden Markov model]]
* [[Layered hidden Markov model]]
* [[Poisson hidden Markov model]]
* [[Sequential dynamical system]]
* [[Stochastic context-free grammar]]
* [[Time series|Time series analysis]]
* [[Variable-order Markov model]]
* [[Viterbi algorithm]]
{{Div col end}}

==References==
{{Reflist|30em}}

==External links==

===Concepts===
* Teif V. B. and K. Rippe (2010) Statistical–mechanical lattice models for protein–DNA binding in chromatin. ''J. Phys.: Condens. Matter'', '''22''', 414105, [[doi:10.1088/0953-8984/22/41/414105|http://iopscience.iop.org/0953-8984/22/41/414105]]
* [http://www.cs.sjsu.edu/~stamp/RUA/HMM.pdf A Revealing Introduction to Hidden Markov Models] by Mark Stamp, San Jose State University
* [http://www.ee.washington.edu/research/guptalab/publications/EMbookChenGupta2010.pdf Fitting HMMs with expectation-maximization – complete derivation]
* [http://www.tristanfletcher.co.uk/SAR%20HMM.pdf Switching Autoregressive Hidden Markov Model (SAR HMM)]
* [http://www.comp.leeds.ac.uk/roger/HiddenMarkovModels/html_dev/main.html A step-by-step tutorial on HMMs] ''(University of Leeds)''
* [http://www.cs.brown.edu/research/ai/dynamics/tutorial/Documents/HiddenMarkovModels.html Hidden Markov Models] ''(an exposition using basic mathematics)''
* [http://jedlik.phy.bme.hu/~gerjanos/HMM/node2.html Hidden Markov Models] ''(by Narada Warakagoda)''
* Hidden Markov Models: Fundamentals and Applications [http://www.eecis.udel.edu/~lliao/cis841s06/hmmtutorialpart1.pdf Part 1], [http://www.eecis.udel.edu/~lliao/cis841s06/hmmtutorialpart2.pdf Part 2] ''(by V. Petrushin)''
* Lecture on a Spreadsheet by Jason Eisner, [http://videolectures.net/hltss2010_eisner_plm/video/2/ Video] and [http://www.cs.jhu.edu/~jason/papers/eisner.hmm.xls interactive spreadsheet]
===Software===
* [http://karamanov.com/HMMdotEM HMMdotEM] General Discrete-State HMM Toolbox (released under a [[BSD_licenses#3-clause|3-clause BSD-like License]], currently only [[Matlab]])
* [http://www.cs.ubc.ca/~murphyk/Software/HMM/hmm.html Hidden Markov Model (HMM) Toolbox for Matlab] ''(by Kevin Murphy)''
* [http://htk.eng.cam.ac.uk/ Hidden Markov Model Toolkit (HTK)] ''(a portable toolkit for building and manipulating hidden Markov models)''
* [http://cran.r-project.org/web/packages/HMM/index.html Hidden Markov Model R-Package] to set up, apply and make inference with discrete-time and discrete-space hidden Markov models
* [http://birc.au.dk/~asand/hmmlib HMMlib] ''(an optimized library for work with general (discrete) hidden Markov models)''
* [http://birc.au.dk/~asand/parredhmmlib parredHMMlib] ''(a parallel implementation of the forward algorithm and the Viterbi algorithm; extremely fast for HMMs with small state spaces)''
* [http://birc.au.dk/software/zipHMM zipHMMlib] ''(a library for general (discrete) hidden Markov models, exploiting repetitions in the input sequence to greatly speed up the forward algorithm; implementations of the posterior decoding algorithm and the Viterbi algorithm are also provided)''
* [http://www.ghmm.org GHMM Library] ''(home page of the GHMM Library project)''
* [http://code.google.com/p/cl-hmm/ CL-HMM Library] ''(HMM Library for Common Lisp)''
* [http://jahmm.googlecode.com/ Jahmm Java Library] ''(general-purpose Java library)''
* [http://www.kanungo.com/software/software.html HMM and other statistical programs] ''(implementation in C by Tapas Kanungo)''
* [http://hackage.haskell.org/cgi-bin/hackage-scripts/package/hmm The hmm package] A [http://www.haskell.org Haskell] library for working with hidden Markov models
* [http://gt2k.cc.gatech.edu/ GT2K] Georgia Tech Gesture Toolkit (referred to as GT2K)
* [http://www.lwebzem.com/cgi-bin/courses/hidden_markov_model_online.cgi Hidden Markov Models] ''(online calculator for HMM Viterbi paths and probabilities, with examples and Perl source code)''
* [http://sourceforge.net/projects/cvhmm/ A discrete hidden Markov model class, based on OpenCV]
* [http://cran.r-project.org/web/packages/depmixS4/index.html depmixS4] R-Package (hidden Markov models of GLMs and other distributions in S4)
* [[MLPACK (C++ library)|MLPACK]] contains a C++ implementation of HMMs

{{Stochastic processes}}

[[Category:Bioinformatics]]
[[Category:Hidden Markov models| ]]
[[Category:Markov models]]