This semester I will be attending the doctoral course MA8702 – Advanced Modern Statistical Methods with the excellent Prof. Håvard Rue. It is a course about statistical models defined over sparse structures (chains and graphs). We will start with Hidden Markov Chains and then move on to Gaussian Markov Random Fields, Latent Gaussian Models and approximate inference with the Integrated Nested Laplace Approximation (INLA). All of these models are relevant to my research objective of developing sound latent models for recommender systems, and I am really happy to be taking this course with such a great teacher and researcher. So, I will try to cover some of the material of the course, starting from what we saw in the first lecture: exact recurrences for Hidden Markov Chains and dynamic programming. In other words, general equations for prediction, filtering, smoothing, sampling, mode and marginal likelihood calculation in a state-space model with latent variables. We will start by introducing the general model and specifying how to obtain the prediction and filtering equations.
- Markovian property: $\pi(x_t \mid x_{1:t-1}) = \pi(x_t \mid x_{t-1})$, with $y_{1:T} = (y_1, \dots, y_T)$ being observed and $x_{1:T} = (x_1, \dots, x_T)$ being latent, so $y_{1:t}$ is always known.
- If we know $x_t$, then no other variable will add any information to the conditional distribution of $y_t$.
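To make these two assumptions concrete, here is a minimal sketch of the discrete case (the dimensions $K$, $M$, $T$ and all parameter values below are my own illustration, not from the course material): a transition matrix encodes $\pi(x_t \mid x_{t-1})$, an emission matrix encodes $\pi(y_t \mid x_t)$, and sampling the chain uses nothing else.

```python
# Minimal discrete state-space model: K latent states, M observation
# symbols, chain of length T (all values are illustrative assumptions).
import numpy as np

rng = np.random.default_rng(0)

K, M, T = 3, 4, 10
pi0 = np.array([0.5, 0.3, 0.2])              # pi(x_1)
A = np.array([[0.8, 0.1, 0.1],               # A[i, j] = pi(x_{t+1}=j | x_t=i)
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
B = np.array([[0.7, 0.1, 0.1, 0.1],          # B[i, m] = pi(y_t=m | x_t=i)
              [0.1, 0.7, 0.1, 0.1],
              [0.1, 0.1, 0.4, 0.4]])

# Sampling respects exactly the two assumptions above: x_t depends only
# on x_{t-1} (Markov property) and y_t depends only on x_t.
x = np.empty(T, dtype=int)
y = np.empty(T, dtype=int)
x[0] = rng.choice(K, p=pi0)
y[0] = rng.choice(M, p=B[x[0]])
for t in range(1, T):
    x[t] = rng.choice(K, p=A[x[t - 1]])
    y[t] = rng.choice(M, p=B[x[t]])
```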
With this last property of the model we will demonstrate an important relationship between $y_t$ and the latent states $x_{1:T}$:

Claim 1: $\pi(y_t \mid x_{1:T}) = \pi(y_t \mid x_t)$ (in other words, for the conditional distribution of observation $y_t$, knowing the other latent states does not add any information once we know the latent state $x_t$).
- Proof: By the definition of conditional probability we have
$$\pi(y_t \mid x_{1:T}) = \frac{\pi(y_t, x_{1:T})}{\pi(x_{1:T})}.$$
Using the index $-t$ to indicate the set of indexes $\{1, \dots, T\}$ with the exception of $t$, we may rearrange the joint distribution as
$$\pi(y_t, x_{1:T}) = \pi(y_t \mid x_{-t}, x_t)\,\pi(x_{-t} \mid x_t)\,\pi(x_t).$$
Note that $x_{-t}$ may be taken out of the conditional distribution $\pi(y_t \mid x_{-t}, x_t)$ since, given $x_t$, $x_{-t}$ is irrelevant for the distribution of $y_t$, so $\pi(y_t \mid x_{-t}, x_t) = \pi(y_t \mid x_t)$. Putting it all together,
$$\pi(y_t \mid x_{1:T}) = \frac{\pi(y_t \mid x_t)\,\pi(x_{-t} \mid x_t)\,\pi(x_t)}{\pi(x_{1:T})} = \frac{\pi(y_t \mid x_t)\,\pi(x_{1:T})}{\pi(x_{1:T})} = \pi(y_t \mid x_t).$$
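As a quick numerical sanity check of Claim 1, the sketch below builds the full joint $\pi(x_{1:3}, y_{1:3})$ of a tiny binary chain by brute force (the parameter values are arbitrary choices of mine, just for illustration) and verifies that $\pi(y_2 \mid x_{1:3})$ depends on $x_2$ only.

```python
# Brute-force check of Claim 1 on a toy chain with T = 3, binary states
# and binary observations (parameter values are illustrative assumptions).
import itertools
import numpy as np

pi0 = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])   # A[i, j] = pi(x_{t+1}=j | x_t=i)
B = np.array([[0.9, 0.1], [0.2, 0.8]])   # B[i, m] = pi(y_t=m | x_t=i)

# Tabulate the joint pi(x_{1:3}, y_{1:3}) from the model factorization.
joint = {}
for xs in itertools.product([0, 1], repeat=3):
    for ys in itertools.product([0, 1], repeat=3):
        p = pi0[xs[0]] * A[xs[0], xs[1]] * A[xs[1], xs[2]]
        p *= B[xs[0], ys[0]] * B[xs[1], ys[1]] * B[xs[2], ys[2]]
        joint[xs, ys] = p

def cond_y2_given_x(y2, xs_fixed):
    """pi(y_2 = y2 | x_{1:3} = xs_fixed), computed by direct summation."""
    num = sum(p for (xs, ys), p in joint.items()
              if xs == xs_fixed and ys[1] == y2)
    den = sum(p for (xs, ys), p in joint.items() if xs == xs_fixed)
    return num / den

# pi(y_2 | x_1, x_2, x_3) coincides with pi(y_2 | x_2) = B[x_2, y_2].
for xs_fixed in itertools.product([0, 1], repeat=3):
    for y2 in (0, 1):
        assert np.isclose(cond_y2_given_x(y2, xs_fixed), B[xs_fixed[1], y2])
```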
We should say that we have two aims, to compute:

- Prediction(t): $\pi(x_{t+s} \mid y_{1:t})$, $s \geq 1$ (usually $s = 1$)
- Filtering(t): $\pi(x_t \mid y_{1:t})$
1) Prediction: starting with the prediction for $s = 1$ (it is easy to generalize for $s > 1$, since in that case we just need to multiply all the conditional distributions of the states between $t$ and $t+s$ and marginalize over all variables except $x_{t+s}$ and $y_{1:t}$):

$$\pi(x_{t+1} \mid y_{1:t}) = \int \pi(x_{t+1}, x_t \mid y_{1:t})\,dx_t = \int \pi(x_{t+1} \mid x_t)\,\pi(x_t \mid y_{1:t})\,dx_t.$$

Notice that we are marginalizing the state variable $x_t$ while we multiply its filtering distribution $\pi(x_t \mid y_{1:t})$ by the state transition distribution $\pi(x_{t+1} \mid x_t)$. The same derivation would be valid for $s > 1$, with the additional step of calculating the $s$-step transition distribution $\pi(x_{t+s} \mid x_t)$.
2) Filtering: $\pi(x_t \mid y_{1:t}) = \pi(x_t \mid y_t, y_{1:t-1}) \propto \pi(x_t, y_t \mid y_{1:t-1})$, since the normalizing constant $\pi(y_t \mid y_{1:t-1})$ does not depend on $x_t$. Focusing on the joint, we have
$$\pi(x_t, y_t \mid y_{1:t-1}) = \pi(y_t \mid x_t, y_{1:t-1})\,\pi(x_t \mid y_{1:t-1}),$$
implying that
$$\pi(x_t \mid y_{1:t}) \propto \pi(y_t \mid x_t, y_{1:t-1})\,\pi(x_t \mid y_{1:t-1}).$$

Observation 1: $\pi(y_t \mid x_t, y_{1:t-1}) = \pi(y_t \mid x_t)$, since given $x_t$ the past observations add no information about $y_t$.

Observation 2: $\pi(x_t \mid y_{1:t-1})$ is the prediction of the previous step (Prediction(t-1)).

Now we are able to write a final recurrence equation for the filtering (using the prediction distribution from the previous time step):

$$\pi(x_t \mid y_{1:t}) \propto \pi(y_t \mid x_t)\,\pi(x_t \mid y_{1:t-1}).$$
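A minimal sketch of this forward pass for a discrete chain (the function name, parameters and toy values are my own, not from the lecture): it alternates the two recurrences, keeping Filtering(t) and Prediction(t) for every $t$.

```python
# Alternating forward recursion for a discrete chain.
# A[i, j] = pi(x_{t+1}=j | x_t=i), B[i, m] = pi(y_t=m | x_t=i),
# pi0[i] = pi(x_1=i); all values below are illustrative assumptions.
import numpy as np

def forward_pass(y, pi0, A, B):
    """Return filt[t] = pi(x_t | y_{1:t}) and pred[t] = pi(x_{t+1} | y_{1:t})."""
    T, K = len(y), len(pi0)
    filt = np.zeros((T, K))
    pred = np.zeros((T, K))
    prior = pi0                      # pi(x_1), playing the role of Prediction(0)
    for t in range(T):
        f = B[:, y[t]] * prior       # Filtering(t): pi(y_t | x_t) pi(x_t | y_{1:t-1})
        filt[t] = f / f.sum()        # normalize
        pred[t] = filt[t] @ A        # Prediction(t): sum_x pi(x_{t+1} | x) pi(x | y_{1:t})
        prior = pred[t]
    return filt, pred

pi0 = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
y = np.array([0, 0, 1, 1, 0])
filt, pred = forward_pass(y, pi0, A, B)

# s-step-ahead prediction pi(x_{T+s} | y_{1:T}) via the s-step transition matrix.
s = 3
pred_s = filt[-1] @ np.linalg.matrix_power(A, s)
```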
If we maintain a table with the relevant information of the distributions Prediction(t) and Filtering(t) ($\pi(x_{t+1} \mid y_{1:t})$ and $\pi(x_t \mid y_{1:t})$), we will be able to calculate the predictive and filtering distributions in time $O(TK^2)$ (for a chain of $T$ variables taking $K$ discrete values). Note that if we use $s > 1$, the complexity becomes $O(TsK^2)$, since each prediction now needs $s$ propagation steps. Needless to say, the integrals can be replaced by sums when the distributions are discrete; we are using integrals since we are considering the generic case. Also, we described the recurrence in an alternating form, estimating the prediction and the filtering at each step, but these same computations could be done independently; we would just be wasting effort, since they can be done at once. Just for the sake of completeness, the independent recurrence equations are:
1) $\pi(x_{t+1} \mid y_{1:t}) \propto \int \pi(x_{t+1} \mid x_t)\,\pi(y_t \mid x_t)\,\pi(x_t \mid y_{1:t-1})\,dx_t$

2) $\pi(x_t \mid y_{1:t}) \propto \pi(y_t \mid x_t) \int \pi(x_t \mid x_{t-1})\,\pi(x_{t-1} \mid y_{1:t-1})\,dx_{t-1}$
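In the discrete case these two independent recurrences become, respectively, a prediction-only and a filtering-only update; here is a short sketch (again with my own notation and the same toy values as above).

```python
# The two independent recursions, 1) prediction-only and 2) filtering-only,
# for a discrete chain (toy values as in the previous sketch).
import numpy as np

pi0 = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])   # A[i, j] = pi(x_{t+1}=j | x_t=i)
B = np.array([[0.9, 0.1], [0.2, 0.8]])   # B[i, m] = pi(y_t=m | x_t=i)
y = np.array([0, 0, 1, 1, 0])
T = len(y)

# 1) Prediction-only: recursion directly on pi(x_{t+1} | y_{1:t}).
pred = [pi0]                              # pi(x_1), acting as Prediction(0)
for t in range(T):
    p = (B[:, y[t]] * pred[-1]) @ A       # ∝ sum_x pi(x_{t+1}|x) pi(y_t|x) pi(x|y_{1:t-1})
    pred.append(p / p.sum())

# 2) Filtering-only: recursion directly on pi(x_t | y_{1:t}).
f = pi0 * B[:, y[0]]
filt = [f / f.sum()]
for t in range(1, T):
    f = B[:, y[t]] * (filt[-1] @ A)       # ∝ pi(y_t|x_t) sum_x pi(x_t|x) pi(x|y_{1:t-1})
    filt.append(f / f.sum())
```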
In the next posts of this series we will develop the recurrence equations for smoothing, sampling and marginal likelihood. We will also see an application of these equations in the Kalman filter setting (Gaussian transitions and Gaussian noise).