

Let $V'=x_{V'}$ be the evidence. The posterior probability of the hidden choice variable given this evidence is obtained by applying Bayes' rule:

\begin{displaymath}
Pr[z=k \vert V'=x_{V'}]\;=\;\frac{\lambda_k T^k_{V'}(x_{V'})}
{\sum_{k'}\lambda_{k'} T^{k'}_{V'}(x_{V'})}.
\end{displaymath}

In particular, when we observe all but the choice variable, i.e., $V'\;\equiv\;V$ and $x_{V'}\equiv x$, we obtain the posterior probability distribution of $z$:
\begin{displaymath}
Pr[z=k\vert V=x]\;\equiv\;Pr[z=k\vert x]\;=\;
\frac{\lambda_k T^k(x)}{\sum_{k'}\lambda_{k'}T^{k'}(x)}.
\end{displaymath} (4)
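As a minimal numerical sketch of Eq. (4): given the mixture weights $\lambda_k$ and the likelihood $T^k(x)$ that each component tree assigns to the observation, the posterior over the choice variable is the normalized product. The weights and likelihood values below are hypothetical, chosen only to illustrate the computation.

```python
import numpy as np

def posterior_over_trees(lam, tree_likelihoods):
    """Posterior Pr[z=k | x] from mixture weights lam[k]
    and per-tree likelihoods tree_likelihoods[k] = T^k(x)."""
    joint = np.asarray(lam) * np.asarray(tree_likelihoods)  # lambda_k * T^k(x)
    return joint / joint.sum()                              # normalize over k

# hypothetical 3-tree mixture
lam = [0.5, 0.3, 0.2]          # mixture weights lambda_k
lik = [0.01, 0.04, 0.02]       # T^k(x) for the observed x
post = posterior_over_trees(lam, lik)
```

Note that the normalizer $\sum_{k'}\lambda_{k'}T^{k'}(x)$ is exactly the mixture likelihood of $x$, so it comes for free when evaluating the model.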

The probability distribution of a given subset $A$ of $V$ given the evidence is

\begin{displaymath}
Q_{A\vert V'}(x_A\vert x_{V'})\;=\;
\sum_{k=1}^m Pr[z=k\vert V'=x_{V'}]\; T^k_{A\vert V'}(x_A \vert x_{V'}).
\end{displaymath}

Thus the result is again a mixture of the results of inference procedures run on the component trees.
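The mixing step above can be sketched as follows: each tree answers the conditional query $T^k_{A\vert V'}(x_A\vert x_{V'})$ on its own, and the answers are averaged with the posterior weights $Pr[z=k\vert V'=x_{V'}]$. The per-tree conditionals here are hypothetical placeholder values; in practice each row would come from running inference on the corresponding component tree.

```python
import numpy as np

def mixture_conditional(posterior, tree_conditionals):
    """Combine per-tree conditionals T^k_{A|V'}(x_A | x_{V'})
    into Q_{A|V'} by posterior-weighted averaging over the m trees."""
    post = np.asarray(posterior)          # Pr[z=k | V'=x_{V'}], shape (m,)
    cond = np.asarray(tree_conditionals)  # shape (m, |domain of x_A|)
    return post @ cond                    # sum_k post_k * cond_k

# hypothetical: 2 trees, binary query variable x_A
post = [0.7, 0.3]
cond = [[0.9, 0.1],   # tree 1's conditional distribution over x_A
        [0.4, 0.6]]   # tree 2's conditional distribution over x_A
q = mixture_conditional(post, cond)      # -> [0.75, 0.25]
```

Because each $T^k_{A\vert V'}$ is itself a distribution and the posterior weights sum to one, the result $Q_{A\vert V'}$ is automatically normalized.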

Journal of Machine Learning Research 2000-10-19