
Running time

Computing the likelihood of a data point under a tree distribution (in the directed tree representation) takes $n-1\;=\;{\cal O}(n)$ multiplications. Hence, the E step requires ${\cal O}(mnN)$ floating point operations. As for the M step, the most computationally expensive phase is the computation of the marginals $P_{uv}^k$ for the $k$th tree; this step has time complexity ${\cal O}(mn^2N)$. An additional ${\cal O}(mn^2r_{MAX}^2)$ and ${\cal O}(mn^2)$ operations are required at each iteration for the mutual information computations and for the $m$ runs of the MWST algorithm, respectively. Finally, we need ${\cal O}(mnr_{MAX}^2)$ operations to compute the tree parameters in the directed representation. The total running time per EM iteration is thus

\begin{displaymath}
{\cal O}(mn^2N+mn^2r_{MAX}^2).
\end{displaymath}
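The ${\cal O}(n)$ cost per data point in the E step can be seen directly from the directed tree representation: the likelihood is a product of one root marginal and $n-1$ conditional probabilities, one per edge. The following sketch (with hypothetical data structures, not the paper's implementation: a parent-pointer array and per-node conditional probability tables) illustrates this.

```python
import numpy as np

def tree_log_likelihood(x, root, parent, cpt, root_prob):
    """Log-likelihood of one discrete data point x under a directed tree.

    x         : length-n integer array of variable values
    root      : index of the root variable
    parent    : parent[v] is the parent of node v (unused for the root)
    cpt       : cpt[v][a, b] = P(x_v = b | x_parent(v) = a) for v != root
    root_prob : root_prob[a] = P(x_root = a)

    The loop performs n-1 conditional-probability lookups (multiplications,
    done here as additions in log space), i.e. O(n) per point per tree.
    """
    ll = np.log(root_prob[x[root]])
    for v in range(len(x)):
        if v == root:
            continue
        ll += np.log(cpt[v][x[parent[v]], x[v]])
    return ll
```

Repeating this for each of the $m$ trees and each of the $N$ data points gives the ${\cal O}(mnN)$ E-step cost stated above.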

The algorithm is polynomial (per iteration) in the dimension of the domain $n$, the number of mixture components $m$, and the size of the data set $N$. The space complexity is also polynomial, and is dominated by the ${\cal O}(n^2 r^2_{MAX})$ space needed to store the pairwise marginal tables $P^k_{uv}$ (the tables can be overwritten by successive values of $k$).
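The dominant space term, and the ${\cal O}(mn^2N)$ time of the marginal computation, can both be made concrete with a small sketch. The names below are hypothetical (the paper does not prescribe an implementation): `gamma_k` stands for the posterior weights of component $k$ produced by the E step, and a single buffer of size $n^2 r_{MAX}^2$ is overwritten for each component in turn.

```python
import numpy as np

def pairwise_marginals(X, gamma_k, P_uv):
    """Weighted pairwise marginals P_uv for one mixture component.

    X       : (N, n) integer data matrix with values in {0, ..., r_max-1}
    gamma_k : (N,) posterior weights of component k from the E step
    P_uv    : (n, n, r_max, r_max) buffer, overwritten in place, so the
              same O(n^2 r_max^2) storage serves all m components.

    The triple loop touches each of the n^2 variable pairs for each of the
    N points, i.e. O(n^2 N) per component and O(m n^2 N) overall.
    """
    N, n = X.shape
    P_uv.fill(0.0)
    w = gamma_k / gamma_k.sum()  # normalize posterior weights
    for i in range(N):
        for u in range(n):
            for v in range(n):
                P_uv[u, v, X[i, u], X[i, v]] += w[i]
    return P_uv
```

From these tables one can then compute the pairwise mutual information values and run the MWST algorithm, which account for the remaining per-iteration terms.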

Journal of Machine Learning Research 2000-10-19