
## Definition and Basic Properties

We now describe dependency networks more formally. To do so, we begin with some notation. We denote a variable by a capitalized token (e.g., $X_i$, Age), and the state or value of a corresponding variable by that same token in lower case (e.g., $x_i$, age). We denote a set of variables by a bold-face capitalized token (e.g., $\mathbf{X}$). We use a corresponding bold-face lower-case token (e.g., $\mathbf{x}$) to denote an assignment of state or value to each variable in a given set. We use $p(x|y)$ to denote the probability that $X = x$ given $Y = y$. We also use $p(x|y)$ to denote the probability distribution for $X$ given $Y = y$. Whether $p(x|y)$ refers to a probability or a probability distribution will be clear from context. In this paper, we shall limit our discussion to domains where all variables are discrete and finite valued and where the joint distribution is positive--that is, where every assignment of the domain variables has non-zero probability. Although much of what we develop can be extended to more general circumstances, the extensions are tedious and we omit them.

Given a domain of interest having a set of finite variables $\mathbf{X} = (X_1, \ldots, X_n)$ with a positive joint distribution $p(\mathbf{x})$, a consistent dependency network for $\mathbf{X}$ is a pair $(\mathcal{G}, \mathcal{P})$, where $\mathcal{G}$ is a (cyclic) directed graph and $\mathcal{P}$ is a set of conditional probability distributions. Each node in $\mathcal{G}$ corresponds to a variable in $\mathbf{X}$. We use $X_i$ to refer to both the variable and its corresponding node. The parents of node $X_i$, denoted $\mathbf{Pa}_i$, correspond to those variables $\mathbf{Pa}_i \subseteq (X_1, \ldots, X_{i-1}, X_{i+1}, \ldots, X_n)$ that satisfy
$$p(x_i \mid \mathbf{pa}_i) = p(x_i \mid x_1, \ldots, x_{i-1}, x_{i+1}, \ldots, x_n) \qquad (1)$$
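As a concrete illustration of Equation 1 (not from the paper), consider a small positive joint distribution over three binary variables constructed as a chain, so that $X_3$ is independent of $X_1$ given $X_2$. A short numpy sketch can verify that conditioning $X_3$ on its parent alone matches conditioning on all other variables; all variable names and numeric values here are hypothetical.

```python
import numpy as np

# Hypothetical positive joint over binary X1, X2, X3, built as a chain
# p(x1, x2, x3) = p(x1) p(x2|x1) p(x3|x2), so X3 _|_ X1 | X2.
p1 = np.array([0.6, 0.4])                     # p(x1)
p2g1 = np.array([[0.7, 0.3], [0.2, 0.8]])     # p(x2|x1), rows index x1
p3g2 = np.array([[0.9, 0.1], [0.4, 0.6]])     # p(x3|x2), rows index x2
joint = p1[:, None, None] * p2g1[:, :, None] * p3g2[None, :, :]

def cond(joint, i, given):
    """p(X_i | X_given) as a table, obtained by marginalizing the joint."""
    keep = sorted({i} | set(given))
    drop = tuple(a for a in range(joint.ndim) if a not in keep)
    marg = joint.sum(axis=drop, keepdims=True)
    return marg / marg.sum(axis=i, keepdims=True)

# Equation 1 for X3: conditioning on the parent X2 alone equals
# conditioning on all other variables, so Pa_3 = {X2} here.
full = joint / joint.sum(axis=2, keepdims=True)   # p(x3 | x1, x2)
parent_only = cond(joint, 2, [1])                 # p(x3 | x2)
assert np.allclose(full, parent_only)
```

The parent set $\mathbf{Pa}_i$ is thus the smallest set of variables that renders $X_i$ independent of the rest, exactly as in Equation 1.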

The distributions in $\mathcal{P}$ are the local probability distributions $p(x_i \mid \mathbf{pa}_i)$, $i = 1, \ldots, n$. The dependency network is consistent in the sense that each local distribution can be obtained (via inference) from the joint distribution $p(\mathbf{x})$. In the next section, we relax this condition.

The independencies in a dependency network are precisely those of a Markov network with the same adjacencies. A Markov network for $\mathbf{X}$, also known as an undirected graphical model or Markov random field for $\mathbf{X}$, is a pair $(\mathcal{U}, \Phi)$ where $\mathcal{U}$ is an undirected graph and $\Phi$ is a set of potential functions, one for each of the maximal cliques in $\mathcal{U}$, such that the joint distribution $p(\mathbf{x})$ has the form
$$p(\mathbf{x}) = \frac{1}{Z} \prod_{c=1}^{C} \phi_c(\mathbf{x}^c) \qquad (2)$$

where $C$ is the number of maximal cliques, $\mathbf{x}^c$ are the variables in clique $c$, and $Z$ is a normalization constant (e.g., see Lauritzen, 1996). The following theorem shows that consistent dependency networks and Markov networks have the same representational power.
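To make Equation 2 concrete, the following sketch (an illustration with made-up potential values, not an example from the paper) builds a Markov network on the chain $X_1 - X_2 - X_3$, whose maximal cliques are $\{X_1, X_2\}$ and $\{X_2, X_3\}$, and checks the graph's Markov property: $p(x_1 \mid x_2, x_3)$ depends only on $x_2$, which is exactly the local distribution a consistent dependency network with the same adjacencies would store.

```python
import numpy as np

# Hypothetical binary Markov network on the chain X1 - X2 - X3.
# Potentials are arbitrary positive tables (values for illustration only).
phi_12 = np.array([[2.0, 1.0], [1.0, 3.0]])   # phi_1(x1, x2)
phi_23 = np.array([[1.5, 0.5], [1.0, 2.0]])   # phi_2(x2, x3)

# Equation 2: the joint is the normalized product of clique potentials.
unnorm = phi_12[:, :, None] * phi_23[None, :, :]
Z = unnorm.sum()                               # normalization constant
joint = unnorm / Z

# Markov property of the chain: X1 is independent of X3 given X2,
# so p(x1 | x2, x3) equals p(x1 | x2).
p_x1_given_rest = joint / joint.sum(axis=0, keepdims=True)  # p(x1 | x2, x3)
p_x1_given_x2 = joint.sum(axis=2) / joint.sum(axis=(0, 2))  # p(x1 | x2)
assert np.allclose(p_x1_given_rest, p_x1_given_x2[:, :, None])
```

Note that the potentials have no direct probabilistic reading; only after normalization by $Z$ do conditional distributions emerge, which is the interpretability contrast with dependency networks discussed below.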

Theorem 1: The set of positive distributions that can be encoded by a consistent dependency network with graph $\mathcal{G}$ is equal to the set of positive distributions that can be encoded by a Markov network whose structure has the same adjacencies as $\mathcal{G}$.

The two graphical representations differ in that Markov networks quantify dependencies with potential functions, whereas dependency networks use conditional probabilities. We have found the latter to be significantly easier to interpret. The proof of Theorem 1 appears in the Appendix; it is essentially a restatement of the Hammersley-Clifford theorem (e.g., Besag, 1974).

This correspondence is no coincidence. As discussed in Besag (1974), several researchers who developed the Markov-network representation did so by initially investigating a graphical representation that fits our definition of consistent dependency network. In particular, several researchers, including Lévy (1948), Bartlett (1955, Section 2.2), and Brook (1964), considered lattice systems in which each variable $X_i$ depended only on its nearest neighbors $\mathbf{N}_i$, and quantified the dependencies within these systems using the conditional probability distributions $p(x_i \mid \mathbf{n}_i)$. They then showed, to various levels of generality, that the only joint distributions mutually consistent with each of the conditional distributions also satisfy Equation 2. Hammersley and Clifford, in a never-published manuscript, and Besag (1974) considered the more general case where each variable could have an arbitrary set of parents. They showed that, provided the joint distribution for $\mathbf{X}$ is positive, any graphical model specifying the independencies in Equation 1 must also satisfy Equation 2. One interesting point is that these researchers argued for the use of conditional distributions to quantify the dependencies; they considered the resulting potential form in Equation 2 to be a mathematical necessity rather than a natural expression of dependency. As we have just discussed, we share this view.

The equivalence of consistent dependency networks and Markov networks suggests a straightforward approach for learning a consistent dependency network from exchangeable (i.i.d.) data.
Namely, one learns the structure and potentials of a Markov network (e.g., Whittaker, 1990), and then computes (via probabilistic inference) the conditional distributions required by the dependency network. Alternatively, one can learn a related model such as a Bayesian network, decomposable model, or hierarchical log-linear model (see, e.g., Lauritzen, 1996) and convert it to a consistent dependency network. Unfortunately, the conversion process can be computationally expensive in many situations. In the next section, we extend the definition of dependency network to include inconsistent dependency networks and provide algorithms for learning such networks that are more computationally efficient than those just described. In the remainder of this section, we apply well-known results about probabilistic inference to consistent dependency networks. This discussion will be useful for our further development of (general) dependency networks.
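The conversion step just described can be sketched for a toy domain: given the joint distribution of a learned model (here simply a positive probability table over three binary variables, standing in for a learned Markov or Bayesian network), each local distribution $p(x_i \mid \mathbf{x} \setminus x_i)$ of the consistent dependency network is computed by inference. The setup is hypothetical; in realistic domains the joint cannot be tabulated, which is why this conversion can be expensive.

```python
import numpy as np

# Stand-in for a learned model: an arbitrary positive joint distribution
# over three binary variables, stored as an exhaustive table.
rng = np.random.default_rng(0)
joint = rng.random((2, 2, 2)) + 0.1   # positive weights
joint /= joint.sum()                  # normalize to a joint distribution

def local_distribution(joint, i):
    """p(x_i | all other variables): the local distribution for node X_i."""
    return joint / joint.sum(axis=i, keepdims=True)

# One local distribution per node gives the set P of the dependency network.
locals_ = [local_distribution(joint, i) for i in range(joint.ndim)]
for i, tab in enumerate(locals_):
    assert np.allclose(tab.sum(axis=i), 1.0)   # proper conditional
```

The table-based inference here is exponential in the number of variables, which makes concrete why the next section's direct learning algorithms are attractive.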

Journal of Machine Learning Research, 2000-10-19