Markov network

A Markov network, or Markov random field, is a model of the (full) joint probability distribution of a set $$\mathcal{X}$$ of random variables. A Markov network is similar to a Bayesian network in its representation of dependencies. It can represent certain dependencies that a Bayesian network cannot (such as cyclic dependencies); on the other hand, it can't represent certain dependencies that a Bayesian network can (such as induced dependencies).

Formal Definition
Formally, a Markov network consists of:


 * an undirected graph G = (V,E), where each vertex $$v \in V$$ represents a random variable in $$\mathcal{X}$$ and each edge $$\{u,v\} \in E$$ represents a dependency between the random variables $$u$$ and $$v$$,
 * a set of potential functions $$\phi_k$$ (also called factors or clique potentials), where each $$\phi_k$$ has as its domain some (sub)clique $$k$$ in G. Each $$\phi_k$$ is a mapping from possible joint assignments (to the elements of $$k$$) to non-negative real numbers.

Joint Distribution Function
The joint distribution represented by a Markov network is given by: $$ P(X=x) = \frac{1}{Z} \prod_{k} \phi_k (x_{ \{ k \}}) $$ where $$x_{ \{ k \}}$$ is the state of the random variables in the $$k$$th clique, and $$Z$$ is a normalizing constant (also called the partition function), given by $$ Z = \sum_{x \in \mathcal{X}} \prod_{k} \phi_k(x_{ \{ k \} })$$.

In practice, a Markov network is often conveniently expressed as a log-linear model, given by $$ P(X=x) = \frac{1}{Z} \exp(\sum_{k} w_k \phi_k (x_{ \{ k \}})) $$ with normalizing constant $$ Z = \sum_{x \in \mathcal{X}} \exp (\sum_{k} w_k\phi_k(x_{ \{ k \} }))$$. In this context, the $$w_k$$s are weights and the $$\phi_k$$s are feature functions mapping some subset of $$x$$ to the reals. These models are especially convenient for their interpretation: a log-linear model can provide a much more compact representation for many distributions, especially when variables have large domains, and its negative log likelihood is convex in the weights. Unfortunately, though the likelihood of a log-linear Markov network is convex, evaluating the likelihood or its gradient requires inference in the model, which is in general computationally infeasible.
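The product-of-potentials form can be made concrete with a brute-force sketch. The following assumes a small illustrative example (not from any particular library): a pairwise network over three binary variables in a chain A - B - C, whose maximal cliques are its two edges, with potential values chosen arbitrarily to favor agreeing neighbors. It also shows that the same distribution can be written in log-linear form by taking the weights to be log-potentials.

```python
from itertools import product
from math import exp, log

# Illustrative edge potentials for the chain A - B - C; the values are
# assumptions for the example, not canonical. Each potential maps a joint
# assignment of its two endpoints to a non-negative number.
phi_AB = {(0, 0): 3.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 3.0}
phi_BC = {(0, 0): 3.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 3.0}

def unnormalized(a, b, c):
    # Product of clique potentials for one full assignment.
    return phi_AB[(a, b)] * phi_BC[(b, c)]

# Partition function Z: sum the potential product over all joint states.
Z = sum(unnormalized(a, b, c) for a, b, c in product((0, 1), repeat=3))

def p(a, b, c):
    # The joint distribution P(X = x) = (1/Z) * prod_k phi_k(x_{k}).
    return unnormalized(a, b, c) / Z

def p_loglinear(a, b, c):
    # The same distribution in log-linear form: exponentiate a weighted
    # sum, where each weight is the log of the corresponding potential.
    return exp(log(phi_AB[(a, b)]) + log(phi_BC[(b, c)])) / Z
```

Note that Z couples all the potentials: changing any single potential value changes every probability, which is one reason learning and inference in these models are hard.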

Independencies in a Markov Network
The Markov blanket of a node $$ v_i $$ in a Markov network is defined to be every node with an edge to $$ v_i $$, i.e. all $$v_j$$ such that $$\lbrace v_i, v_j \rbrace \in E$$. Every node $$v$$ in a Markov network is conditionally independent of every other node given the Markov blanket of $$v$$.
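Both halves of this property can be checked directly on a small example. The sketch below uses an assumed chain A - B - C with illustrative potentials: it reads each node's Markov blanket off the edge set, then verifies numerically that A is conditionally independent of C given A's blanket {B}.

```python
# Read the Markov blanket of each node off the edge set: the blanket of v
# is every node sharing an edge with v. (The chain and potentials below
# are illustrative assumptions for this example.)
edges = [("A", "B"), ("B", "C")]
blanket = {v: {u for e in edges if v in e for u in e if u != v}
           for v in ("A", "B", "C")}
# blanket["A"] is {"B"}; blanket["B"] is {"A", "C"}.

phi_AB = {(0, 0): 3.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 3.0}
phi_BC = {(0, 0): 2.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 2.0}

def joint(a, b, c):
    # Unnormalized joint: product of the clique potentials.
    return phi_AB[(a, b)] * phi_BC[(b, c)]

def p_a_given_bc(a, b, c):
    # P(A = a | B = b, C = c) by direct normalization over A; the
    # normalizer cancels, so no partition function is needed here.
    return joint(a, b, c) / sum(joint(x, b, c) for x in (0, 1))

# Given B (the blanket of A), changing C leaves the conditional over A
# unchanged -- exactly the Markov blanket independence property.
for b in (0, 1):
    assert abs(p_a_given_bc(1, b, 0) - p_a_given_bc(1, b, 1)) < 1e-12
```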

Inference
As in a Bayesian network, one may calculate the conditional distribution of a set of nodes $$ V' = \{ v_1 ,..., v_i \} $$ given values of another set of nodes $$ W' = \{ w_1 ,..., w_j \} $$ in the Markov network by summing over all possible assignments to the remaining nodes $$u \notin V' \cup W'$$; this is called exact inference. However, exact inference is in general a #P-complete problem, and thus computationally intractable. Approximation techniques such as Markov chain Monte Carlo and loopy belief propagation are more feasible in practice. Some particular subclasses of MRFs, such as trees, have polynomial-time inference algorithms; discovering such subclasses is an active research topic. There are also subclasses of MRFs which permit efficient MAP, or most likely assignment, inference; examples of these include associative networks.
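The summation that defines exact inference can be written out directly for a small network. This sketch assumes an illustrative chain A - B - C with binary variables and arbitrary potentials, and computes P(A | C = 1) by marginalizing out the hidden variable B; the exponential cost of this enumeration in larger networks is exactly why exact inference is intractable in general.

```python
from itertools import product

# Illustrative potentials for the chain A - B - C (assumed values).
phi_AB = {(0, 0): 3.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 3.0}
phi_BC = {(0, 0): 2.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 2.0}

def joint(a, b, c):
    # Unnormalized joint: product of the clique potentials.
    return phi_AB[(a, b)] * phi_BC[(b, c)]

def p_a_given_c(a, c):
    # Exact inference for P(A = a | C = c): sum the unnormalized joint
    # over all assignments to the hidden variable B, then normalize
    # over A. The cost grows exponentially in the number of hidden
    # variables, hence the intractability in general.
    num = sum(joint(a, b, c) for b in (0, 1))
    den = sum(joint(x, b, c) for x, b in product((0, 1), repeat=2))
    return num / den
```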

Conditional Random Fields
One notable variant of a Markov network is a conditional random field, in which each random variable may also be conditioned upon a set of global observations $$o$$. In this model, each function $$\phi_k$$ is a mapping from all assignments to both the clique $$k$$ and the observations $$o$$ to the non-negative real numbers. This form of the Markov network may be more appropriate for producing discriminative classifiers, which do not model the distribution over the observations.
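The key structural difference is that every potential, and therefore the partition function, may depend on the observations. The following is a minimal sketch under assumed, illustrative choices: two binary labels with a smoothness potential between them, and per-label potentials that inspect an observation vector $$o$$.

```python
from itertools import product
from math import exp

# Illustrative CRF potentials (assumed forms, not from any library):
# each single-label potential may inspect the full observation vector o.
def phi1(y1, o):
    return exp(2.0 * y1 * o[0])   # label 1 is attracted to observation o[0]

def phi2(y2, o):
    return exp(2.0 * y2 * o[1])   # label 2 is attracted to observation o[1]

def phi_pair(y1, y2):
    return exp(1.0 if y1 == y2 else 0.0)   # smoothness: favor equal labels

def p(y1, y2, o):
    # The partition function Z(o) depends on the observations, so the
    # model defines P(labels | o) and never models a distribution over o.
    Z = sum(phi1(a, o) * phi2(b, o) * phi_pair(a, b)
            for a, b in product((0, 1), repeat=2))
    return phi1(y1, o) * phi2(y2, o) * phi_pair(y1, y2) / Z
```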