Probabilistic Graphical Models

Probabilistic graphical models unite probability theory and graph theory and make it possible to formalize static and dynamic, as well as linear and nonlinear, systems and processes efficiently. Many well-known statistical models, e.g. mixture models, factor analysis, hidden Markov models, Kalman filters, Bayesian networks, Boltzmann machines, and the Ising model, can be represented within the framework of graphical models. This framework provides techniques for inference (the sum/max-product algorithm) and learning. The flexibility in representing the structure of the phenomenon under consideration makes graphical models applicable in many research areas; a small inference sketch follows below.
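To illustrate the sum-product algorithm mentioned above, here is a minimal sketch of exact message passing on a chain-structured model with binary variables. The potentials and the helper chain_marginals are hypothetical choices for this example, not code from our projects.

```python
import numpy as np

def chain_marginals(unary, pairwise):
    """Exact marginals for a chain x_0 - x_1 - ... - x_{n-1}.

    unary:    list of (K,) arrays, node potentials phi_i(x_i)
    pairwise: list of (K, K) arrays, edge potentials psi_i(x_i, x_{i+1})
    """
    n = len(unary)
    # Forward messages: alpha[i] summarizes evidence from nodes 0..i-1.
    alpha = [np.ones_like(unary[0])]
    for i in range(1, n):
        m = pairwise[i - 1].T @ (alpha[i - 1] * unary[i - 1])
        alpha.append(m / m.sum())            # normalize for numerical stability
    # Backward messages: beta[i] summarizes evidence from nodes i+1..n-1.
    beta = [np.ones_like(unary[0]) for _ in range(n)]
    for i in range(n - 2, -1, -1):
        m = pairwise[i] @ (beta[i + 1] * unary[i + 1])
        beta[i] = m / m.sum()
    # Combine incoming messages with the local potential and renormalize.
    marginals = []
    for i in range(n):
        p = alpha[i] * unary[i] * beta[i]
        marginals.append(p / p.sum())
    return marginals

# Toy example: three binary variables with an attractive pairwise coupling.
unary = [np.array([0.7, 0.3]), np.array([0.5, 0.5]), np.array([0.2, 0.8])]
psi = np.array([[2.0, 1.0], [1.0, 2.0]])
for i, p in enumerate(chain_marginals(unary, [psi, psi])):
    print(f"P(x_{i}) = {p}")
```

Replacing the sums in the message updates with maximizations would turn this into the max-product algorithm for MAP inference.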
There are two basic approaches to learning graphical models: generative and discriminative learning. Generative learning does not always yield good results, while discriminative learning is known to be more accurate for classification. In contrast to purely discriminative models (e.g. neural networks, support vector machines), discriminatively learned generative graphical models (e.g. Bayesian networks) retain a key benefit: they can handle missing variables by marginalizing over the unknown ones, as sketched below. In particular, we have developed methods for generative and discriminative (e.g. max-margin) structure and parameter learning for Bayesian network classifiers.
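The following sketch shows the missing-variable benefit for a naive Bayes classifier, the simplest Bayesian network classifier: under the model, a missing feature is marginalized out simply by dropping its factor, since its conditional distribution sums to one. The parameters and the helper log_posterior are hypothetical illustrations; the max-margin training discussed above is not shown here.

```python
import numpy as np

def log_posterior(x, log_prior, log_cpt):
    """Class log-posterior for a naive Bayes model over binary features.

    x:         feature list, with None marking missing entries
    log_prior: (C,) array, log P(y)
    log_cpt:   (C, D, 2) array, log P(x_d = v | y)
    """
    logp = log_prior.copy()
    for d, v in enumerate(x):
        if v is None:
            continue  # sum_v P(x_d = v | y) = 1, so the factor drops out
        logp += log_cpt[:, d, v]
    logp -= np.logaddexp.reduce(logp)   # normalize over classes
    return logp

# Hypothetical parameters: 2 classes, 3 binary features.
prior = np.log(np.array([0.6, 0.4]))
cpt = np.log(np.array([
    [[0.8, 0.2], [0.3, 0.7], [0.5, 0.5]],   # class 0: P(x_d | y=0)
    [[0.4, 0.6], [0.9, 0.1], [0.1, 0.9]],   # class 1: P(x_d | y=1)
]))
print(np.exp(log_posterior([1, None, 0], prior, cpt)))  # feature x_1 missing
```

A purely discriminative model such as a support vector machine has no comparable built-in mechanism and typically requires imputing the missing values first.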
Furthermore, graphical models have been applied to a variety of speech and image processing tasks.
