Bayesian Methods in Machine Learning
Many machine learning algorithms seek a single mode of a posterior distribution and base their predictions solely on that mode. Such approaches discard much of the information contained in the posterior and are prone to overfitting when a poor local mode is found. The fully Bayesian approach instead computes its prediction as an expectation over the entire posterior. Evaluating this expectation analytically is intractable in general and is a subject of current research. The task of the thesis is to develop techniques based on the fully Bayesian approach:
- Sampling techniques that generate samples from the posterior distribution. Since each sample typically corresponds to an independent model, their individual predictions can be averaged to obtain more robust results.
- Variational inference techniques to approximate the posterior using simpler distributions.
- Extension of existing state-of-the-art models using Bayesian non-parametric methods.
- Enforcing sparsity using Bayesian techniques such as spike-and-slab priors.
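The posterior-sampling idea in the first bullet can be illustrated with a minimal sketch: a random-walk Metropolis sampler for the mean of a Gaussian with known unit variance, whose samples are then averaged to form the fully Bayesian prediction. The data, prior, and sampler settings here are all illustrative, not part of the thesis description.

```python
import numpy as np

# Toy problem (hypothetical): infer the mean mu of a unit-variance Gaussian
# from a few observations, under a standard-normal prior on mu.
rng = np.random.default_rng(0)
data = np.array([1.8, 2.1, 1.9, 2.3, 2.0])

def log_posterior(mu):
    log_prior = -0.5 * mu**2                      # N(0, 1) prior
    log_lik = -0.5 * np.sum((data - mu) ** 2)     # unit-variance likelihood
    return log_prior + log_lik

# Random-walk Metropolis: propose a perturbed mu, accept with the usual
# Metropolis acceptance probability.
samples = []
mu = 0.0
for _ in range(5000):
    proposal = mu + 0.5 * rng.normal()
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(mu):
        mu = proposal
    samples.append(mu)
samples = np.array(samples[1000:])  # discard burn-in

# Fully Bayesian prediction: average over posterior samples rather than
# committing to a single mode.
posterior_mean = samples.mean()
print(posterior_mean)
```

Each retained sample plays the role of one model; averaging their predictions approximates the posterior expectation that a mode-seeking method ignores.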
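The variational idea in the second bullet can likewise be sketched: approximate an intractable posterior with a simpler distribution, here a single Gaussian q(mu) = N(m, sigma^2) fitted by stochastic gradient ascent on the ELBO via the reparameterization trick. The toy data and step sizes are assumptions made for the example only.

```python
import numpy as np

rng = np.random.default_rng(1)
data = np.array([1.8, 2.1, 1.9, 2.3, 2.0])

def grad_log_joint(mu):
    # Gradient of log p(data, mu) for a N(0, 1) prior and unit-variance
    # Gaussian likelihood: d/dmu [-0.5*mu^2 - 0.5*sum((data - mu)^2)].
    return -mu + np.sum(data - mu)

# Variational parameters of q(mu) = N(m, exp(2 * s)).
m, s = 0.0, 0.0
lr = 0.01
for _ in range(3000):
    eps = rng.normal()
    mu = m + np.exp(s) * eps          # reparameterized sample from q
    g = grad_log_joint(mu)
    m += lr * g                       # stochastic ELBO gradient w.r.t. m
    s += lr * (g * np.exp(s) * eps + 1.0)  # +1 comes from q's entropy term

print(m, np.exp(s))  # approximate posterior mean and standard deviation
```

For this conjugate toy model the optimum is known in closed form, so the fitted m and exp(s) can be checked against the exact Gaussian posterior; in the non-conjugate models the thesis targets, the same procedure applies while the exact posterior is unavailable.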