Signal Processing and Speech Communication Laboratory

PhD defense Thomas Buchgraber

Start date/time
Mon Apr 22 09:30:00 2013
End date/time
Mon Apr 22 09:30:00 2013
Location
IDEG134, Inffeldgasse 16c, EG
Contact

Chairman: Univ.-Prof. Dr. L. FICKERT
Examiner: Univ.-Prof. Dr. G. KUBIN
Examiner: Assoc. Prof. J. LARSEN, PhD (Technical University of Denmark)

Abstract:
Variational Sparse Bayesian Learning: Centralized and Distributed Processing

In this thesis we investigate centralized and distributed variants of sparse Bayesian learning (SBL), an effective probabilistic regression method used in machine learning. Since inference in an SBL model is not tractable in closed form, approximations are needed. We focus on the variational Bayesian approximation, as opposed to other approximations used in the literature, for three reasons. First, it is a flexible, general framework for approximate Bayesian inference that estimates full probability densities, with point estimates as a special case. Second, it has guaranteed convergence properties. Third, it is a deterministic approximation that remains applicable in high-dimensional problems where stochastic sampling methods may be prohibitive. We resolve inconsistencies in the literature surrounding other SBL approximation techniques with regard to a proper Bayesian treatment and the incorporation of a highly desirable property, namely scale invariance. More specifically, among all methods discussed, we show that the variational Bayesian approximation is the only approach that can be consistently derived from scale-invariant model assumptions.

A novel fast SBL concept based on a fixed-point analysis of the variational update expressions is presented as a variational counterpart to fast marginal likelihood maximization, a widely known SBL variant. Within this fast framework, which we denote fast variational SBL (FV-SBL), we furthermore show how to incorporate an intuitive trade-off parameter that balances sparsity against accuracy. We compare the performance of all methods on real and synthetic data and show that FV-SBL converges several orders of magnitude faster than ordinary variational SBL.
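For readers unfamiliar with SBL, the following minimal Python/NumPy sketch illustrates the kind of mean-field variational update cycle used for a linear model with automatic relevance determination priors. The function name variational_sbl, the Gamma hyperparameters a, b, c, d, and the fixed iteration count are illustrative assumptions, not the formulation or implementation developed in the thesis.

```python
import numpy as np

def variational_sbl(Phi, y, n_iter=200, a=1e-6, b=1e-6, c=1e-6, d=1e-6):
    """Illustrative variational SBL sketch for y ~ N(Phi @ w, 1/beta).

    q(w) = N(mu, Sigma); q(alpha_i) and q(beta) are Gamma distributions.
    The loop cycles through the standard mean-field updates.
    """
    N, M = Phi.shape
    PhiT_Phi = Phi.T @ Phi
    PhiT_y = Phi.T @ y
    E_alpha = np.ones(M)   # expected weight precisions (one per basis function)
    E_beta = 1.0           # expected noise precision

    for _ in range(n_iter):
        # q(w): Gaussian with precision diag(E_alpha) + E_beta * Phi'Phi
        Sigma = np.linalg.inv(np.diag(E_alpha) + E_beta * PhiT_Phi)
        mu = E_beta * Sigma @ PhiT_y

        # q(alpha_i): Gamma update using E[w_i^2] = mu_i^2 + Sigma_ii
        E_w2 = mu**2 + np.diag(Sigma)
        E_alpha = (a + 0.5) / (b + 0.5 * E_w2)

        # q(beta): Gamma update; expected residual includes the trace term
        resid = y - Phi @ mu
        E_res2 = resid @ resid + np.trace(PhiT_Phi @ Sigma)
        E_beta = (c + 0.5 * N) / (d + 0.5 * E_res2)

    return mu, Sigma, E_alpha, E_beta

# Toy usage: a sparse weight vector recovered from a random design matrix
rng = np.random.default_rng(0)
Phi = rng.standard_normal((100, 20))
w_true = np.zeros(20); w_true[[2, 7]] = [1.5, -2.0]
y = Phi @ w_true + 0.05 * rng.standard_normal(100)
mu, *_ = variational_sbl(Phi, y)
```

In this kind of model, weights whose expected precision E_alpha grows very large are effectively pruned, which is what produces the sparsity discussed above.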
Large-scale sensor networks make it possible to monitor spatial phenomena with unprecedented resolution. Current research focuses on distributed processing performed directly on the sensor nodes with only local communication and no central data fusion, i.e., the data are not collected and processed at a single point whose failure would be critical. In the SBL context, our objective is to learn, in a distributed manner, a compact model of the true underlying spatially sampled field function from noisy sensor measurements. A sparse representation of the data enables efficient handling and processing, which is especially desirable in wireless sensor network applications due to energy and bandwidth constraints.

We analyze two distributed variants of SBL, which we denote variational distributed SBL (V-dSBL) and fast variational distributed SBL (FV-dSBL). Both algorithms are based on consensus methods, which perform distributed averaging through local message passing between neighboring sensors. V-dSBL considers a spatially distributed probabilistic model that combines SBL and consensus propagation; for inference, a variational Bayesian approximation is used with Gaussian loopy belief propagation as a subroutine. The messages of this subroutine are exchanged directly between adjacent sensors and have a communication complexity that is quadratic in the number of model components. In FV-dSBL, the FV-SBL fixed-point solution is calculated collaboratively using any type of consensus algorithm, which results in a communication complexity that is only linear in the number of model components per message. Furthermore, FV-dSBL can test and add new components to the model. The computational complexity at each sensor for a single FV-dSBL test is quadratic in the number of components, whereas in V-dSBL it is cubic for each message in Gaussian loopy belief propagation. In experiments on real and synthetic data, we compare both distributed algorithms with their centralized counterparts and obtain similar performance in most cases.
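Since both distributed variants rest on consensus-based averaging through local message passing, the following toy sketch illustrates a synchronous linear consensus iteration on a small sensor chain. The function consensus_average, the step size, and the example graph are illustrative assumptions, not the specific consensus algorithms used in V-dSBL or FV-dSBL.

```python
import numpy as np

def consensus_average(values, neighbors, step=0.2, n_iter=100):
    """Synchronous linear consensus: every node repeatedly moves its local
    value toward its neighbors' values. On a connected undirected graph,
    with a small enough step size, all values converge to the global mean.

    values    : initial local quantity per sensor
    neighbors : dict mapping node index -> list of adjacent node indices
    """
    x = np.array(values, dtype=float)
    for _ in range(n_iter):
        x_new = x.copy()
        for i, nbrs in neighbors.items():
            # local message passing only: node i sees just its neighbors
            x_new[i] = x[i] + step * sum(x[j] - x[i] for j in nbrs)
        x = x_new
    return x

# Example: four sensors on a chain agreeing on the mean of their readings
readings = [1.0, 3.0, 5.0, 7.0]
graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(consensus_average(readings, graph))   # all entries approach 4.0
```

The appeal of such schemes for sensor networks is that each node exchanges data only with its immediate neighbors, so no central fusion node is required.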