Signal Processing and Speech Communication Laboratory

Self-Confident Belief Propagation: An Approach for Iterative Improvement of the Bethe Approximation

Status
Finished
Type
Master/Diploma Thesis
Announcement date
01 Oct 2018
Student
Florian Kulmer
Mentors
Research Areas

Abstract

Probabilistic graphical models (PGMs) are often used to represent probability distributions over many random variables. Belief propagation (BP) is a prominent tool for probabilistic inference: on PGMs without loops, BP performs exact inference. In practice, however, PGMs often contain loops, and BP is then no longer exact. Moreover, for loopy PGMs BP is not guaranteed to converge at all, and multiple solutions can exist, which are not necessarily accurate.

In this work we introduce self-confident belief propagation (SBP). SBP addresses these problems of BP by gradually accounting for the pairwise potentials, thereby iteratively improving the Bethe approximation. On Ising models, SBP starts by neglecting the pairwise potentials and then follows a smooth solution path towards the desired solution. The solution of SBP is unique, stable, and accurate; even in cases where BP does not converge, SBP provides a good solution. Additionally, we present an adaptation of SBP with a restricted runtime, which we call SBP early stopping (SBP-ES). We evaluate SBP on different PGMs with Ising potentials and show that it significantly improves the accuracy of BP: SBP is more accurate whenever BP converges, and obtains a unique, stable, and accurate solution whenever BP fails to converge. SBP-ES further improves on SBP. Even compared to Gibbs sampling, SBP and SBP-ES are superior in terms of accuracy and runtime.
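The core idea of gradually accounting for the pairwise potentials can be sketched as follows. This is only an illustrative toy implementation, not the thesis's exact algorithm: it runs sum-product BP on a small Ising model while a scaling parameter `lam` on the couplings is swept from 0 (independent variables, where BP is trivially exact) up to 1 (the full model), warm-starting the messages at each step from the previous solution. All function and variable names here are hypothetical.

```python
import math

def bp_marginals(theta, J, lam, msgs=None, iters=200, tol=1e-9):
    """Sum-product BP on a pairwise Ising model, couplings scaled by lam.

    theta: {node: local field}, J: {(i, j) with i < j: coupling},
    msgs: optional warm-start messages over directed edges.
    """
    nodes = sorted(theta)
    edges = sorted(J)
    nbrs = {i: [] for i in nodes}
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    if msgs is None:  # uniform initial messages
        msgs = {(i, j): {-1: 0.5, +1: 0.5}
                for (a, b) in edges for (i, j) in [(a, b), (b, a)]}
    for _ in range(iters):
        delta, new = 0.0, {}
        for (i, j) in msgs:
            out = {}
            for xj in (-1, +1):
                s = 0.0
                for xi in (-1, +1):
                    coupling = J[(min(i, j), max(i, j))]
                    # pairwise potential raised to lam: lam=0 ignores it entirely
                    val = math.exp(theta[i] * xi + lam * coupling * xi * xj)
                    for k in nbrs[i]:
                        if k != j:
                            val *= msgs[(k, i)][xi]
                    s += val
                out[xj] = s
            z = out[-1] + out[+1]
            new[(i, j)] = {x: out[x] / z for x in out}
            delta = max(delta, abs(new[(i, j)][+1] - msgs[(i, j)][+1]))
        msgs = new
        if delta < tol:
            break
    beliefs = {}
    for i in nodes:
        b = {x: math.exp(theta[i] * x) for x in (-1, +1)}
        for k in nbrs[i]:
            for x in (-1, +1):
                b[x] *= msgs[(k, i)][x]
        z = b[-1] + b[+1]
        beliefs[i] = {x: b[x] / z for x in b}
    return beliefs, msgs

# Toy 3-node loop; sweep the scaling parameter smoothly from 0 to 1,
# reusing the converged messages of each step as the next warm start.
theta = {0: 0.3, 1: -0.2, 2: 0.1}
J = {(0, 1): 0.8, (1, 2): 0.8, (0, 2): 0.8}
msgs = None
for lam in [k / 10 for k in range(11)]:
    beliefs, msgs = bp_marginals(theta, J, lam, msgs)
print({i: round(b[+1], 3) for i, b in beliefs.items()})
```

Warm-starting along the sweep is what makes the path of solutions smooth; plain BP would instead be initialized from scratch on the full loopy model, where it may oscillate or settle on an inaccurate fixed point.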