From belief propagation to approximate message passing in a simple model
The belief propagation (BP) equations are recursive equations for the one-dimensional marginals of
a given probability distribution. A key property is that the dependency structure of the distribution
admits a graphical representation, the so-called factor graph. If the factor graph is a tree, one can show
that the BP iteration reconstructs the target probability exactly, but in more interesting models
this is only approximately true. On the other hand, the approximate message passing (AMP, also known
as TAP) equations are recursive equations for the means of a target distribution.
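For concreteness, on a factor graph with variable nodes $i$ and factor nodes $a$ (with factors $f_a$), the BP equations can be written in the standard form below; this is the conventional notation for generic factor-graph BP, offered as a sketch rather than the talk's exact conventions:
\[
\hat m_{a\to i}(x_i) \;\propto\; \sum_{\{x_j\}_{j\in\partial a\setminus i}} f_a(x_{\partial a}) \prod_{j\in\partial a\setminus i} m_{j\to a}(x_j),
\qquad
m_{i\to a}(x_i) \;\propto\; \prod_{b\in\partial i\setminus a} \hat m_{b\to i}(x_i),
\]
and the one-dimensional marginal of variable $i$ is then approximated by $\mu_i(x_i) \propto \prod_{b\in\partial i} \hat m_{b\to i}(x_i)$. On a tree these fixed-point equations yield the exact marginals.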
This talk will address the following question: can one derive the AMP equations by computing the
averages with respect to the marginals obtained from BP? In a seminal paper, Donoho, Maleki and Montanari gave a
heuristic argument answering this question for the $l^1$ minimisation problem in compressed sensing,
namely: minimise the $l^1$ norm of an $N$-dimensional vector $x$ subject to $y=Ax$, where $A$ is an $m \times N$ random
matrix and $y$ is a fixed $m$-dimensional vector, with $m < N$. Elaborating on their ideas, Arianna Piana and I
gave a rigorous proof of this derivation in the simpler case where the $l^2$ norm is minimised.
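The AMP iteration for the $l^1$ problem can be sketched as follows. This is a minimal illustration, not the paper's exact scheme: the names `amp_l1` and `soft_threshold` are mine, the sensing matrix is assumed to have i.i.d. Gaussian entries of variance $1/m$, and the adaptive threshold rule (a multiple of the residual's root-mean-square) is one common heuristic choice. The key structural feature is the Onsager reaction term added to the residual, which is what distinguishes AMP from plain iterative soft-thresholding.

```python
import numpy as np

def soft_threshold(v, theta):
    # Component-wise soft thresholding: the proximal operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def amp_l1(A, y, n_iter=50, alpha=1.5):
    """Sketch of an AMP iteration for l1 minimisation with y = A x.
    The threshold theta_t = alpha * rms(residual) is an illustrative
    tuning rule, not necessarily the one used in the literature."""
    m, N = A.shape
    delta = m / N
    x = np.zeros(N)
    z = y.copy()
    for _ in range(n_iter):
        theta = alpha * np.sqrt(np.mean(z ** 2))  # adaptive threshold
        pseudo = x + A.T @ z                      # effective observation
        x = soft_threshold(pseudo, theta)
        # Onsager reaction term: (1/delta) * z * <eta'(pseudo)>, where
        # eta' is the (a.e.) derivative of the soft-thresholding function.
        onsager = (z / delta) * np.mean(np.abs(pseudo) > theta)
        z = y - A @ x + onsager
    return x

# Toy instance: a 10-sparse signal, Gaussian matrix, noiseless measurements.
rng = np.random.default_rng(0)
m, N = 100, 200
A = rng.standard_normal((m, N)) / np.sqrt(m)
x0 = np.zeros(N)
x0[:10] = 1.0
y = A @ x0
xhat = amp_l1(A, y)
```

In this well-conditioned regime ($m/N = 0.5$, sparsity well below the reconstruction phase transition) the iterates contract quickly towards the sparse solution.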