Maybe you’ve read every single article on Medium about avoiding procrastination, or you’re worried that those cute dog gifs are using up too much CPU power. Forsaking both, I’ve written a brief guide to implementing Gibbs sampling for Bayesian linear regression in Python. If you find any mistakes or if anything is unclear, please get in touch: kieranc [at]

Here we are interested in Gibbs sampling for normal linear regression with one independent variable.

However, we might also consider approximate decoding/inference/sampling methods where the conditional UGM is more complicated, but still simple enough that we can do exact calculations. We will refer to methods that use this simple but powerful idea as block approximate decoding/inference/sampling methods.

Consider the problem of sampling from $p(\mathbf{x}, \mathbf{y})$ using the Metropolis or Metropolis-Hastings (MH) algorithm. I can either propose samples for $p(\mathbf{x}, \mathbf{y})$ directly, or I could do a blocked version of that and alternately propose samples for $p(\mathbf{x} \mid \mathbf{y})$ and $p(\mathbf{y} \mid \mathbf{x})$, which I believe is also called Metropolis-within-Gibbs.
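To make the blocked-MH idea concrete, here is a minimal sketch of such a sampler (often called Metropolis-within-Gibbs) for a target I've chosen purely for illustration: a standard bivariate normal with correlation `rho`. Each coordinate is updated in turn with a random-walk proposal, accepted against its conditional density given the other coordinate; the target, step size, and function name are all assumptions, not anything fixed by the text above.

```python
import numpy as np

def mwg_bivariate_normal(iters=20000, rho=0.8, step=1.0, seed=0):
    """Metropolis-within-Gibbs sketch for an assumed target: a standard
    bivariate normal with correlation rho. Each coordinate is updated in
    turn with a random-walk proposal, conditioning on the other."""
    rng = np.random.default_rng(seed)

    def cond_logpdf(a, b):
        # log p(x_i = a | x_j = b) up to a constant: N(rho * b, 1 - rho^2)
        return -0.5 * (a - rho * b) ** 2 / (1.0 - rho ** 2)

    x = np.zeros(2)
    samples = np.empty((iters, 2))
    for t in range(iters):
        for i in (0, 1):
            j = 1 - i
            prop = x[i] + step * rng.normal()
            # the proposal is symmetric, so the Metropolis ratio reduces
            # to the ratio of conditional densities
            if np.log(rng.uniform()) < cond_logpdf(prop, x[j]) - cond_logpdf(x[i], x[j]):
                x[i] = prop
        samples[t] = x
    return samples
```

Because the joint density ratio with the other coordinate held fixed equals the conditional density ratio, each coordinate-wise accept/reject step leaves the joint target invariant.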

We wish to find the posterior distributions of the coefficients \(\beta_0\) (the intercept), \(\beta_1\) (the gradient) and of the precision \(\tau\), which is the reciprocal of the variance.
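As a sketch of what such a sampler might look like, here is a minimal Gibbs sampler using the standard conjugate conditional updates. The priors \(\beta_0 \sim N(\mu_0, 1/\tau_0)\), \(\beta_1 \sim N(\mu_1, 1/\tau_1)\), \(\tau \sim \mathrm{Gamma}(a, b)\) and all hyperparameter defaults are assumptions on my part, since this excerpt does not state them:

```python
import numpy as np

def gibbs_linreg(x, y, iters=2000, mu0=0.0, tau0=1.0, mu1=0.0, tau1=1.0,
                 a=2.0, b=1.0, seed=0):
    """Gibbs sampler for y_i = beta0 + beta1 * x_i + eps_i, eps_i ~ N(0, 1/tau).
    Assumed conjugate priors: beta0 ~ N(mu0, 1/tau0), beta1 ~ N(mu1, 1/tau1),
    tau ~ Gamma(a, b) (shape/rate)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    beta0, beta1, tau = mu0, mu1, 1.0  # arbitrary initial state
    trace = np.empty((iters, 3))
    for t in range(iters):
        # beta0 | beta1, tau, data: normal-normal conjugacy
        prec = tau0 + tau * n
        mean = (tau0 * mu0 + tau * np.sum(y - beta1 * x)) / prec
        beta0 = rng.normal(mean, 1.0 / np.sqrt(prec))
        # beta1 | beta0, tau, data: normal-normal conjugacy
        prec = tau1 + tau * np.sum(x * x)
        mean = (tau1 * mu1 + tau * np.sum(x * (y - beta0))) / prec
        beta1 = rng.normal(mean, 1.0 / np.sqrt(prec))
        # tau | beta0, beta1, data: gamma with rate b + SSR/2
        # (NumPy's gamma takes shape and *scale*, hence the reciprocal)
        resid = y - beta0 - beta1 * x
        tau = rng.gamma(a + n / 2.0, 1.0 / (b + np.sum(resid ** 2) / 2.0))
        trace[t] = beta0, beta1, tau
    return trace
```

Discarding an initial burn-in from the returned trace and averaging the remainder gives posterior-mean estimates of \(\beta_0\), \(\beta_1\) and \(\tau\).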

The approximate inference methods from the previous demo correspond to the special case where each variable forms its own block.

By conditioning on all variables outside the block, it is straightforward to do exact calculations within the block.
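As an illustration of exact calculation within a block, here is a sketch that resamples a block of \(k\) adjacent nodes in an Ising chain by enumerating all \(2^k\) block configurations, conditioning on the spins outside the block. The chain model (energy \(-J \sum_i s_i s_{i+1}\), free boundaries) is an assumed example, chosen because the block conditional can be computed exactly by enumeration:

```python
import itertools
import numpy as np

def sample_block(spins, start, k, J, rng):
    """Exactly resample the block spins[start:start+k] of a +-1 Ising chain,
    conditioning on all spins outside the block (assumed example model:
    energy -J * sum_i s_i * s_{i+1}, free boundaries)."""
    configs = list(itertools.product([-1, 1], repeat=k))
    logw = np.empty(len(configs))
    for c, cfg in enumerate(configs):
        trial = spins.copy()
        trial[start:start + k] = cfg
        # unnormalised log-probability of the full configuration; only edges
        # touching the block actually vary, but scoring the whole chain
        # keeps the sketch simple
        logw[c] = J * np.sum(trial[:-1] * trial[1:])
    w = np.exp(logw - logw.max())  # subtract max for numerical stability
    w /= w.sum()
    spins[start:start + k] = configs[rng.choice(len(configs), p=w)]
    return spins
```

Sweeping non-overlapping blocks across the chain leaves the Boltzmann distribution invariant; for a free-boundary chain the long-run nearest-neighbour correlation should approach \(\tanh(J)\), which gives a simple correctness check.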

A key aspect of the approximate decoding/inference/sampling methods that we have discussed up to this point is that they are based on performing local calculations.

In most cases, this involved updating the state of a single node, conditioning on the values of its neighbors in the graph.
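A single-node update of this kind can be sketched as follows, again using an Ising chain as an assumed example model (energy \(-J \sum_i s_i s_{i+1}\)): the conditional of one spin given its neighbours is a two-point distribution that can be written down exactly.

```python
import numpy as np

def gibbs_site_update(spins, i, J, rng):
    """Resample the single spin s_i in {-1, +1}, conditioning on its chain
    neighbours (assumed example model: energy -J * sum_i s_i * s_{i+1}).
    The local conditional is p(s_i = +1 | neighbours) = 1/(1 + exp(-2*J*field)),
    where field is the sum of the neighbouring spins."""
    field = (spins[i - 1] if i > 0 else 0) + (spins[i + 1] if i < len(spins) - 1 else 0)
    p_plus = 1.0 / (1.0 + np.exp(-2.0 * J * field))
    spins[i] = 1 if rng.uniform() < p_plus else -1
    return spins
```

This is the special case where each block is a single variable: the exact calculation within the "block" is just evaluating one logistic function of the neighbouring states.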