When checking the output of Bayesian MCMC model fitting, we need to ensure that we have enough draws from the posterior distribution to get good summary statistics - mean, median and credible intervals. A common way to do this is to calculate the effective sample size (ESS), denoted n.eff.
Monte Carlo error
Run the same model with the same data and you will get different values for the posterior means of the parameters. The differences should be small, but they are not zero: this is MC error.
MC error arises whenever results are based on random draws from a distribution: each run produces a different set of random draws and hence (slightly) different values for the summary statistics.
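A minimal sketch of this, using plain random draws rather than a real MCMC sampler (the Normal(2, 1) "posterior" and the function name are invented for illustration): two runs that differ only in their random seed give slightly different estimates of the posterior mean.

```python
import numpy as np

def posterior_mean(seed, n_draws=10_000):
    """Stand-in for one MCMC run: draw from a known 'posterior' and
    return the sample mean."""
    rng = np.random.default_rng(seed)
    draws = rng.normal(loc=2.0, scale=1.0, size=n_draws)
    return draws.mean()

m1 = posterior_mean(seed=1)
m2 = posterior_mean(seed=2)
# m1 and m2 are both close to 2.0 but not equal to it or to each other;
# the run-to-run discrepancy is the MC error.
```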
The MC error (more properly, the Monte Carlo standard error) can be estimated from a single run by dividing the values into batches: a chain with 10,000 values can be divided into 100 batches, each with 100 values. Then we calculate the mean of each batch. The posterior mean is the mean of the batch means, and the spread of the batch means indicates the standard error of the estimate. See Lunn et al (2013) The BUGS Book, p 77, for details. This is implemented in WinBUGS and in the wiqid package.
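The batch-means calculation just described can be sketched in a few lines; this is an illustrative Python version, not the WinBUGS or wiqid implementation, and the function name is my own.

```python
import numpy as np

def mcse_batch_means(chain, n_batches=100):
    """Monte Carlo standard error of the posterior mean via batch means.

    Split the chain into consecutive batches, take the mean of each batch,
    and use the spread of those means: the standard error of the overall
    mean is sd(batch means) / sqrt(number of batches).
    """
    chain = np.asarray(chain)
    batch_size = len(chain) // n_batches
    batch_means = (chain[: n_batches * batch_size]
                   .reshape(n_batches, batch_size)
                   .mean(axis=1))
    return batch_means.std(ddof=1) / np.sqrt(n_batches)

# For 10,000 independent draws with SD 1, the MC error should be close to
# SD / sqrt(10,000) = 0.01.
rng = np.random.default_rng(42)
chain = rng.normal(size=10_000)
mcse = mcse_batch_means(chain)
```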
How small should the MC error be?
The size of the MC error is best compared with the uncertainty in the parameter value we are concerned with. This is summarised in the posterior standard deviation (SD).
A common recommendation (eg, Lunn et al, 2013) is that the MC error should be less than 5% of the SD.
This is adequate for the posterior mean, but to get good estimates of the tails of the posterior distribution (needed for the limits of the credible interval), values less than 1.5% of the SD are recommended.
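These rules of thumb are easy to check in code. A hedged sketch, reusing the batch-means estimate of the MC error (the function names are invented for illustration): compute the MC error as a fraction of the posterior SD and compare it with the 5% and 1.5% thresholds.

```python
import numpy as np

def mc_error_fraction(chain, n_batches=100):
    """MC standard error (batch means) as a fraction of the posterior SD."""
    chain = np.asarray(chain)
    batch_size = len(chain) // n_batches
    bm = (chain[: n_batches * batch_size]
          .reshape(n_batches, batch_size)
          .mean(axis=1))
    mcse = bm.std(ddof=1) / np.sqrt(n_batches)
    return mcse / chain.std(ddof=1)

rng = np.random.default_rng(0)
chain = rng.normal(size=10_000)   # 10,000 roughly independent draws
frac = mc_error_fraction(chain)   # about 0.01, i.e. 1% of the SD
ok_for_mean = frac < 0.05         # 5% rule: fine for the posterior mean
ok_for_tails = frac < 0.015       # 1.5% rule: fine for credible intervals
```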
MC error gives the information we need
We need to know how precise our summary statistics are. The number of MCMC draws, whether autocorrelated or not, is not the point.
The MCMC functions in the wiqid package will move over to listing MC error as a percentage of the posterior SD instead of n.eff.
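The two summaries are related, which makes the switch easy to interpret. As a back-of-envelope sketch (under the approximation that the effective draws behave as independent), MCSE is roughly SD / sqrt(n.eff), so the MC error expressed as a fraction of the SD is about 1 / sqrt(n.eff), and the thresholds above map to effective sample sizes:

```python
# MCSE / SD is approximately 1 / sqrt(n.eff), so a threshold of t
# corresponds to an effective sample size of (1 / t)^2.
n_eff_for_5pct = (1 / 0.05) ** 2      # about 400 effective draws
n_eff_for_1_5pct = (1 / 0.015) ** 2   # about 4,444 effective draws
```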
Updated 19 February 2020 by Mike Meredith