On convergence of the EM algorithm and the Gibbs sampler


Sahu, Sujit K. and Roberts, Gareth O. (1999) On convergence of the EM algorithm and the Gibbs sampler. Statistics and Computing, 9(1), 55-64. (doi:10.1023/A:1008814227332).

Full text not available from this repository.

Original Publication URL: http://dx.doi.org/10.1023/A:1008814227332

Description/Abstract

In this article we investigate the relationship between the EM algorithm and the Gibbs sampler. We show that the approximate rate of convergence of the Gibbs sampler, obtained by Gaussian approximation, is equal to that of the corresponding EM-type algorithm. This aids the implementation of either algorithm, since improvement strategies for one can be transported directly to the other. In particular, by running the EM algorithm we can estimate approximately how many iterations the Gibbs sampler needs to converge. We also show that, under certain conditions, the EM algorithm used for finding maximum likelihood estimates can be slower to converge than the corresponding Gibbs sampler for Bayesian inference. We illustrate our results with a number of realistic examples, all based on generalized linear mixed models.
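The abstract's central claim, that the Gibbs sampler and the matching EM-type algorithm share the same approximate (geometric) rate of convergence, can be seen in a standard toy setting not taken from the paper itself: a Gibbs sampler for a bivariate normal with correlation rho. The x-subchain then follows an AR(1) recursion x_{t+1} = rho^2 * x_t + noise, so its geometric rate is rho^2, which is also the classical EM rate (the fraction of missing information) when y is treated as the missing data. A minimal sketch, assuming NumPy:

```python
import numpy as np

# Toy illustration (not from the paper): Gibbs sampler for a standard
# bivariate normal with correlation rho. Conditionals are
#   x | y ~ N(rho*y, 1 - rho**2),   y | x ~ N(rho*x, 1 - rho**2).
# One full sweep gives x_{t+1} = rho**2 * x_t + noise, so the chain's
# geometric rate of convergence is rho**2 -- the same quantity that
# governs the corresponding EM-type iteration.
rng = np.random.default_rng(42)
rho = 0.9
s = np.sqrt(1.0 - rho**2)  # conditional standard deviation

n = 200_000
xs = np.empty(n)
x, y = 0.0, 0.0
for t in range(n):
    x = rho * y + s * rng.standard_normal()  # draw x | y
    y = rho * x + s * rng.standard_normal()  # draw y | x
    xs[t] = x

# The lag-1 autocorrelation of the x-chain estimates the geometric rate.
ac1 = np.corrcoef(xs[:-1], xs[1:])[0, 1]
print(f"lag-1 autocorrelation = {ac1:.3f}, theory rho**2 = {rho**2:.3f}")
```

With rho = 0.9 the empirical lag-1 autocorrelation settles near 0.81, matching rho**2; in the paper's spirit, one EM run on the analogous missing-data problem would reveal this rate, and hence the Gibbs sampler's burn-in length, in advance.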

Item Type: Article
ISSNs: 0960-3174 (print)
Related URLs:
Keywords: Gaussian distribution, generalized linear mixed models, Markov chain Monte Carlo, parameterization, rate of convergence
Subjects: Q Science > QA Mathematics
H Social Sciences > HA Statistics
Divisions: University Structure - Pre August 2011 > School of Mathematics > Statistics
ePrint ID: 30031
Date Deposited: 11 May 2007
Last Modified: 27 Mar 2014 18:18
URI: http://eprints.soton.ac.uk/id/eprint/30031
