On convergence of the EM algorithm and the Gibbs sampler
Sahu, Sujit K. and Roberts, Gareth O.
(1999)
On convergence of the EM algorithm and the Gibbs sampler.
Statistics and Computing, 9 (1), 55-64.
(doi:10.1023/A:1008814227332).
Abstract
In this article we investigate the relationship between the EM algorithm and the Gibbs sampler. We show that the approximate rate of convergence of the Gibbs sampler, obtained by Gaussian approximation, is equal to that of the corresponding EM-type algorithm. This helps in implementing either algorithm, since improvement strategies for one can be transported directly to the other. In particular, by running the EM algorithm we know approximately how many iterations are needed for convergence of the Gibbs sampler. We also show that, under certain conditions, the EM algorithm used to find maximum likelihood estimates can be slower to converge than the corresponding Gibbs sampler for Bayesian inference. We illustrate our results in a number of realistic examples, all based on generalized linear mixed models.
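The abstract's central claim — that the EM convergence rate approximates the Gibbs sampler's rate — can be sketched on a toy Gaussian data-augmentation model. The model, the flat prior, and all code below are illustrative assumptions, not one of the paper's generalized linear mixed model examples: latent z_i ~ N(theta, 1) with observed y_i = z_i + noise gives an EM update theta' = (theta + ybar)/2 with linear rate 1/2, and the matching Gibbs chain for theta behaves like an AR(1) process with lag-1 autocorrelation near 1/2.

```python
import random
import statistics

random.seed(1)

# Toy missing-data model (illustrative assumption, not from the paper):
#   z_i ~ N(theta, 1) latent,  y_i = z_i + eps_i,  eps_i ~ N(0, 1).
# Marginally y_i ~ N(theta, 2), so the MLE of theta is mean(y).
n = 200
theta_true = 3.0
y = [random.gauss(theta_true, 1.0) + random.gauss(0.0, 1.0) for _ in range(n)]
ybar = statistics.fmean(y)

# EM: the E-step gives E[z_i | y_i, theta] = (theta + y_i) / 2, so the
# M-step update is theta' = (theta + ybar) / 2 -- linear convergence, rate 1/2.
theta = 0.0
errors = []
for _ in range(20):
    theta = 0.5 * (theta + ybar)
    errors.append(abs(theta - ybar))
em_rate = errors[10] / errors[9]  # ratio of successive errors; exactly 0.5 here

# Gibbs sampler for the Bayesian version with a flat prior on theta:
#   z_i | theta, y_i ~ N((theta + y_i)/2, 1/2),  theta | z ~ N(mean(z), 1/n).
theta = 0.0
chain = []
for _ in range(5000):
    zbar = statistics.fmean(
        random.gauss(0.5 * (theta + yi), 0.5 ** 0.5) for yi in y
    )
    theta = random.gauss(zbar, (1.0 / n) ** 0.5)
    chain.append(theta)

# The lag-1 autocorrelation of the theta chain estimates the Gibbs
# convergence rate; it should sit near the EM rate of 1/2.
burn = chain[500:]
m = statistics.fmean(burn)
num = sum((a - m) * (b - m) for a, b in zip(burn, burn[1:]))
den = sum((a - m) ** 2 for a in burn)
acf1 = num / den

print(f"EM rate            ~ {em_rate:.3f}")  # theory: 0.5
print(f"Gibbs lag-1 autocorr ~ {acf1:.3f}")   # theory: near 0.5
```

In this toy case both quantities can be computed analytically, which is the spirit of the paper's Gaussian-approximation argument; in realistic models the EM iterates are cheap to obtain and serve as a proxy for the Gibbs rate.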
More information
Published date: 1999
Keywords:
gaussian distribution, generalized linear mixed models, markov chain monte carlo, parameterization, rate of convergence
Organisations:
Statistics
Identifiers
Local EPrints ID: 30031
URI: http://eprints.soton.ac.uk/id/eprint/30031
ISSN: 0960-3174
PURE UUID: 372ca272-8c95-440c-b723-8c79ef5b5ee4
Catalogue record
Date deposited: 11 May 2007
Last modified: 15 Mar 2024 07:36
Contributors
Author:
Sujit K. Sahu
Author:
Gareth O. Roberts