Schuermans, M., Markovsky, I., Wentzell, P. and Van Huffel, S.
(2005)
On the equivalence between Total Least Squares and Maximum Likelihood PCA
Analytica Chimica Acta, 544(1–2).
Abstract
The maximum likelihood PCA (MLPCA) method has been devised in chemometrics as a generalization of the well-known PCA method in order to derive consistent estimators in the presence of errors with known error distribution. For similar reasons, the total least squares (TLS) method has been generalized in the field of computational mathematics and engineering to maintain consistency of the parameter estimates in linear models with measurement errors of known distribution. The basic motivation for TLS is the following. Given a set of multidimensional data points (vectors), how can one obtain a linear model that explains these data? The idea is to modify all data points in such a way that some norm of the modification is minimized, subject to the constraint that the modified vectors satisfy a linear relation. Although the name “total least squares” appeared in the literature only 25 years ago, this method of fitting is certainly not new and has a long history in the statistical literature, where it is known as “orthogonal regression”, “errors-in-variables regression” or “measurement error modeling”.

The purpose of this paper is to explore the tight equivalences between MLPCA and element-wise weighted TLS (EW-TLS). Despite their seemingly different problem formulations, it is shown that both methods can be reduced to the same mathematical kernel problem, i.e. finding the closest (in a certain sense) weighted low-rank matrix approximation, where the weight is derived from the distribution of the errors in the data. Different solution approaches, as used in MLPCA and EW-TLS, are discussed. In particular, we will discuss the weighted low rank approximation (WLRA), the MLPCA, the EW-TLS and the generalized TLS (GTLS) problems. These four approaches tackle an equivalent weighted low-rank approximation problem, but different algorithms are used to come up with the best approximation matrix. We will compare their computation times on chemical data and discuss their convergence behavior.
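To make the kernel problem concrete: in the special unweighted case (i.i.d. errors of equal variance), the closest low-rank matrix in the Frobenius norm has a closed-form solution given by the truncated SVD (the Eckart–Young result); the weighted variants discussed in the paper (WLRA, MLPCA, EW-TLS, GTLS) generalize this and in general require iterative algorithms. A minimal NumPy sketch of the unweighted case (illustrative only, not the authors' code; the data and rank are made up for the example):

```python
import numpy as np

# Synthetic example: a rank-2 "true" data matrix observed with additive noise.
rng = np.random.default_rng(0)
true = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 5))
noisy = true + 0.1 * rng.standard_normal(true.shape)

# Unweighted low-rank approximation: keep the r dominant singular triplets.
# This minimizes ||noisy - approx||_F over all matrices of rank <= r.
U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
r = 2
approx = (U[:, :r] * s[:r]) @ Vt[:r]

# The Frobenius-norm residual equals the norm of the discarded singular values.
residual = np.linalg.norm(noisy - approx)
```

With element-wise weights (as in MLPCA/EW-TLS), no such closed-form truncation exists, which is why the four approaches compared in the paper resort to different iterative schemes.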