A fixed factor analysis procedure is considered in which a data matrix is modeled as the sum of [1] a common factor score matrix post-multiplied by a transposed loading matrix, [2] a unique factor score matrix post-multiplied by a diagonal matrix containing the square roots of the uniquenesses, and [3] a matrix of random errors, with the matrices in [1] and [2] treated as unknown fixed parameter matrices. This procedure can be viewed as an extension of principal component analysis (PCA), in that PCA corresponds to the above model without [2]. I propose an iterative least squares algorithm for jointly obtaining the common and unique factor scores, the loadings, and the uniquenesses in [1] and [2]. This algorithm is efficient in that the factor scores can be computed once after the iterations have converged, without any need to update them at each iterative step. I further show that the least squares solution is also the maximum likelihood solution under the assumption that the errors in [3] are identically distributed according to the standard normal distribution, and I discuss the possibility that factor analysis can be viewed as extended rank approximation of a data matrix, in contrast to PCA as reduced rank approximation.
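To make the decomposition concrete, the following is a minimal sketch of an alternating least squares fit of the model described above: the data matrix is approximated as F A' + U Psi, with F the common factor scores, U the unique factor scores, A the loadings, and Psi the diagonal matrix of square-rooted uniquenesses. The update scheme shown here (an SVD-based score update under the standard identification constraint [F U]'[F U] = n I, followed by a regression update of A and Psi) is one natural implementation under those assumptions, not necessarily the exact algorithm of the paper; all variable names are illustrative.

```python
import numpy as np

def fixed_factor_als(X, m, n_iter=500, tol=1e-8, seed=0):
    """Alternating least squares for the fixed factor model.

    Minimizes ||X - F @ A.T - U @ diag(psi)||_F^2 over
      F   : n x m common factor scores,
      U   : n x p unique factor scores,
      A   : p x m loadings,
      psi : p square-rooted uniquenesses (diagonal of Psi),
    subject to the constraint [F U]'[F U] = n * I (assumed here
    for identification; requires n >= m + p).
    """
    n, p = X.shape
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((p, m))       # arbitrary starting loadings
    psi = np.full(p, 0.5)                 # arbitrary starting uniquenesses
    prev_loss = np.inf
    for _ in range(n_iter):
        # Score update: with B = [A | diag(psi)], the constrained LS
        # minimizer is [F U] = sqrt(n) * K @ M' from the thin SVD X B = K D M'.
        B = np.hstack([A, np.diag(psi)])          # p x (m + p)
        K, _, Mt = np.linalg.svd(X @ B, full_matrices=False)
        S = np.sqrt(n) * K @ Mt                   # satisfies S'S = n I
        F, U = S[:, :m], S[:, m:]
        # Parameter update: unconstrained LS gives [A Psi] = X'[F U] / n;
        # Psi is then restricted to its diagonal part.
        A = X.T @ F / n
        psi = np.diag(X.T @ U) / n
        loss = np.linalg.norm(X - F @ A.T - U @ np.diag(psi)) ** 2
        if prev_loss - loss < tol:                # monotone decrease; stop
            break
        prev_loss = loss
    return F, U, A, psi, loss
```

Because the scores are obtained in closed form from the current A and Psi, they can also be recovered in a single step after the parameter iterations have converged, which is the efficiency property noted in the abstract.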
Keywords: factor analysis; principal component analysis; extended rank approximation; least squares algorithm
Biography: Born in Osaka, Japan, 1958
Graduated from Kyoto University, Japan, 1982
Received Ph.D. from Kyoto University, Japan, 1998
Chief Editor, Behaviormetrika, since 2009