A fixed factor analysis procedure is considered in which a data matrix is modeled as the sum of three terms: a common factor score matrix post-multiplied by a transposed loading matrix, a unique factor score matrix post-multiplied by a diagonal matrix containing the square roots of the unique variances, and a matrix of random errors, with the first two terms treated as unknown fixed parameter matrices. This procedure can be viewed as an extension of principal component analysis (PCA), in that PCA corresponds to the above model without the unique factor term. I propose an iterative least squares algorithm for jointly obtaining the common and unique factor scores, the loadings, and the unique variances. This algorithm is efficient in that the factor scores can be obtained after the iterations have converged, without updating them at each iterative step. I further show that the least squares solution is also the maximum likelihood solution under the assumption that the errors are identically distributed according to the standard normal distribution, and I discuss the possibility that factor analysis can be viewed as extended rank approximation of a data matrix, in contrast to PCA as reduced rank approximation.
Keywords: factor analysis; principal component analysis; extended rank approximation; least squares algorithm
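The fixed-factor model described above can be sketched numerically. The following NumPy code is a minimal illustration, not the paper's own algorithm: it fits X ≈ FΛ' + UΨ by alternating least squares, under the common identification assumption (my assumption here) that the combined score matrix Z = [F U] is columnwise orthonormal up to scale, i.e. Z'Z = nI. The function name `mdfa_als` and the update order are illustrative choices.

```python
import numpy as np

def mdfa_als(X, m, n_iter=200, seed=0):
    """Sketch of an alternating least-squares fit of the fixed-factor model
    X ~= F @ L.T + U @ Psi, with F (n x m) common scores, U (n x p) unique
    scores, L (p x m) loadings, and Psi (p x p) diagonal.

    Assumes Z = [F U] satisfies Z.T @ Z = n * I (a standard identification
    constraint); the paper's own algorithm may differ in detail.
    """
    n, p = X.shape
    rng = np.random.default_rng(seed)
    # Random orthonormal start for Z, scaled so that Z.T @ Z = n * I.
    Z, _ = np.linalg.qr(rng.standard_normal((n, m + p)))
    Z *= np.sqrt(n)
    for _ in range(n_iter):
        F, U = Z[:, :m], Z[:, m:]
        # Weight update: least squares given the scores,
        # with Psi restricted to be diagonal.
        L = X.T @ F / n
        Psi = np.diag(np.diag(X.T @ U) / n)
        B = np.hstack([L, Psi])  # combined (p x (m + p)) weight matrix
        # Score update: orthogonal Procrustes step via a thin SVD of X @ B.
        P, _, Qt = np.linalg.svd(X @ B, full_matrices=False)
        Z = np.sqrt(n) * P @ Qt
    F, U = Z[:, :m], Z[:, m:]
    L = X.T @ F / n
    Psi = np.diag(np.diag(X.T @ U) / n)
    return F, U, L, Psi
```

Because the scores Z are recovered in closed form from the weights at each step, they need not be stored between iterations, which loosely mirrors the efficiency property claimed in the abstract.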
Biography: Born in Osaka, Japan, 1958
Graduated from Kyoto University, Japan, 1982
Received Ph.D. from Kyoto University, Japan, 1998
Chief Editor, Behaviormetrika, since 2009