In linear mixed models and structural equation models, latent variables (or random effects) are often predicted using the Empirical Bayes Method (EBM), i.e., by calculating the expected value of the latent variable given the observed data. Alternative methods include the so-called maximum likelihood (ML) predictor (Bartlett method), in which the likelihood function is maximized with respect to the latent variables after fixing unknown parameters at their estimated values. It is briefly explained why these methods have different properties when used in subsequent regression analysis. EBM leads to consistent estimates of regression coefficients when predictions are used as the dependent variable, but is biased when predictions are used as a covariate. The opposite is true for ML. It is shown how the ML method can be generalized to latent class models and that the resulting method still leads to consistent estimation of regression coefficients when predictions are used as a dependent variable. Finally, it is illustrated that the step-wise approach of first estimating latent variables and then doing regression analysis is more robust to model misspecification than maximum likelihood estimation in a full structural equation model.
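To fix ideas, the two predictors can be written explicitly for a standard linear factor model, a sketch under assumptions not stated in the abstract: a zero-mean latent variable $\eta$ with loading matrix $\Lambda$, latent covariance $\Psi$, and residual covariance $\Theta$, so that $y = \Lambda\eta + \varepsilon$. The EBM prediction is the posterior mean of $\eta$ given $y$ (the regression factor score), while the ML (Bartlett) prediction maximizes the conditional likelihood of $y$ given $\eta$ with the parameters held fixed:
\[
\hat{\eta}_{EB} = E(\eta \mid y) = \Psi \Lambda^{\top} (\Lambda \Psi \Lambda^{\top} + \Theta)^{-1} y,
\qquad
\hat{\eta}_{ML} = \arg\max_{\eta}\, \log f(y \mid \eta) = (\Lambda^{\top} \Theta^{-1} \Lambda)^{-1} \Lambda^{\top} \Theta^{-1} y .
\]
The ML predictor is conditionally unbiased given $\eta$, whereas the EBM predictor is shrunk toward zero; this difference underlies the contrasting behaviour of the two methods in subsequent regression analyses described above.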
Keywords: Latent variable model; Prediction; Latent class model; Step-wise analysis
Biography: Esben Budtz-Jørgensen is an associate professor in the Department of Biostatistics at the University of Copenhagen. His main research area is latent variable models.