Dynamic fit index cutoffs for categorical factor analysis with Likert-type, ordinal, or binary responses.

American Psychologist, Vol 78(9), Dec 2023, 1061-1075; doi:10.1037/amp0001213

Scale validation is vital to psychological research because it ensures that scores from measurement scales represent the intended construct. Fit indices are commonly used to provide quantitative evidence that a proposed factor structure is plausible. However, there is a mismatch between guidelines for evaluating the fit of factor models and the data that most researchers have. Namely, fit guidelines are based on simulations that assume item responses are collected on a continuous scale, whereas most researchers collect discrete responses, such as with a Likert-type scale. In this article, we show that common guidelines derived from assuming continuous responses (e.g., root-mean-square error of approximation below .06, comparative fit index above .95) do not generalize to factor models applied to discrete responses. Specifically, discrete responses provide less information than continuous responses, so less information about misfit is passed to fit indices. Traditional guidelines therefore end up being too lenient and lose their ability to identify that a model may fit poorly. We provide one possible solution by extending the recently developed dynamic fit index framework to accommodate the discrete responses common in psychology. We conduct a simulation study to provide evidence that the proposed method consistently distinguishes between well-fitting and poorly fitting models. Results showed that our proposed cutoffs maintained at le...
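The abstract's key premise, that discretizing responses discards information, can be illustrated with a small simulation. The sketch below (not taken from the paper; the correlation value and Likert thresholds are assumptions for illustration only) draws correlated continuous "latent response" variables, coarsens them onto a 5-point scale, and shows that the observed Pearson correlation is attenuated relative to the continuous one:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
rho = 0.6  # assumed true correlation between the latent continuous responses

# Simulate two correlated continuous response variables
cov = np.array([[1.0, rho], [rho, 1.0]])
x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

# Discretize into a 5-point Likert-type scale (assumed equally spaced thresholds)
thresholds = np.array([-1.5, -0.5, 0.5, 1.5])
x_likert = np.digitize(x, thresholds)  # categories 0..4
y_likert = np.digitize(y, thresholds)

r_continuous = np.corrcoef(x, y)[0, 1]
r_discrete = np.corrcoef(x_likert, y_likert)[0, 1]

print(f"continuous r: {r_continuous:.3f}")  # near the true 0.6
print(f"5-point Likert r: {r_discrete:.3f}")  # attenuated below 0.6
```

The attenuated Likert-scale correlation is one concrete sense in which "less information about misfit is passed to fit indices": the model sees weaker, coarser associations than the latent responses actually carry, which is why continuous-data cutoffs become too lenient.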