Semantic spaces are not created equal—How should we weigh them in the sequel? On composites in automated creativity scoring.

European Journal of Psychological Assessment, Vol 39(6), 2023, 449-461; doi:10.1027/1015-5759/a000723

Semantic distance scoring provides an attractive alternative to other scoring approaches for responses in creative thinking tasks. In addition, evidence in support of semantic distance scoring has increased over the last few years. In one recent approach, it has been proposed to combine multiple semantic spaces to better balance the idiosyncratic influences of each space. Thereby, final semantic distance scores for each response are represented by a composite or factor score. However, semantic spaces are not necessarily equally weighted in mean scores, and the usage of factor scores requires high levels of factor determinacy (i.e., the correlation between estimates and true factor scores). Hence, in this work, we examined the weighting underlying mean scores, mean scores of standardized variables, factor loadings, weights that maximize reliability, and equally effective weights on common verbal creative thinking tasks. Both empirical and simulated factor determinacy, as well as Gilmer-Feldt’s composite reliability, were mostly good to excellent (i.e., > .80) across two task types (Alternate Uses and Creative Word Association), eight samples of data, and all weighting approaches. Person-level validity findings were further highly comparable across weighting approaches. Observed nuances and challenges of different weightings and the question of using composites vs. factor score...
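To illustrate the weighting schemes compared in the abstract, the sketch below contrasts an unweighted mean, a mean of standardized variables, and a generic weighted composite. The data and weights are hypothetical placeholders, not the authors' materials; in the actual study, weights were derived from factor loadings or reliability-maximizing procedures.

```python
import numpy as np

# Hypothetical semantic-distance scores for 5 responses from 3 semantic
# spaces (illustrative values only, not data from the study).
scores = np.array([
    [0.62, 0.55, 0.70],
    [0.48, 0.51, 0.40],
    [0.75, 0.68, 0.81],
    [0.33, 0.29, 0.35],
    [0.58, 0.60, 0.52],
])

# 1) Unweighted mean: spaces with larger variance implicitly dominate,
#    which is the imbalance the abstract cautions about.
mean_composite = scores.mean(axis=1)

# 2) Mean of standardized variables: each space is z-scored first, so
#    all spaces contribute on a common scale.
z = (scores - scores.mean(axis=0)) / scores.std(axis=0, ddof=1)
std_composite = z.mean(axis=1)

# 3) Explicitly weighted composite; in practice the weights could come
#    from factor loadings or reliability-maximizing schemes (the weights
#    here are arbitrary placeholders).
w = np.array([0.5, 0.3, 0.2])
weighted_composite = scores @ w / w.sum()

print(mean_composite.round(3))
print(std_composite.round(3))
print(weighted_composite.round(3))
```

The choice among these composites matters less when the spaces correlate highly, which is consistent with the abstract's finding that person-level validity was highly comparable across weighting approaches.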