Looking represents choosing in toddlers: Exploring the equivalence between multimodal measures in forced-choice tasks

This study investigated how accurately pointing responses (i.e., left or right) can be predicted from concurrent preferential looking. Using pre-existing videos of toddlers aged 18–23 months engaged in an intermodal word-comprehension task, we developed models that predict manual responses from looking responses. Both a Simple Majority Vote classifier and a Machine Learning-Based classifier achieved substantial prediction accuracy, indicating that looking responses can serve as reasonable alternatives to manual ones. However, a further exploratory analysis revealed that when the models were applied to data from toddlers who did not produce clear pointing responses, agreement between the models' estimates of the missing pointing responses and the human coders dropped slightly. This suggests that looking responses without pointing are qualitatively different from those accompanied by pointing. Bridging the two measurements in forced-choice tasks would help researchers avoid discarding collected data for lack of manual responses, and interpret results from different modalities comprehensively.
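To make the Simple Majority Vote idea concrete, here is a minimal sketch of how such a classifier might map frame-by-frame gaze codes onto a predicted pointing side. The function name, label vocabulary, and tie-handling policy are illustrative assumptions, not the authors' implementation.

```python
from collections import Counter

def majority_vote(gaze_samples):
    """Predict a pointing side ('left' or 'right') from per-frame gaze codes.

    gaze_samples: list of 'left'/'right'/'away' labels, one per video frame
    (a hypothetical coding scheme). Returns the side attracting the most
    looking frames; returns None on a tie or when no on-screen looks exist.
    """
    counts = Counter(s for s in gaze_samples if s in ("left", "right"))
    if not counts or counts["left"] == counts["right"]:
        return None  # no usable prediction
    return counts.most_common(1)[0][0]

# Example: 6 frames left, 3 frames right, 1 frame away -> predict 'left'
frames = ["left"] * 6 + ["right"] * 3 + ["away"]
print(majority_vote(frames))  # -> left
```

A machine-learning-based classifier would replace this single looking-time feature with richer inputs (e.g., timing and switches of gaze), but the majority-vote baseline shows why prediction is possible at all: toddlers tend to look longer at the side they point to.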
Source: Infancy | Category: Child Development | Source Type: research (RESEARCH ARTICLE)