When scenes speak louder than words: Verbal encoding does not mediate the relationship between scene meaning and visual attention.

Mem Cognit. 2020 May 19.

Authors: Rehrig G, Hayes TR, Henderson JM, Ferreira F

Abstract: The complexity of the visual world requires that we constrain visual attention and prioritize some regions of a scene for attention over others. The current study investigated whether verbal encoding processes influence how attention is allocated in scenes. Specifically, we asked whether the advantage of scene meaning over image salience in attentional guidance is modulated by verbal encoding, given that we often use language to process information. In two experiments, subjects (N1 = 30 and N2 = 60) studied scenes for 12 s each in preparation for a scene-recognition task. Half of the time, subjects engaged in a secondary articulatory suppression task concurrent with scene viewing. Meaning and saliency maps were quantified for each of the experimental scenes. In both experiments, meaning explained more of the variance in visual attention than image salience did, both with and without the suppression task, particularly when we controlled for the overlap between meaning and salience. Based on these results, verbal encoding processes do not appear to modulate the relationship between scene meaning and visual attention. Our findings suggest that semantic information in the scene steers the attentional ship, c...