Unavoidable social contagion of false memory from robots to humans.

American Psychologist, Vol 79(2), Feb-Mar 2024, 285-298; doi:10.1037/amp0001230

Many of us interact with voice- or text-based conversational agents daily, but these agents may unintentionally retrieve misinformation from human knowledge databases, confabulate responses on their own, or purposefully spread disinformation for political ends. Does such misinformation or disinformation become part of our memory and further misguide our decisions? If so, can humans be protected from this social contagion of false memory? Using a social contagion of memory paradigm, we precisely controlled a social robot as an example of these emerging conversational agents. In a series of two experiments (ΣN = 120), the social robot occasionally misinformed participants before a recognition memory task. We found that the robot was as powerful as humans at influencing others: although the supplied misinformation was emotion- and value-neutral, and hence not intrinsically contagious or memorable, 77% of the socially misinformed words became the participants' false memories. To mitigate this social contagion of false memory, the robot also forewarned participants about its reservations toward the misinformation. However, one-time forewarnings failed to reduce false-memory contagion, and even relatively frequent, item-specific forewarnings could not prevent warned items from becoming false memories, although such forewarnings helped increase the participants' overall ...