We’re More Willing To Use Deceptive Tactics When A Bot Does The Negotiating

In this study, each participant in fact went up against a bot rather than a human player. The bots had one of four negotiating profiles: nice plus competitive, nice plus consensus-building, nasty plus competitive, and nasty plus consensus-building. None, however, used any deceptive tactics. At the end of the negotiation, the participants filled out the ANTI again, to indicate how they would behave next time.

The team found that whether the agent was nice or nasty didn’t change the ANTI ratings. However, while interacting with a competitive “tough” agent increased endorsement of deceptive tactics, interacting with a “fair” consensus-building agent reduced intentions to be deceptive. Though the agents had not used deception, even brief experience with a hard-ball negotiator bot made the participants more willing to be underhand. As the researchers write, even if participants are initially keen for their representative to negotiate fairly, “exposure to the real world of aggressive, tough negotiators is enough [to] make them forsake their qualms and embrace deception”.

The researchers say these findings should be taken into account when designing future agents to represent us in real-world negotiations: the work suggests that bots should be programmed to become more deceptive in response to a tough negotiation. As the team writes, “Shying away from this during agent design could lead to unsatisfied users and ...
Source: BPS Research Digest