CONVERSING WITH BOTS

Security Advisor Middle East|January 2020

WHAT ARE THE SECURITY IMPLICATIONS IN A WORLD WHERE PEOPLE DO NOT KNOW WHETHER THEY ARE INTERACTING WITH A COMPUTER OR A PERSON? SECURITY CORRESPONDENT DANIEL BARDSLEY INVESTIGATES.

Many of us have answered the telephone only to find that the voice on the other end of the line is not a real person at all.

The voice on the other end of the line may sound very similar to a human being, but the stilted delivery and lack of small talk give the game away. We are talking to a bot.

Law firms searching for people who have suffered accidents sometimes use bots to drum up new clients, as do companies attempting to convince individuals that their home computers have been hacked.

A fascinating new study undertaken by UAE researchers highlights the way in which we often do not react well when dealing with bots.

Published in Nature Machine Intelligence, the study involved testing how people behave when playing the “prisoner’s dilemma” game with a bot or a person.

A favourite test bed for game theory researchers, prisoner’s dilemma can be played over multiple rounds by two individuals who must choose each time whether to cooperate with or betray the other.

If both cooperate, each gets a modest pay-off. When both choose betrayal, each suffers. So it might seem that both should always cooperate.

But there is a complication: if one cooperates and the other betrays, the player who carries out the betrayal receives a larger pay-off than if both cooperate.

Over repeated games, the best strategy for both is to cooperate, but the temptation is always there to betray the other and receive a higher pay-off for that round.
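The pay-off structure described above can be sketched in a few lines of Python. The numeric values here are illustrative assumptions chosen to satisfy the relationships the article describes, not figures from the study:

```python
# Illustrative prisoner's dilemma pay-offs (values are assumptions,
# not taken from the Nature Machine Intelligence study).
# Each entry maps (player_a_move, player_b_move) to (a_payoff, b_payoff).
COOPERATE, BETRAY = "cooperate", "betray"

PAYOFFS = {
    (COOPERATE, COOPERATE): (3, 3),  # both cooperate: modest pay-off each
    (BETRAY, BETRAY): (1, 1),        # both betray: each suffers
    (COOPERATE, BETRAY): (0, 5),     # lone betrayer gets the largest pay-off
    (BETRAY, COOPERATE): (5, 0),
}

def play_round(move_a, move_b):
    """Return the pay-offs for one round of the game."""
    return PAYOFFS[(move_a, move_b)]

# The temptation the article describes: for a single round, betraying a
# cooperator pays more than mutual cooperation does.
assert play_round(BETRAY, COOPERATE)[0] > play_round(COOPERATE, COOPERATE)[0]
```

Any pay-off values satisfying that ordering (temptation > mutual cooperation > mutual betrayal > being the lone cooperator) would produce the same dilemma.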

As described in the paper, which is entitled “Behavioural Evidence for a Transparency-Efficiency Trade-off in Human-Machine Cooperation”, hundreds of volunteers took part in the study online.

About a quarter of the volunteers played against a bot and were told that this was the case, while a further quarter played against a bot but thought they were playing against a human.

Another quarter played against a human and were informed of this, while the other quarter played against a human but had been told that the other player was a bot.

Overall, the bot got better results than the human players, which is perhaps not surprising given bots’ emotional detachment and superior analytical powers.

But, in a crucial result, the bots performed worse when the people that they were playing against knew that they were up against a bot.
