Chatbots: Opportunity and threat

Simran Gambhir, CEO, Ganemo Group | June 8, 2017

Chatbots are on the rise. In the first six months after Facebook introduced a bot API for Messenger, around 34,000 chatbots were created. They're being used for a range of purposes, from pre-sales engagement to customer support and troubleshooting.

But in light of the WannaCry global ransomware attacks and new threats such as EternalRocks, chatbots may represent yet another opening for cyberattacks.

There are two key ways a chatbot can be implemented, and many deployments use a hybrid of the two:

  • human-powered, with a live operator at the other end, usually supported by a script
  • automated responses based on hard-coded rules, increasingly using AI (see the sketch below)
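
As a rough illustration of the second approach, the sketch below shows a hypothetical hard-coded, rule-based responder in Python. The keywords, replies and function names are invented for illustration and don't correspond to any particular platform's API; anything the rules don't cover falls back to a human handoff, which is the hybrid model most operators run today.

    # Minimal sketch of a hard-coded, rule-based responder (illustrative only).
    # Real deployments sit behind a messaging platform's webhook; the core loop
    # of "match a rule, return a canned reply" looks like this.

    RULES = {
        "refund": "I can help with refunds. Could you share your order number?",
        "password": "To reset your password, use the 'Forgot password' link on the login page.",
        "hours": "Our support team is available 9am-5pm, Monday to Friday.",
    }

    FALLBACK = "I'm not sure I understand. Let me connect you to a human agent."


    def respond(message: str) -> str:
        """Return the first canned reply whose keyword appears in the message."""
        text = message.lower()
        for keyword, reply in RULES.items():
            if keyword in text:
                return reply
        return FALLBACK


    print(respond("How do I get a refund?"))   # matches the "refund" rule
    print(respond("Tell me a joke"))           # falls through to the human handoff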

While it's currently more likely than not that a live human support agent is there, this is shifting. The goal is a fully automated but human-like experience, which is where AI and Natural Language Processing come in. It's about moving from a retrieval-based model, which draws on a repository of pre-defined responses, to a generative approach where new responses are created from scratch.
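
To make the retrieval idea concrete, here is a minimal sketch, assuming a small repository of pre-defined prompt/response pairs and simple word-overlap scoring. The repository contents and threshold are invented for illustration; a production system would score matches with trained language models, and a generative model would compose a new reply rather than pick a stored one.

    # Retrieval-based sketch: score the user's message against a repository of
    # pre-defined prompt/response pairs and return the closest match, instead
    # of generating new text. Repository contents and scoring are illustrative.

    REPOSITORY = [
        ("how do i reset my password", "Use the 'Forgot password' link on the login page."),
        ("what are your opening hours", "We're open 9am-5pm, Monday to Friday."),
        ("i want a refund for my order", "I can help with that. What's your order number?"),
    ]


    def words(text: str) -> set:
        """Lower-case the text and split it into a set of bare words."""
        return {w.strip("?.!,") for w in text.lower().split()}


    def similarity(a: str, b: str) -> float:
        """Word-overlap (Jaccard) similarity between two strings, from 0.0 to 1.0."""
        wa, wb = words(a), words(b)
        return len(wa & wb) / len(wa | wb) if wa | wb else 0.0


    def retrieve(message: str, threshold: float = 0.2) -> str:
        """Return the stored response for the most similar prompt, or a fallback."""
        prompt, response = max(REPOSITORY, key=lambda pair: similarity(message, pair[0]))
        if similarity(message, prompt) < threshold:
            return "Sorry, I don't have an answer for that yet."
        return response


    print(retrieve("How can I reset my password?"))  # closest stored prompt wins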

People tend not to react well to a bot if they perceive it's just a machine messaging from a script. This leads to a cold user experience, the exact opposite of the customer engagement most chatbot operators are trying to achieve.

When chatbots get it right, they can be a powerful tool for winning trust and getting customers to share more personal information. The more a user trusts a bot, the more they may let their guard down. Combine this trust with the power and scale of automated bots, and you can see what an opportunity they could represent for criminals.

 

Harvesting data

Cybercriminals could harvest information gathered through chatbots for targeted spear-phishing attacks. This is already possible if a browser or network is compromised, but the information sent through messaging may be an even richer vein of data, because people tend to become more comfortable in a real-time conversation. With a service such as Facebook, any data sent through Messenger is linked to someone's real-life profile and is very valuable to hackers.

 

Infiltrating chatbots

As well as hacking chatbots and stealing data, cybercriminals could actively take them over and impersonate the official providers. This would give them an unprecedented ability to get users to hand over passwords and financial information. Criminals in the guise of "tech support" at an ISP or bank already use voice calls to trick victims; chatbots represent another channel for this, and an easier one for criminals to automate.
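
One basic safeguard on the operator's side is to confirm that incoming bot traffic really originates from the messaging platform rather than an impostor. Facebook's Messenger Platform, for instance, signs each webhook delivery with an HMAC-SHA1 of the raw request body, keyed with the app secret and sent in the X-Hub-Signature header. The sketch below assumes that scheme; app_secret and raw_body are placeholders for values your web framework supplies.

    import hashlib
    import hmac

    # Sketch of verifying a Messenger-style webhook signature. The platform
    # sends an X-Hub-Signature header of the form "sha1=<hex digest>", where
    # the digest is an HMAC-SHA1 of the raw request body keyed with the app
    # secret. app_secret and raw_body are placeholders supplied by your
    # framework; reject the request if verification fails.


    def is_genuine(app_secret: bytes, raw_body: bytes, signature_header: str) -> bool:
        """Return True only if the header matches our own HMAC of the body."""
        if not signature_header.startswith("sha1="):
            return False
        expected = hmac.new(app_secret, raw_body, hashlib.sha1).hexdigest()
        received = signature_header[len("sha1="):]
        # Constant-time comparison avoids leaking information through timing.
        return hmac.compare_digest(expected, received)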

 

Rogue chatbots

Cybercriminals could create fake chatbots and embed them in compromised sites or pop-up ads so that they appear to belong to a legitimate site. With email, people have learnt to recognise the warning signs and check the sender to verify whether a message is genuine. Chatbots are newer, so the risk to users is higher: people aren't yet seasoned at spotting fakes and have only limited awareness of the dangers.

 
