

How to protect chatbot data and user privacy

Terena Bell | Sept. 27, 2017
Employees and customers often enter sensitive information during chatbot sessions, but you can minimize chatbot security and privacy risks.


Are chatbots your next big data vulnerability? Yes. Those little add-ons to Slack and other messaging apps that answer basic HR questions, conduct company-wide polls, or collect information from customers before connecting them to a person pose a real security risk.

Because of the way we buy bots, Rob May, CEO of chatbot vendor Talla, says the IT industry is heading toward a data security crisis. "In the early days of SaaS [software as a service]," he explains, software "was sold as, 'Hey, marketing department, guess what? IT doesn't have to sign off, you just need a web browser,' and IT thought that was fine until one day your whole company was SaaS." Suddenly, critical operations were managed by platforms bought without any user or data management best practices in place. To head off a similar vulnerability with chatbots, May recommends streamlining bot purchasing and implementation now.

Unfortunately, employees might already be using chatbots to share salary information, health insurance details, and similar data. So what steps can IT take now to keep that data safe? How do you stop this vulnerability before it starts? What other questions should you be asking?


Understand how chatbots will be used

Start by triaging the current situation, says Priya Dodwad, a developer at computer and network security provider Rapid7. Then, before you build or buy anything else, interview users. This helps in two ways: First, user responses show whether the chatbots you’re considering will actually be used as planned, which improves adoption and productivity. Second, interviewing helps you assess the threat level: you can better prepare for chatbot privacy concerns when you know what kind of data you’re protecting.

When Rapid7 considers a new chatbot, Dodwad says, “We start off thinking, ‘Okay, what is the information that’s going to be with it? Is it going to be PII [personally identifiable information] data or data that’s confidential or revenue related?’ Those bots concern us the most.” Rapid7 runs bots that do something non-critical—like paste gifs into Slack chat—through a less stringent process.
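That kind of triage can be reduced to a short intake checklist. The sketch below is a hypothetical illustration, not Rapid7's actual process: it maps the kinds of data a proposed bot's owner says it will touch (the category names and tiers are assumptions) to the level of review it should get before anyone installs it.

```python
# Hypothetical intake helper for triaging proposed chatbots by the data they
# will handle. Categories and tiers are illustrative, not any vendor's process.

REVIEW_TIERS = {
    # Checked in order, strictest first.
    "full_security_review": {"pii", "health", "financial", "revenue", "credentials"},
    "standard_review": {"internal_docs", "hr_policies"},
    "lightweight_review": {"public", "none"},
}


def triage_bot(name: str, data_types: set[str]) -> str:
    """Return the strictest review tier triggered by the bot's declared data types."""
    for tier, sensitive_types in REVIEW_TIERS.items():
        if data_types & sensitive_types:
            return tier
    return "lightweight_review"


if __name__ == "__main__":
    # A gif bot that touches no sensitive data gets the light-touch process...
    print(triage_bot("giphy-bot", {"public"}))            # lightweight_review
    # ...while an HR benefits bot goes through the full review.
    print(triage_bot("benefits-bot", {"pii", "health"}))  # full_security_review
```

The point is not the code itself but forcing the question up front: no one should be able to wire a bot into Slack without declaring what data it will see.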

The problem with this triage approach, though, is that people sometimes discuss serious matters in a frivolous tool. Jim O’Neill, former CIO at HubSpot, says, “Learn that your humans will volunteer data.” Using a gif bot, for example, one employee might send another a funny get-well message. Next thing you know, they’re discussing the latter’s cancer diagnosis. “If you think about conversational interactions with bots, we’re naturally going to be giving up more information than we intend to,” he continues.

To do their jobs, chatbots need to ask questions. The data they collect helps them assess the situation and train. O’Neill says, “As the bots ask more—because they’re trying to be helpful and learn more—sensitive data will just naturally get in there.” For example, think about a bot that routes health insurance customers to the appropriate department for help. First, it asks for the customer’s claim number, but then the user types, “It’s 4562 and I need to know if STD tests are covered because I gotta do something about this rash.”
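You cannot stop users from typing sensitive details into a free-text field, but you can limit where that text ends up. Here is a minimal sketch of one mitigation, scrubbing obvious identifiers and health terms from a message before the bot logs or forwards it; the regexes, keyword list, and placeholder tags are all assumptions for illustration, not any particular product's filters.

```python
import re

# Illustrative only: scrub likely-sensitive patterns from a chatbot message
# before it is persisted. A real deployment would pair this with a dedicated
# DLP or PII-detection service rather than hand-rolled patterns.

SSN_LIKE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")   # US SSN-style numbers
ID_NUMBER = re.compile(r"\b\d{4,}\b")             # e.g. the claim number "4562"
HEALTH_TERMS = {"diagnosis", "cancer", "std", "rash", "prescription"}


def redact(message: str) -> str:
    """Replace likely-sensitive tokens with placeholders before logging."""
    message = SSN_LIKE.sub("[REDACTED-ID]", message)
    message = ID_NUMBER.sub("[REDACTED-NUMBER]", message)
    cleaned = []
    for word in message.split():
        if word.strip(".,!?").lower() in HEALTH_TERMS:
            cleaned.append("[REDACTED-HEALTH]")
        else:
            cleaned.append(word)
    return " ".join(cleaned)


if __name__ == "__main__":
    raw = ("It's 4562 and I need to know if STD tests are covered "
           "because I gotta do something about this rash")
    print(redact(raw))
    # It's [REDACTED-NUMBER] and I need to know if [REDACTED-HEALTH] tests
    # are covered because I gotta do something about this [REDACTED-HEALTH]
```

A pattern filter like this is a stopgap, but it illustrates the principle: assume users will volunteer more than the bot asked for, and design the logging and routing pipeline accordingly.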


