
How to protect chatbot data and user privacy

Terena Bell | Sept. 27, 2017
Employees and customers often enter sensitive information during chatbot sessions, but you can minimize chatbot security and privacy risks.

 

Who else sees chatbot information?

IT needs to prepare for unexpected data entering the system, and CSOs should also ask who will see that information. When evaluating a new vendor, May recommends asking where the data ultimately goes: Is it stored locally or in the cloud? To whom is it routed? How does the bot get trained?

As with most machine learning, real people often check an enterprise chatbot’s work to improve the engine. If human review is part of your vendor’s process, May says to ask, “Who sees the data? Does it go out on [Amazon’s] Mechanical Turk? Does it go out on a crowd file? Do you care?”

“There’s a tradeoff,” he continues. “Sometimes [the chatbot] might be the only way to get done what you need done and so you have to deal with that. You have to decide: Can your data go out there? Where does it go and how do these things train?”
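If transcripts do have to leave your environment for crowd review or model training, one way to reduce exposure is to scrub obvious identifiers first. Below is a minimal Python sketch of that idea; the regex patterns and placeholder labels are illustrative, not a complete PII-detection solution.

```python
import re

# Illustrative patterns only -- real deployments need broader, locale-aware
# PII detection (names, street addresses, medical terms, and so on).
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b(?:\+?1[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(transcript: str) -> str:
    """Replace recognizable identifiers with placeholders before a
    transcript leaves your environment for crowd review or training."""
    for label, pattern in PII_PATTERNS.items():
        transcript = pattern.sub(f"[REDACTED {label.upper()}]", transcript)
    return transcript

print(redact("My email is jane@example.com and my cell is 555-867-5309."))
# -> My email is [REDACTED EMAIL] and my cell is [REDACTED PHONE].
```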

One solution, May adds, would be to implement a service level agreement (SLA) addressing chatbot risks. In addition to including uptime requirements, quality expectations, and other matters you’d typically find in an SLA, make sure your agreement addresses chatbot encryption and similar security expectations: What external providers—like Turk—does the vendor work with? Will they maintain SSAE-16/SSAE-18 certification or SOC 2 compliance for the length of the contract? What happens if they don’t?
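Contract language can also be backed up with quick technical checks of your own. As one small example, the Python sketch below, which uses only the standard library, reports the TLS version a hypothetical vendor endpoint negotiates; it helps confirm encryption in transit but is no substitute for the vendor's SOC 2 or SSAE-18 report.

```python
import socket
import ssl

def negotiated_tls_version(hostname: str, port: int = 443) -> str:
    """Connect to a vendor endpoint and report the TLS version it
    negotiates -- a quick sanity check on encryption in transit."""
    context = ssl.create_default_context()  # also validates the certificate chain
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.version()

# "chatbot.example-vendor.com" is a placeholder hostname.
print(negotiated_tls_version("chatbot.example-vendor.com"))
```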

 

Start with a chatbot proof of concept

To mitigate risk, Dodwad says most external chatbots at Rapid7 start off as a proof of concept (POC). Only after a successful POC are they more broadly deployed. She says the POC is also a chance to reassess need: “It’s important to see what’s the coverage of that bot: Is it going to reach all the employees or is it just for a particular department? Things like that influence how we plan the deployment and the training around it.” Although Rapid7 is a technology company, Dodwad says many of its employees “are not very technical, so we need to make sure [the bot] is very intuitive.”


The more intuitive, the better—not just so the chatbot can provide the solution it was bought for, but also so users won’t enter private, unnecessary data. Going back to our health insurance example, if users are providing too much data, make the chatbot easier to use. If the bot asks, “Please tell me your claim number and your claim number only,” fewer users will talk about their rash.
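In practice, “your claim number and your claim number only” translates to strict input validation: accept the expected format and discard everything else before it is logged. Here is a minimal Python sketch; the claim-number format and prompt wording are made up for illustration.

```python
import re

# Hypothetical claim-number format: two letters followed by eight digits.
# Swap in whatever format your claims system actually issues.
CLAIM_NUMBER = re.compile(r"[A-Z]{2}\d{8}")

def handle_claim_prompt(user_input: str) -> str:
    """Accept only a well-formed claim number; anything else is
    discarded unlogged and the user is re-prompted."""
    candidate = user_input.strip().upper()
    if CLAIM_NUMBER.fullmatch(candidate):
        return f"Thanks -- looking up claim {candidate}."
    # The rejected free text is never stored or forwarded.
    return "Please tell me your claim number and your claim number only (e.g., AB12345678)."

print(handle_claim_prompt("ab12345678"))           # accepted
print(handle_claim_prompt("I have this rash..."))  # re-prompted, text not logged
```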

For employee-facing chatbots, user training teaches staff what level of information is and is not appropriate to share. Employee training also lowers the risk of rogue implementation—like the kind companies saw in SaaS’s early days. If employees understand why chatbot privacy concerns are important, they’ll be more likely to run new bots by IT before installation.

 
