AI technologies open the doors to amazing new enterprise functionality. To operate, AI depends on data – lots of it. Unfortunately, hackers would love to get their hands on all this data as well. Here are some AI security tips to keep your data safe from the bad guys.
[Note: The topics and subjects covered in this article will be discussed in-depth during IPsoft’s second annual Digital Workforce Summit on June 7. See registration link below.]
Modern AI security means keeping customer data safe from adversaries
Artificial intelligence (AI) is quickly becoming an integral part of the modern business landscape. For every user-facing role taken on by digital colleagues like IPsoft’s Amelia, a small army of less-personified algorithms is working behind the scenes to intelligently simplify business processes – 80% of global enterprises are using AI technologies according to a report from research firm Vanson Bourne. Regardless if it’s on the front- or back-end of your business, there is one thing AI requires to learn and function: data. Tons of it. This leads to an important issue that has taken on increased urgency: How to keep all that AI-fueling data secure?
The recent scandal involving misappropriated Facebook data (along with recent stories regarding massive data hacks) has heightened the public’s sensitivity to how their data is being handled, and prompted enterprises to make data privacy a priority in the face of potential new regulations.
AI will continue to develop as a powerful business tool, however all its prowess and potential will be for naught if customers don’t trust your systems with their data. The good news is there are some basic steps you can take to mitigate these concerns.
Keep it local
How and where to deploy AI depends on a company’s size, goals and needs. However, AI deployed on-premises is, in a sense, more secure than AI deployed in the cloud. The fewer trips your data makes outside of your secure environment, the fewer chances there are for it to fall into the wrong hands while in transit (or otherwise outside of your direct control).
Cloud-based AI solutions aren’t doomed to fail, by any means, but companies must take additional steps to ensure data security, such as using secure enclaves, where access to data is tightly regulated.
Keep it cryptic
"AI requires a ton of data, so the privacy implications are bigger," says Andras Cser, vice president and principal analyst at Forrester Research as quoted by TechTarget. "There's potential for a lot more personally identifiable data being collected. IT definitely needs to pay attention to masking that data." Fortunately, there are a number of ways AI can utilize all the data it needs without diminishing privacy concerns.
One of the most direct ways to secure data is to encrypt it, but contemporary encryption solutions can make that data a challenge for algorithms to work with. “Differential privacy” is an alternative solution gaining popularity, in which some noise or randomness is added to large databases. This allows algorithms to efficiently tap relevant data while making it difficult to identify any individual or specific source within that data. Even if large amounts of data fell into the wrong hands, differential privacy adds a layer of protection for your customers.
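To make the idea concrete, here is a minimal sketch of the classic Laplace mechanism, the textbook way to add calibrated noise to a counting query. The function name, dataset and epsilon value are illustrative, not from any particular product:

```python
import math
import random

def dp_count(values, predicate, epsilon=0.5):
    """Return a differentially private count using the Laplace mechanism.

    A counting query has sensitivity 1: adding or removing one person
    changes the true count by at most 1, so noise drawn from a Laplace
    distribution with scale 1/epsilon hides any individual's presence.
    """
    true_count = sum(1 for v in values if predicate(v))
    scale = 1.0 / epsilon
    # Sample Laplace noise via the inverse-CDF method on a uniform draw.
    u = random.random() - 0.5
    noise = -scale * (1 if u >= 0 else -1) * math.log(1 - 2 * abs(u))
    return true_count + noise
```

Smaller values of epsilon mean more noise and stronger privacy; the analyst sees a count that is approximately right in aggregate but reveals almost nothing about any single record.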
Keep on the lookout for new threats
As AI is a relatively new tool for business, there are data threats on the horizon that we haven’t even considered. For example, security researchers have devised methods of “adversarial learning,” in which the machine learning “supply chain” is intentionally altered to sabotage the results. In fact, we’ve already seen at least one highly public adversarial learning “attack” in action: Two years ago, trolls intentionally tricked Microsoft’s social media AI “Tay” into returning crude and offensive remarks. Tay was a mostly benign (if embarrassing) episode; however, the incident illustrates the potential for bad players to cause real trouble in the AI space. Imagine the chaos and destruction that might be unleashed if, say, a self-driving car was purposefully “tricked” into not recognizing stop signs.
Adversarial learning is just one example of an ascendant AI threat that wasn’t on most security radars until recently. It is incumbent upon today’s business decision-makers to keep up to date on the latest threats, and to keep an open mind about threats they haven’t yet encountered.
Keep it transparent
All transactions involving customer data should come with a clear and detailed record, whether the data is accessed by a machine or a human. Maintaining detailed records will deter bad human actors, and will allow processes involving machines to be readily audited (and, should it be necessary, altered) moving forward.
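As a minimal sketch of what such a record might contain – the field names and the human/machine distinction here are illustrative assumptions, not a standard schema – each data access can be captured as a small, append-only entry:

```python
import datetime
import json

def audit_record(actor, actor_type, action, record_id):
    """Build one append-only audit entry for a data access event.

    actor_type distinguishes human users from automated agents, so
    machine-driven processes can be filtered and audited on their own.
    """
    return json.dumps({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "actor": actor,            # who touched the data
        "actor_type": actor_type,  # e.g. "human" or "machine"
        "action": action,          # e.g. "read", "update", "delete"
        "record_id": record_id,    # which customer record was involved
    })
```

Writing entries like this to an append-only store gives auditors a trail they can replay, whether the access came from a staff member or an algorithm.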
Keep your team up to date
Perhaps the most important element of data security is internal practices and procedures. This includes properly training your human staff – just as you would train your AI systems – to follow up-to-date data protocols, and teaching them to recognize the latest data threats. Team members should only be granted access to data as needed to perform their job functions. Restricting privileges is necessary to prevent widespread misuse by a bad player.
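The "access only as needed" rule is often implemented as a simple role-to-permission mapping. The roles and permission strings below are hypothetical, just to show the least-privilege pattern:

```python
# Hypothetical role-to-permission map illustrating least-privilege access.
ROLE_PERMISSIONS = {
    "support_agent":  {"read:tickets"},
    "data_scientist": {"read:tickets", "read:training_data"},
    "admin":          {"read:tickets", "read:training_data",
                       "write:training_data"},
}

def is_allowed(role, permission):
    """Grant access only if the role explicitly includes the permission.

    Unknown roles get an empty permission set, so the default is deny.
    """
    return permission in ROLE_PERMISSIONS.get(role, set())
```

Because the default is deny, a compromised support account cannot be used to reach training data – the blast radius of any one bad actor stays small.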
Keep your confidence high
Thanks to new AI technologies, the next decade promises to bring as much change to the way business is done as the century that preceded it. By staying nimble and focused on evolving security needs, companies that embrace AI can hurdle security challenges while still capitalizing on new opportunities.