October 24, 2019
If you’re anything like me, you don’t have to step outside your front door to see what an impact artificial intelligence (AI) is having on our lives. My virtual assistant helps me wake up at the right time, tells me what weather to expect, and schedules those all-important anniversary reminders. And once I’m on the road, my satellite navigation system finds me the quickest route while news updates stream to my phone based on my preference history.
But what exactly is AI and is the current hype surrounding it valid? In a new technology brief from NACD and Accenture Security, we look at the nuts and bolts of AI, where it comes from, and how it works. Here are some of the report’s ideas on the opportunities and risks of AI, and how organizations can take their first steps toward responsibly employing it.
AI is far from a new idea—but it does offer new opportunities. AI is likely to become a new driver of economic value for organizations, but businesses may find it difficult to leverage this technology without first understanding the opportunities it presents. To set a clearer path forward, corporate leaders should consider doing the following:
AI benefits don’t come risk-free. Organizations should start their AI journeys with a clear-eyed view of the likely risks. AI-associated cyber risks fall into two broad categories: data integrity and algorithm manipulation. In so-called “poisoning attacks,” threat actors alter the learning and decision-making capabilities of an AI system by tampering with the data used to train the model. The algorithms themselves must also be protected from manipulation by threat actors hoping to change the outcomes of AI systems for malevolent purposes.
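The training-data manipulation described above can be made concrete with a toy sketch. The classifier, labels, and numbers below are all hypothetical (a 1-D nearest-centroid model, far simpler than anything in production); the point is only to show how a handful of mislabeled records injected into a training set can flip a model's decision on an unchanged input.

```python
# Toy illustration of a data "poisoning attack" on a trivial
# 1-D nearest-centroid classifier. All names and numbers are hypothetical.

def centroid(points):
    """Mean of a list of 1-D feature values for one class."""
    return sum(points) / len(points)

def classify(x, training_data):
    """Label x with the class whose training centroid lies nearest."""
    return min(training_data,
               key=lambda label: abs(x - centroid(training_data[label])))

# Clean training set: the model learns sensible class centers (2 and 9).
clean = {"benign": [1, 2, 3], "malicious": [8, 9, 10]}
print(classify(4, clean))       # -> benign (|4-2| = 2 beats |4-9| = 5)

# A threat actor with write access to the training pipeline injects two
# mislabeled records into the "malicious" class, dragging its centroid
# from 9 down to 5.6 -- into the benign region.
poisoned = {"benign": [1, 2, 3], "malicious": [8, 9, 10, 0, 1]}
print(classify(4, poisoned))    # -> malicious: same input, flipped outcome
```

Nothing about the model's code changed between the two calls; only the training data did, which is why the report treats data integrity as a security boundary in its own right.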
Four principal risks should be considered in the near term:
Many of the cybersecurity investments companies make today are dedicated to securing the infrastructure underpinning AI models: patching vulnerabilities in software and systems, implementing robust access management so that employees can reach only the information they need to do their jobs, and prioritizing the security of the firm’s most valuable data assets. Adopting AI generally creates an entirely new area of infrastructure to secure, the AI models themselves, and requires stronger security practices to mitigate these vulnerabilities.
Here are some suggestions for meeting the many challenges of secure AI governance:
This article only scratches the surface of a broad topic that is going to have an even greater impact on our individual lives in the future. We know that data integrity is a fundamental requirement for securing AI against malevolent influence, and we also know that AI raises ethical challenges as people adjust to the larger and more prominent role of automated decision making in society. Going forward, our report concludes, the emphasis needs to be on engineering resilient modeling structures and strengthening critical models against cyberattack.
If you’d like to pressure-test your management’s preparedness to assess and mitigate the risks associated with AI, take a look at the board primer on artificial intelligence today. It may help to open the dialogue in your organization to some of the questions—and answers—that you need.
Bob Kress is a managing director, co-chief operating officer, and global quality and risk officer for Accenture Security.