Topics: Risk Management, Strategy, Technology

October 24, 2019

The How, Why, and What of Artificial Intelligence

If you’re anything like me, you don’t have to step outside your front door to see what an impact artificial intelligence (AI) is having on our lives. My virtual assistant helps me to wake up at the right time, informs me what weather I can expect, and schedules those all-important anniversary reminders. And once I’m on the road, my satellite navigation system finds me the quickest route while news updates stream to my phone based on my preference history.

But what exactly is AI, and is the current hype surrounding it justified? In a new technology brief from NACD and Accenture Security, we look at the nuts and bolts of AI, where it comes from, and how it works. Here are some of the report’s ideas on the opportunities and risks of AI, and how organizations can take their first steps toward responsibly employing it.

AI is far from a new idea—but it does offer new opportunities. AI is likely to become a new driver of economic value for organizations, but businesses may find it difficult to leverage this technology without first understanding the opportunities it presents. To set a clearer path forward, corporate leaders should consider doing the following:

  • Review and, where appropriate, introduce automation into business processes,
  • Assess how AI can augment employees’ current work, and
  • Avoid concentrating or limiting this technology; instead, diffuse it throughout business units or functions.

AI benefits don’t come risk-free. Organizations should start their AI journeys with a clear-eyed view of the likely risks. AI-associated cyber risks fall into two broad categories: data integrity and algorithm manipulation. The learning and decision-making capabilities of AI can be altered by threat actors who modify the data used in the training process; these data-integrity breaches are often called “poisoning attacks,” because corrupting the training data manipulates the machine learning model itself. The algorithms themselves should also be protected from manipulation by threat actors hoping to change the outcomes of AI systems for malevolent purposes.
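
To make the data-integrity risk concrete, here is a minimal, hypothetical sketch (not drawn from the report) of a label-flipping poisoning attack, in which a modest fraction of corrupted training labels can degrade a simple model; the dataset, model, and percentages are illustrative assumptions only.

```python
# Illustrative sketch only: a toy label-flipping ("poisoning") attack.
# Assumes numpy and scikit-learn are installed; all figures are hypothetical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A small synthetic classification task stands in for real training data.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Model trained on trustworthy labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# An attacker silently flips the labels of 15% of the training records.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
flip = rng.choice(len(poisoned), size=int(0.15 * len(poisoned)), replace=False)
poisoned[flip] = 1 - poisoned[flip]
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```

The point of the sketch is simply that the attack never touches the model’s code; corrupting the inputs is enough to change its behavior, which is why data integrity sits at the center of the risk categories above.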

Four principal risks should be considered in the near term:

  • Trust and transparency: Complex forms of AI often operate in ways that make it hard to explain how they arrived at their results. New approaches are needed to better explain the processes underlying AI decisions, and decisions taken by AI must be open to interrogation or appeal.
  • Liability: Executive leaders and the board should carefully monitor changes in legislative and regulatory requirements to ensure compliance.
  • Control: Careful thought is needed on when and how control is or should be shared or transferred between humans and AI.
  • Security: As AI spreads across every sector, security becomes paramount, and the challenge is compounded by the current lack of protection for both AI models and the data used to train them. Boards should ensure they are asking the right questions of management and outside advisors to secure their burgeoning AI tools.

Securing AI

Much of companies’ current investment in cybersecurity goes to securing the infrastructure underpinning AI models. This includes patching vulnerabilities in software and systems, implementing robust access management so that employees can reach only the information they need to do their jobs, and prioritizing the security of the firm’s most valuable data assets. Adopting AI systems, however, creates an entirely new layer of infrastructure to secure, namely the AI models themselves, and requires stronger security practices to mitigate the resulting vulnerabilities.

Here are some suggestions around meeting the many challenges of secure AI governance:

  • Limit the AI learning rate. Limiting the volume of data ingested into an AI system over a set period can act as a major deterrent to hackers, since the learning process will take longer and malevolent data may be spotted more easily.
  • Validate and protect AI input. In assessing data integrity practices, both around protection and validation, companies should focus on the inputs to AI models and confirm that they originate from identifiable, trusted sources (a simple sketch of such a check follows this list).
  • Restrict access to AI models. Restricting access to AI models by limiting certain employees’ ability to make ad hoc changes is one of the most effective forms of defense.
  • Train AI to recognize attacks. If enough labeled examples of malicious input are deliberately included during the training phase, a machine learning model can learn to recognize toxic data and reject adversarial attacks. Business continuity and disaster recovery are also vital practices: organizations should understand how to retrain and recover after a cyberattack without negatively affecting the business.
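
As one way to picture the “validate and protect AI input” suggestion above, here is a minimal, hypothetical sketch of a pre-ingestion gate that accepts only records from an allow-listed source and within expected value ranges before they ever reach a training pipeline. The source names, fields, and bounds are assumptions for illustration, not part of the NACD and Accenture Security brief.

```python
# Illustrative sketch only: a simple pre-ingestion gate for training data.
# Source names, fields, and bounds are hypothetical.
from dataclasses import dataclass

TRUSTED_SOURCES = {"internal_crm", "audited_vendor_feed"}    # allow-listed provenance
FEATURE_BOUNDS = {"transaction_amount": (0.0, 1_000_000.0)}  # expected value ranges

@dataclass
class Record:
    source: str      # where the record came from
    features: dict   # feature name -> value

def accept(record: Record) -> bool:
    """Reject records from unknown sources or with out-of-range values."""
    if record.source not in TRUSTED_SOURCES:
        return False
    for name, (low, high) in FEATURE_BOUNDS.items():
        value = record.features.get(name)
        if value is None or not (low <= value <= high):
            return False
    return True

# Only validated records ever reach the model's training pipeline.
incoming = [
    Record("internal_crm", {"transaction_amount": 120.50}),
    Record("unknown_scraper", {"transaction_amount": 99.0}),      # untrusted source
    Record("audited_vendor_feed", {"transaction_amount": -5.0}),  # out of range
]
training_batch = [r for r in incoming if accept(r)]
print(f"accepted {len(training_batch)} of {len(incoming)} records")
```

In practice, the same gate is also a natural place to enforce a cap on ingestion volume (the “learning rate” limit suggested above), so that anomalous or malevolent data has a better chance of being spotted before it is learned.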

This article only scratches the surface of a broad topic that will have an even greater impact on our individual lives in the future. We know that data integrity is a fundamental requirement for securing AI against malevolent influence, and we also know that AI raises ethical challenges as people adjust to the larger and more prominent role of automated decision making in society. Going forward, our report concludes, the emphasis needs to be on engineering resilient modeling structures and strengthening critical models against attack by threat actors.

If you’d like to pressure-test your management’s preparedness to assess and mitigate the risks associated with AI, take a look at the board primer on artificial intelligence today. It may help to open the dialogue in your organization to some of the questions—and answers—that you need.

Bob Kress is a managing director, co-chief operating officer, and global quality and risk officer for Accenture Security.
