Amid Pandemic and Social Unrest, AI Risk Mitigation Is More Important Than Ever

By Ben Hoster

12/18/2020


Artificial intelligence (AI) has truly embedded itself into the business landscape. No longer the purview of Big Tech companies alone, firms across various industries are actively integrating AI into their processes, acquiring tech start-ups, and scouting opportunities to deploy the technology in the near future. COVID-19 has only accelerated this trend as businesses have contended with plummeting revenue and workforce restrictions.

But as companies increasingly look to AI to solve business challenges and boost profitability, what risks will they face? How might they mitigate such risks? What can boards do to better oversee management’s role in AI use and risk mitigation?

For all the substantial benefits the technology promises, successful AI deployment is fundamentally a question of tradeoffs, especially for traditional, non-tech businesses. To limit severe financial and reputational harm, companies must weigh those benefits against the risks intrinsic to AI's use, as well as the associated concerns of the broader community. Consider, as one particularly pertinent example, the myriad ways in which AI has been deployed in response to the global pandemic: from contact tracing to enhanced infection risk profiling, those who develop and deploy such cutting-edge techniques must carefully balance the dual imperatives of public health and individual liberties.

Of critical importance is the risk of algorithmic bias, which is climbing the public agenda so quickly (more restrictive legislation is likely coming to the European Union in 2021, for example) that directors urgently need to familiarize themselves with their companies' potential liabilities and encourage senior leadership to take active steps to manage these risks. Given the self-learning and automated nature of AI, even simple algorithmic rules and inputs can produce wildly unpredictable outputs with potentially harmful consequences. Numerous controversies in recent years have shown that AI systems can inadvertently generate biased and potentially discriminatory outputs when the dataset used to "teach" an algorithm is insufficiently representative. The problem is exacerbated when historical data is used for training, codifying and reinforcing systemic inequalities and discrimination that often go unnoticed within societies and organizations. Big-name tech firms with dedicated AI specialists on hand have long struggled with this issue; non-tech companies are at even greater risk of intense public scrutiny and brand damage.
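To make this risk concrete, the short Python sketch below shows one simple check a review team might run: comparing approval rates across groups in a hypothetical set of model decisions. The data, column names, and the four-fifths threshold are illustrative assumptions only, not a substitute for a full fairness review.

    # Illustrative sketch: a simple disparate-impact check on hypothetical model outputs.
    # The data and the four-fifths threshold are assumptions for illustration only.
    import pandas as pd

    # Hypothetical decision log: one row per applicant, with the model's decision.
    decisions = pd.DataFrame({
        "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
        "approved": [1, 1, 1, 0, 1, 0, 0, 0],
    })

    # Approval rate per group.
    rates = decisions.groupby("group")["approved"].mean()

    # Disparate-impact ratio: lowest group approval rate divided by the highest.
    ratio = rates.min() / rates.max()
    print(rates)
    print(f"Disparate-impact ratio: {ratio:.2f}")

    # A common rule of thumb flags ratios below 0.8 (the "four-fifths rule") for review.
    if ratio < 0.8:
        print("Flag for review: approval rates differ materially across groups.")

Even a check this basic surfaces the kind of disparity that, left unexamined, can turn into the public controversies described above; production systems would require far more rigorous testing across many metrics and subgroups.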

Another domain of risk requiring attention is that of public-facing "black box" AI models that make decisions on sensitive or consequential issues, such as credit-risk assessments and medical diagnoses (consider AI-enabled clinical decision support tools used to determine COVID-19 treatment). A lack of transparency and traceability in the decision-making process, particularly when AI applications are procured externally, exposes businesses to significant reputational harm. Especially when adverse outcomes for customers and staff are possible, organizations must be able to explain and defend algorithm-based decision processes and their outputs to a range of stakeholders, including subject-matter experts and, in cases of alleged malpractice, the legal community.
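What it means for an algorithm-based decision to be explainable and repeatable is easier to see with a small illustration. The Python sketch below uses a deliberately simple, hypothetical scoring rule and records the inputs, per-factor contributions, and model version behind each decision so that the outcome can later be reconstructed and defended; all names, weights, and thresholds are assumptions made for illustration.

    # Illustrative sketch: recording the inputs and per-factor contributions behind each
    # automated decision so it can be explained and reproduced later.
    # The scoring rule, weights, and threshold are hypothetical.
    import json
    from datetime import datetime, timezone

    WEIGHTS = {"income": 0.4, "debt_ratio": -0.5, "years_employed": 0.2}  # hypothetical
    THRESHOLD = 0.3  # hypothetical approval cutoff

    def score_and_log(applicant_id, features):
        # Per-factor contributions make the decision traceable rather than a black box.
        contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
        score = sum(contributions.values())
        decision = "approve" if score >= THRESHOLD else "refer_to_human_review"
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "applicant_id": applicant_id,
            "inputs": features,
            "contributions": contributions,
            "score": round(score, 3),
            "decision": decision,
            "model_version": "v1.0",  # versioning supports repeatability
        }
        print(json.dumps(record, indent=2))  # in practice, write to a durable audit log
        return decision

    score_and_log("A-1001", {"income": 0.8, "debt_ratio": 0.6, "years_employed": 0.5})

Real-world models are far more complex than a weighted sum, but the underlying discipline is the same: every consequential output should be accompanied by a record that allows the organization to show how it was reached.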

Cybercrime is also becoming an even more significant threat to all companies, especially with the rush toward digitalization and remote work during the pandemic. In fact, even after the onset of the pandemic, participants in the 2020 edition of the World Economic Forum's annual Executive Opinion Survey of more than 12,000 business executives rated cyberattacks as the top risk for doing business over the next decade in the United States, the United Kingdom, and Canada, among other developed economies. Greater use of AI in critical business operations will only increase vulnerability to cybercrime, as hackers can gain control of entire systems simply by manipulating their underlying algorithms. AI can also directly enhance the arsenal of cybercriminals, who can now cause disproportionate harm by exploiting the faster decision-making that automated programs enable. Smarter cyber threats, coupled with the business world's growing reliance on digital capabilities, only escalate the risks to operations and revenue streams.

With these risks in mind, and given the complex nature of the technology, a multifaceted and dynamic approach is required to govern and manage AI risks. This will only be possible with strong oversight at the board level.

Indeed, robust corporate oversight, potentially with the support of external and independent bodies, may be one of the most effective ways to mitigate AI risks. Putting effective governance into practice requires a rigorous series of steps, beginning with an independent oversight committee that sits outside the AI technology development team.

This committee must ensure that technologies are developed and deployed in alignment with the organization's values, and verify that this remains the case as algorithms learn and evolve. It should comprise senior representatives from key functions such as risk management, information technology, public affairs, legal, compliance, audit, and human resources to capture a range of perspectives. Additional, decentralized tiers of oversight may also be required depending on the range of technologies deployed and the context of their application; the committee should factor these into the AI governance approach it recommends for management's and the board's consideration. For its long-term success, it is important that the board endorse the committee's level of ambition.

Boards should also ensure that management installs appropriate policies and levels of enforcement, along with other key risk management strategies, including the creation of a risk register and the development of suitable skills for those in management and audit positions to effectively handle the complex and dynamic risks that AI presents.
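As one small illustration of what an AI-specific risk register might capture, the sketch below defines a minimal entry format and a simple severity score; the fields, scales, and example values are assumptions that would need tailoring to each organization's risk framework.

    # Illustrative sketch: a minimal AI risk-register entry. Fields and values are
    # hypothetical and would need tailoring to each organization's framework.
    from dataclasses import dataclass, field

    @dataclass
    class AIRiskEntry:
        risk_id: str
        description: str
        ai_system: str            # which model or application the risk relates to
        category: str             # e.g., bias, explainability, cybersecurity
        likelihood: int           # 1 (rare) to 5 (almost certain)
        impact: int               # 1 (negligible) to 5 (severe)
        owner: str                # accountable function or role
        mitigations: list = field(default_factory=list)

        @property
        def severity(self) -> int:
            # Simple likelihood-times-impact score for prioritization.
            return self.likelihood * self.impact

    register = [
        AIRiskEntry(
            risk_id="AI-001",
            description="Credit model may under-approve applicants from one region",
            ai_system="credit-scoring-v2",
            category="bias",
            likelihood=3,
            impact=4,
            owner="Chief Risk Officer",
            mitigations=["quarterly disparate-impact testing", "human review of declines"],
        ),
    ]

    # Review highest-severity risks first.
    for entry in sorted(register, key=lambda e: e.severity, reverse=True):
        print(entry.risk_id, entry.category, "severity:", entry.severity)

However it is implemented, the value of the register lies less in the tooling than in the discipline of naming each AI risk, assigning it an owner, and revisiting it as systems and regulations change.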

Beyond the procedural mechanisms for activating governance, however, it is essential that boards press management to evaluate their companies’ use of AI technology across five critical dimensions:

  • Intent: Use data in a principled manner and verify that AI design and implementation processes are ethically aligned and appropriate.

  • Fairness: Ensure that the processes and outputs of AI systems do not unwittingly discriminate against any group or individual.

  • Transparency: Verify that AI processes are explainable and repeatable.

  • Safety/Security: Establish robust capabilities in data governance, threat protection, and user privacy so as to better defend against malicious incursions.

  • Accountability: Undertake rigorous audit and compliance assurance processes to assuage the concerns of various stakeholders—lawmakers, auditors, customers, business partners, and shareholders, among others.

By framing AI management around these principles and instituting proper governance mechanisms, businesses can ensure that they do not expose themselves to undue risk—or worse, inadvertently cause harm to society at large. In so doing, companies and their boards will be able to rest easier when procuring, developing, and implementing new AI solutions.

Ben Hoster
Ben Hoster is a managing director at Marsh & McLennan Advantage and leads research on transformational technologies.