Artificial Intelligence: The New Frontier for Board Oversight?

By Sarah Eichenberger, Jonathan Rotenberg, and Caroline Sabatier

12/20/2022


Artificial intelligence (AI) has become integral to transacting business in the global economy. Whether it is used to automate manual processes, bolster cybersecurity defenses, or power complex search algorithms, AI is now a necessity for many corporations.

Although it can provide competitive advantages, AI may also pose regulatory and reputational risks. Not surprisingly, over the last few years, courts, legislatures, and government agencies have focused on these risks.

For example, in a set of highly publicized hearings, the US Congress examined whether search algorithms used by certain technology companies operate with learning biases. More recently, the US Securities and Exchange Commission took enforcement action against an asset manager for, among other things, its use of algorithmic trading software. Other companies are facing mounting scrutiny over their use of biometric data in machine learning.

As AI evolves, so do the legal questions it raises. Directors of companies at which AI is a meaningful part of the business model face a complex dilemma: How can they ensure appropriate board oversight of technology that is designed to run autonomously? Some foreign regulators, including the Monetary Authority of Singapore, the UK Financial Conduct Authority, and the Hong Kong Monetary Authority, have expressed the view that directors are obligated to oversee AI-related risks. While US regulators have largely remained silent about the scope of the board’s role with respect to AI, state and federal governments have signaled an interest in regulating the use of AI technology. For example, New York City lawmakers have enacted legislation restricting the use of automated employment decision tools. At the federal level, the Federal Trade Commission issued an advance notice of proposed rulemaking earlier this year that, among other things, solicits input on regulating algorithmic decision-making. More recently, the White House issued the “Blueprint for an AI Bill of Rights,” recommending that private sector companies adopt AI risk identification and oversight systems.

Even as the regulatory landscape remains in flux, boards of companies where AI is a substantial part of the business model may wish to consider how AI affects their common law fiduciary obligations. Delaware’s Caremark duty of oversight in particular requires that directors institute and monitor systems to detect and remediate potential risks to the company. Although claims alleging Caremark violations are notoriously difficult for plaintiffs to win, recent Delaware Court of Chancery decisions emphasize that to survive Caremark scrutiny, boards must actively oversee “mission-critical” risks. Few decisions, however, discuss how AI impacts board oversight, and those that do provide limited guidance.

One recent Delaware decision involves SolarWinds Corp., a software provider. Stockholders sought to hold SolarWinds’ board liable for alleged cybersecurity weaknesses that precipitated a cyberattack on its customers. In dismissing the case, the Court of Chancery characterized cybersecurity as a “business risk” protected by the business judgment rule. According to the court, an alleged failure to oversee ordinary “business risks” becomes an actionable Caremark claim only if the failure violates positive law. The court also suggested that the board had not breached any duty because it had defined cybersecurity oversight mechanisms in place.

Precisely what Caremark requires when AI-powered technology presents more than simply a “business risk” remains an open question. A 2021 ruling involving The Boeing Co. provides at least a partial answer. There, Boeing’s stockholders filed a derivative suit on behalf of the company, alleging that the board’s failure to oversee the safety of Boeing 737 MAX software contributed to two plane crashes.

In denying defendants’ motion to dismiss, the Delaware Court of Chancery opined that although the board had an audit committee for general risk oversight, it did not have defined board reporting systems to specifically address mission-critical aircraft safety.

While it is difficult to predict how Caremark will continue to apply to AI oversight, existing case law suggests that generalized risk oversight mechanisms and reliance on ad hoc management reporting may not withstand Caremark scrutiny. Boards wishing to bolster their management of mission-critical AI risks may therefore consider doing the following:

  • Understand how AI is used in the company and the existing oversight mechanisms.

  • Ensure that the individual(s) overseeing AI have the appropriate skill set and resources.

  • Establish, in conjunction with management, internal controls for any mission-critical AI risks.

  • Institute dedicated reporting and board oversight mechanisms for any mission-critical AI risks.

  • For companies in which AI is a meaningful part of the business model, seek a board member who has familiarity with AI or, alternatively, engage independent advisor(s) to supplement the board’s skill set.

AI is undoubtedly a new oversight frontier for many boards. But as AI continues to drive business decisions, it may be time for directors to evaluate its implications for their fiduciary obligations.

Sarah Eichenberger is a securities litigation partner at Katten.

Jonathan Rotenberg is a partner at Katten.

Caroline Sabatier is a securities litigation associate at Katten.