March 25, 2020
Are boards prepared to help C-suites identify the sweet spot between assertiveness and patience when weighing the risks of deploying a new technology? This was the question posed by Bob Kress, co-chief operating officer and global lead for quality and risk at Accenture Security, to directors attending a recent roundtable discussion in New York hosted by NACD in partnership with the management consulting firm. The discussion with invited directors was facilitated by Kress and Vikram Desai, managing director and products lead for Accenture Security. The conversation explored the board’s role in technology risk oversight related to artificial intelligence (AI) and machine learning.
The question of where and how technology oversight is allocated by the board was a key discussion point. Some of the directors came down on the side of having certain committees take on this responsibility. “I do believe that audit committees understand risk, and particularly in financial services, they’ve been doing this for a long time,” one attendee observed. “They understand cybersecurity is very similar in the sense that you’re trying to analyze what the risks to the business are, and then put metrics in place that are reported quarterly or monthly on a dashboard to help the audit committee track progress. The problem is that a lot of audit committees can’t speak [the language of] technology and therefore it’s hard to translate that, but I think good boards create a technology team within the audit committee.”
One director countered that assertion: “There’s a good reason not to have a technology committee, because then it’s really easy for the board to say, ‘We delegate this. This is a technology committee issue. Great. We’re done.’ I’ve been involved with a couple of boards that have said risk is a board function and in their board charter they keep risk at the full-board level and force those issues into business terms.” In other words, a board that translates potential cyber risk into actual business impact adds greater value to the company overall, in part because it forces all directors to engage on the issue.
There are practical applications of machine learning and AI that require the board’s understanding. One attendee emphasized that in his experience the most important aspect of integrating AI into business operations is to establish and build trust in these systems. Inputs used to program AI systems are susceptible to human biases, which could affect outputs.
“If you have an AI engine that’s part of a mortgage-processing application or in the health-care field helping physicians diagnose patients, you better be able to explain the results,” said this director. “Who in the organization can explain the output?”
In addition to making sure that the board comprehends the business risk of AI, technology oversight requires affirmation of the security around any and all AI implementations within the company. Attendees suggested the following questions to ask of themselves and the company’s technology chiefs:
Risk mitigation is clearly a critical component of any boardroom conversation about the implementation of new technology. Here, Desai recommended that boards outline the potential risks that are associated with the AI initiatives the company is trying to implement. “And come armed with solutions as to where you can mitigate those risks—even if the answer is that a particular risk is acceptable.”
Boards need to weigh the benefits of new technologies against the potential risks. Kress deemed oversight of this “sweet spot” an “obligation” of the board. “I think boards can play, and should play, a critical role in understanding where the organization is at on that spectrum, and what’s acceptable for the organization,” said Kress. “Because if you go too far to either end of the spectrum, it’s a problem.”