Topics: Risk Management, Strategy, Technology

A Human-Centric Approach Is Key to the Future of AI

October 3, 2019

Artificial intelligence (AI) should work for humans, not the other way around. That is the view of Paul R. Daugherty, chief technology and innovation officer at Accenture and chair of consulting firm Avanade. Daugherty’s insights, shared during his interview with Nora M. Denzel, an experienced director and executive who currently sits on the boards of Advanced Micro Devices, Ericsson, and Talend, closed out the final day of the 2019 NACD Global Board Leaders’ Summit.

Daugherty believes that what we expect from AI is exactly what we’ll get. If companies try to adopt this new technology without considering how humans might fit into the equation, then the AI revolution will indeed eliminate jobs—and be short-lived. To ensure longer-lasting benefits for businesses and society at large, Daugherty offered a few human-centric considerations for boards preparing to incorporate AI into their company’s business processes:

  • Humans are wildly underestimated. Just as mainframe computing didn’t ruin or take over humanity, AI won’t either. In a Harvard University medical study, AI detected cancer with 92 percent accuracy, while a doctor detected it with 96 percent accuracy. Working together, however, the doctor and the AI detected cancer with 99.5 percent accuracy. Humans will always be needed to holistically view and assess the data collected for and by AI. Daugherty noted that this fusion of human analysis and technology is key to the successful implementation and adoption of AI as a tool.
  • Businesses should invest in their people. According to Daugherty, companies must invest in reskilling or upskilling their workers. Look at Amazon.com, which announced earlier this year that it would spend $700 million on reskilling 100,000 employees over the next few years, training them in engineering and computer science because these skills are in such high demand. Investing in employees, whether or not they will stay with your company, is an imperative that every enterprise must take on, and it is one that will benefit your business and the workforce at large, Daugherty said.

    In a study conducted for his book, Human + Machine: Reimagining Work in the Age of AI, cowritten with H. James Wilson, Daugherty says that while 75 percent of business managers don’t think their employees have the right skills to remain relevant and effective in a work environment shaped by AI, only 3 percent of them are investing in training to help their employees meet new needs. If businesses don’t upskill their employees for the work of the future, they will quickly find themselves under-resourced. The available talent pool can’t fully satisfy rapidly changing job demands.
  • Technology should adapt to human needs, and not the other way around. Daugherty told the audience that he believes that “we’ve failed on the human dimension of technology for the last 30 years” by adapting ourselves to technology rather than treating it as something that should change to fit our needs. Once the assumption that technology should adapt to us is embedded in the design and development of new technologies, Daugherty noted, humanistic skills will be of great use in training and implementing AI. Machines lack the human elements of empathy and emotion, and AI often absorbs the biases of the people training it. With AI becoming the face of companies—in the form of chatbots, for example—AI will need to be programmed to treat customers and make decisions in line with the company’s brand. People with skills in sociology, for instance, will need to be hired as “AI personality trainers” to keep the AI aligned with the brand.
  • Companies must reimagine how they operate. Unlike the “process of reengineering”—defined by Daugherty as placing new technology within existing frameworks—the “process of reimagining” will require companies to reexamine and perhaps adjust their operations around what AI is capable of. This will allow new technologies to be utilized to their maximum potential, rather than being limited by older, often antiquated procedures, making business processes more streamlined and useful to the humans supporting them.
  • Trust is key. “In the age of AI, trust is the ultimate currency,” Daugherty said. Data is the foundation for AI, which has been able to develop more rapidly in the twenty-first century in part because of the amount of data we have started to collect via the Internet. And before they will offer data, consumers need to trust that corporations will use and store their information in an appropriate manner.

Going forward, Daugherty challenged the audience to forget assumptions about how AI will disrupt the world. Instead, Daugherty recommended, boards should focus on transparency, honesty, fairness, preparing the workforce, and analyzing how to use AI in ways that best serve their company’s brand and consumers overall.

Comments

Glenn Gow, October 6, 2019

Thank you for sharing this summary of Nora’s interview with Paul. A few points for board members to consider:

If humans are always needed to holistically view and assess the data for AI, then it’s imperative that board members understand the company’s approach to managing data. For example, even with humans involved, unless bias in the data is systematically uncovered and corrected, the result of AI work will be biased.

I agree that companies should invest in reskilling/upskilling employees. To oversee this, the board needs a realistic view of where AI will have an impact on jobs and should determine where reskilling/upskilling is even possible. Not every employee can take on the new skills needed to move the company forward.

I agree that AI should adapt to human needs. Boards should put “points of customer interaction” in the spotlight. This area can lift a company’s customer satisfaction or do serious damage.

In addition to trust, honesty, fairness, and preparing the workforce, boards need to get the company’s legal, regulatory, compliance, data science, and AI development teams working together. That’s the only way to trust the results.