The rapid pace of technological advancements is causing tectonic shifts in the business risk landscape. Social media and artificial intelligence (AI) in particular are causing directors to reconsider how they think and talk about risk. Consequently, these topics were the focus of the first part of a roundtable discussion on the next generation of risk hosted by EisnerAmper LLP and the National Association of Corporate Directors (NACD) in New York last week.
There is an abundance of examples of companies that sustained severe reputational damage after being caught in the center of a social media storm. Most recently, credit reporting company Equifax made headlines after the company disclosed that it was the subject of a major data breach that compromised the information of roughly half of the U.S. population. The company’s offering of free credit monitoring to affected customers only made matters worse: several print and digital news outlets, including The New York Times, analyzed the terms of the offer, which suggested that by signing up for the service, a person relinquished his or her right to take legal action against Equifax. While the company later changed the legal language in another effort to assuage public concern, reestablishing its trustworthiness may be more of an uphill battle.
“Some of these things would have always been in the news, but the amount of time and the quickness with which news reaches an audience is unbelievable,” EisnerAmper Audit Partner Steven Kreit observed. “Boards need to make sure there’s a social media strategy throughout the company. Boards need to ask management what it has planned for and make sure they can react to those issues as they come up. It’s also important to have policies around social media. What is the CEO allowed to say? Are they allowed to have personal accounts and use that to disseminate company information?”
When attendees were asked if they knew their company’s social media policy backwards and forwards, few indicated that they did—but there was some debate as to how necessary this is. “I don’t think it’s appropriate for a board member to know the details of what the policy is,” one director opined. “What the board needs to know is that there’s a policy and that employees know what they can and cannot say about the company.”
Kreit agreed. “You don’t want to get too far into the weeds,” he said, “but a CEO may react to something in the middle of the night and that response may harm the company. And board members need to make sure the company doesn’t get hurt.”
While most of the discussion focused on preparing for the worst, one attendee observed that a response plan used effectively to address negative feedback on social media can not only contain a damaging situation but also help restore trust in the company.
Discussion then turned to AI. Here, some companies are ahead of the curve in applying technology that can parse massive amounts of data and draw conclusions from it. Take, for example, IBM's Watson, the supercomputer that famously competed on the game show Jeopardy!, facial recognition software, and self-driving cars. The risk is that AI is advancing so rapidly as a disruptor across nearly every industry that if a company isn't paying attention now, the competition will leave it in the dust later. But AI is a broad subject area, and identifying the elements most relevant to a board agenda, namely the risks, can initially seem daunting.
“These are conversations I rarely hear discussed around the boardroom table,” Kreit remarked. “And these are risks that keep changing.”
“An interesting exercise is to look at risk factors in public disclosures,” one attendee said. “We look at competitors and it’s easy to see what risks they are identifying in the same industry.”
“In the conversations I’ve had, it isn’t so much about whether the machine will do its own thing and crush humans as much as asking what fundamental technology are we not using to help us be more competitive and customer-focused,” one attendee offered. “The other thing is, technologists sometimes rely too much on technology. At some point, a human being has to put subjectivity in the mix to make sure the automated methodology you employed doesn’t come back and bite you. This conversation comes through the CISO [chief information security officer] on my board as well as the CTO [chief technology officer] together.” Another director remarked that these discussions take place on the audit committee level.
“It’s important to not think about technology and risk without it being an integral part of the strategy discussion,” another director chimed in. “If it isn’t, I think it becomes an academic conversation and you’re walking ahead with one eye open and one eye closed.”
To this end, and in closing this portion of the roundtable, another attendee remarked on how board composition is critical in positioning the board to oversee this issue in the years ahead. “If you don’t have enough forward-looking people with experience from other industries, you’re doomed. Look at who you’re working with and have some sense of what you are [as an organization], what you want to be, and how you’re going to get there.”
Next week, the NACD Board Leaders’ Blog will feature roundtable discussion highlights that explore geopolitical and regulatory risks.
Back in January I reported on some of the innovative trends that I saw on my trip to the Consumer Electronics Show (CES). Nine months on, the evidence of those technologies’ impact is everywhere. I expect these disruptive innovations to be front and center this January, when the National Association of Corporate Directors (NACD) will host a small group of our members for a directorship-centric tour of CES, along with an exploration of the implications for strategy development and risk oversight.
While January is still four months away, we have been talking about these changes across our education events all year, including at the forthcoming Global Board Leaders’ Summit.
Here are a few examples of how these trends are manifesting.
Artificial intelligence (AI) technology continues its march into the mainstream. Autonomous vehicles offer one window into how AI is fueling disruption and industry change. Self-driving cars have the potential to save thousands of lives; that mission is part of what inspired Sebastian Thrun to found X, Google’s semi-secret moonshot laboratory, to focus on the technology. Self-driving cars’ potential to disintermediate goes largely unanticipated by most businesses outside the automotive manufacturing and services industries.
At our August Master Class, Travelocity.com founder Terry B. Jones laid out a landscape of compelling examples of disruptions that will be caused by self-driving vehicles. “People say, well, the technology’s going to disrupt insurance and surely it will,” Jones said. “But it’s also going to disrupt hospitals. The number one reason people go to the emergency room is car wrecks. We’re not going to have car wrecks anymore. It’s going to disrupt hotels. You’ll just stay in the car and sleep while it drives. It’s going to disrupt police—we won’t need traffic tickets.”
Extend Jones’ last point about a decrease in traffic tickets to red light cameras and then consider that in 2015, municipalities collected more than $6 billion in revenue from speeding tickets alone. Self-driving cars could bankrupt whole cities that do not have the foresight to create new revenue sources.
The increasing level of cooperation between companies across verticals is changing the very nature of industries themselves. The acquisition of Whole Foods Market by Amazon.com has fueled anxiety across industries and driven once unlikely bedfellows to team up, spawning, among other things, a new partnership between Wal-Mart and Google. Part of the urgency behind the Google–Wal-Mart partnership is the dual realization that voice-enabled shopping is the future of retail, and that Amazon now has a significant advantage over both companies in that space. Amazon is forecast to have 70 percent of the voice-enabled speaker market this year.
As noted at CES, the increasing amount of technology in vehicles has effectively transformed cars into giant computers on wheels, forcing companies from Ford to Honda Motor Co. into an identity crisis. When considering the question, “Are we a technology company or a car company?” increasingly the answer is “yes.”
At our Master Class in August, Bonny Simi, a director of Red Lion Hotels and the head of ventures for JetBlue, explained how the airline missed an early opportunity to partner with and invest in a small car ridesharing startup that eight years later has a valuation nearly ten times the airline’s market cap. “At the time when the startup approached us, we didn’t think it was relevant to our business because we saw ourselves as an airline,” Simi noted. “We’ve realized we need to think of ourselves as a travel company.”
The next generation of disruption is about more than technology. Don’t underestimate social and demographic shifts in the market, and the power of changing attitudes and norms to create new competition. Younger consumers have different ideas when it comes to everything from privacy to shopping. The rise of companies like Lyft and Airbnb was enabled by mobile technology, but it was also made possible by a generation of younger people who didn’t hold traditional, sometimes negative attitudes about sharing a home, a car, or even a dress, with a stranger.
At last year’s spring Master Class, Peter H. Coors, former chair of the Molson Coors Brewing Co. and chair of MillerCoors, talked about how millennials’ distaste for big brands and embrace of the small and bespoke was driving sales away from MillerCoors towards smaller, local craft beers. Younger consumers’ preference for supporting local and small businesses presents a threat to larger food and beverage producers.
Younger people are also turning to alternative payment methods. This year at another Master Class session, Jones shared a story about his 20-something, newly married daughter. Since she and her husband had not yet merged their finances, they were sharing money via online payment platforms. When they went into their local branch of a Fortune 500 bank to get a loan, “the loan officer demanded to know who this guy Venmo was that they had been sending so much money to.” The message, especially to more established companies, is that “the way you’ve always done it” isn’t going to win the day moving forward. Don’t overestimate the long-term viability of legacy products and systems.
The complexity of risk will continue to grow exponentially. Acclaimed technology guru Shelly Palmer focused on this concept heavily both at CES and when he addressed our members at the NACD Technology Symposium in Silicon Valley this past July. “The velocity of data is increasing and will always increase, then the value of that data is going to decrease because there’s just too much of it,” Palmer said. “You’re going to have to sort this out in some way.”
To illustrate his point, Palmer then showed a pond half full of lily pads. He asked the audience the following: if the growth doubled every day until day 30, what day was shown on the slide with the pond half full? The answer was day 29. “Human beings do not think exponentially,” Palmer pointed out. “We think in a linear way. The question is, when is it day 29 for any of the things you’re working on? That’s the speed with which technology is coming at you… You don’t need to manage change. You need to be in a mode of continuous improvement and adaptation.”
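Palmer’s lily-pad arithmetic is easy to verify: a quantity that doubles daily reaches half its final size only one day before the end. A minimal sketch (the function name and the 30-day horizon are illustrative, taken from the example above):

```python
# Lily-pad doubling: coverage doubles every day and fills the pond on day 30.
# Working backwards, the pond is only half full on day 29.

def coverage_fraction(day: int, full_day: int = 30) -> float:
    """Fraction of the pond covered on a given day, assuming daily doubling."""
    return 2.0 ** (day - full_day)

print(coverage_fraction(30))  # 1.0   -- pond completely covered
print(coverage_fraction(29))  # 0.5   -- half full just one day earlier
print(coverage_fraction(20))  # ~0.001 -- barely visible ten days before the end
```

The point of the exercise is that linear intuition reads the half-full pond as "plenty of time left," when in fact the process is one day from completion.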
When you consider the risk side of so much interconnected data, it raises the stakes for everything from privacy to cyber-risk oversight. Companies that don’t have their eye firmly on the ball face increasingly high-stakes consequences for their business.
Questions to Ask Management
Directors would be wise to begin pressing their management team for briefings on their strategic plans. Below are several questions you could pose at your next board meeting.
Have we considered how these forces can provide a strategic advantage to us, either by creating new revenue streams or new efficiencies?
Have we considered the risks to our business, including how we could be disintermediated or how a particular disruptive force might create competition including from unlikely or unforeseen sources?
How are we thinking about innovation? Are we good at fostering it in house or should we look to outside partnerships to supercharge our efficiencies, products, and capabilities?
What are we doing internally, including review of compensation and incentive plans, to ensure new ideas get an open and fair hearing and aren’t killed off internally by managers who may not want to upset the status quo?
Are you ready to attend NACD’s CES Experience in January? Register now to be considered for a place in this exclusive tour that will highlight exciting disruptive innovations.
Undergraduate, graduate, and professional students of cybersecurity from around the world gathered earlier this year to participate in a cybersecurity competition that simulated the international policy challenges associated with a global cyberattack. While the goal was to practice sound policy decision making, the majority of competing teams unintentionally led the U.S. into starting an international war. Given a variety of diplomatic and other means of responding to cyberattacks, participants largely took the aggressive approach of hacking back in response to cyberattacks from China, with disastrous consequences.
While the competition’s participants are all students today, they may well go on to be corporate directors and government leaders of tomorrow. Based on current debate about how organizations in the private sector should respond to cyberattacks, it seems the actions taken by these students may well be representative of a broader trend. In fact, there is enough of a push for organizations to be legally authorized to “hack back” that earlier this year a member of Congress proposed a bill to empower people “to defend themselves online, just as they have the legal authority to do during a physical assault.”
As a business leader, I believe this measure would do more harm than good.
What Is Hack Back?
Hack back, which is sometimes called counterstrike, is a term used to refer to an organization taking offensive action to pursue, and potentially subdue, cyberattackers that have targeted them. For the purposes of this article, I am specifically talking about action taken by private sector organizations that affects computers external to their own network. We are not discussing government actions, which tend to occur within existing legal frameworks and are subject to government oversight.
Hack back activities go beyond defensive measures that organizations may put in place to protect their environments. It is generally understood that hack back activities extend beyond the victim’s own network, systems, and assets, and may involve accessing, modifying, or damaging computers or networks that do not belong to the victim. Directors should note that today it is illegal under the Computer Fraud and Abuse Act for private parties to access or damage computer systems without authorization from the technology owners or an appropriate government entity, even if these systems are being used to attack you. That is what proponents of hack back want to change, and the proposed bill goes some way towards doing this.
The Case for “Self Defense”
In response to the legal restriction, proponents of a law to legalize hacking back at cyberattackers often argue that the same principle should apply as the one that allows U.S. citizens to defend themselves against intruders in their homes, even with violent force. While it may sound reasonable to apply equal force to defend a network, the Internet is a space of systems designed specifically for the purpose of interacting and communicating. Technology and users are increasingly interconnected. As a result, it’s almost impossible to ensure that defensive action targeted at a specific actor or group of actors will only affect the intended targets.
The reality of the argument for hacking back in self-defense is unfortunately more akin to standing by your fence and lobbing grenades into the street, hoping to get lucky and stop an attacker as they flee. With such an approach, even if you do manage to reach your attacker, you’ll almost certainly cause terrible collateral damage. Can your organization afford to clean up such a mess? What would be the repercussions for your reputation and position in the marketplace?
Another significant challenge for private sector organizations looking to hack back is that, unlike governments, they typically do not have the large-scale, sophisticated intelligence gathering programs needed to accurately attribute cyberattacks to the correct actor. Attackers constantly change their techniques to stay one step ahead of defenders and law enforcement, including leveraging deception techniques. This means that even when there are indications that point to a specific attacker, it is difficult to verify that they have not been planted to throw off suspicion, or to incriminate another party.
Similarly, it is difficult to judge motivations accurately and to determine an appropriate response. There is a fear that once people have hack back in their arsenal, it will become the de facto response rather than using the broad range of options that exist otherwise. This is even more problematic when you consider that devices operating unwillingly as part of a botnet may be used to carry out an attack. These infected devices and their owners are as much victims of the attacker as the primary target. Any attempt to hack back could cause them more harm.
The Security Poverty Line
Should hack back be made a lawful response to a cyberattack, effective participation is likely to be costly, as the technique requires specialized skills. Not every organization will be able to afford to participate. If the authorization framework is not stringent, many organizations may try to participate with insufficient expertise, which is likely to be ineffective, damaging, or both. Other organizations, however, will not have the maturity or budget to participate even in this way.
These are the same organizations that today cannot afford a great deal of in-house security expertise and technologies to protect themselves, and currently are also the most vulnerable. As organizations that do have sufficient resources begin to hack back, the cost of attacking these organizations will increase. Profit-motivated attackers will eventually shift towards targeting the less-resourced organizations that reside below the security poverty line, increasing their vulnerability.
A Lawless Land
Creating a policy framework that provides sufficient oversight of hack-back efforts would be impractical and costly. Who would run it? How would it be funded? And why would this be significantly more desirable than the status quo? When the U.S. government takes action against attackers, it must meet a stringent burden of proof for attribution, and even then, there are strict parameters determining the types of targets that can be pursued and the kinds of action that can be taken.
Even if such a framework could be devised and policed, significant legal risks would remain for a variety of stakeholders at a company. While the Internet is a borderless space accessed from every country in the world, each of those countries has its own legal system. Even if an American company were authorized to hack back, how could it ensure that it would avoid running afoul of the laws of another country, not to mention international law?
What Directors Can Do
The discussion around hacking back so far has largely been driven by hyperbole, fear, and indignation. Feelings of fear and indignation are certainly easy to relate to, and as corporate directors, powerlessness does not sit well with us. It is our instinct and duty to defend our organizations from avoidable harm.
The potential costs of a misstep or of unintended consequences from hacking back should deter business leaders from undertaking such an effort. If another company or a group of individuals is affected, the company that hacked back could find itself facing expensive legal proceedings, reputational damage, and a loss of trust among its stakeholders. Attempts to exempt organizations from this kind of legal action are problematic, as they raise the question of how we can spot and stop accidental or intentional abuses of the system.
It’s one thing for students to unintentionally trigger war in the safe confines of a competitive mock scenario, and another thing entirely to be the business leader that does so in the real world. Directors of companies must instead work together to find better solutions to our complex cybersecurity problems. We should not legitimize vigilantism, particularly given the significant potential risks with dubious benefits.
Corey Thomas is CEO of Rapid7. All opinions expressed here are his own.