Equifax is not just another organization that was breached. The company was named one of Forbes’ “World’s 100 Most Innovative Companies” for three years straight, from 2015 to 2017. The recent breach of the company’s U.S. online dispute portal web application has raised serious questions about whether boards of directors and senior management are asking the right questions about actions their organizations are taking to protect themselves from cyberthreats. Are boards probing to discover what they don’t know?
In September, Equifax announced a massive breach exposing the personal information of over 40 percent of the U.S. population. The company’s stock declined almost 14 percent after the announcement, and heads rolled over the ensuing three weeks—first the chief information officer (CIO) and chief information security officer (CISO), and then the CEO. The headline effect of this incident has been as pervasive and persistent as any in memory.
There are many important aspects of cybersecurity that the board is expected to tend to, including understanding what the organization’s “crown jewels” are, knowing which business outcomes management seeks to avoid, tracking the ever-changing threat landscape, and having an effective incident response program in place, to name a few.
But this discussion is more specifically about the systems vulnerabilities we know about. That’s the elephant in the room.
The sage advice—if your flank is exposed, fortify it before you get overrun—seems to apply here. Even noncombatants understand the value of protecting exposed flanks in desperate battle. A known vulnerability is most certainly an exposed flank, particularly when sensitive data is involved.
Enter the role of software patches.
A patch is a software update installed into an existing program to fix new security vulnerabilities and bugs, address software stability issues, or add a new feature to improve usability or performance. Often a temporary fix, a patch is essentially a quick repair. While it’s not necessarily the best solution to address the problem, it gets the job done until product developers design a better solution for a subsequent product release.
The Equifax incident raises the question as to why the company didn’t implement the appropriate patch to its systems when the vulnerability was first identified. To be fair, other companies have suffered a cybersecurity event because they failed to implement a patch in a timely manner, and we have no insights into the unique circumstances at Equifax. Admittedly, patching software at a large organization with multiple, complex systems takes a considerable amount of time. But, for boards and executive teams everywhere, the Equifax episode serves as a stark reminder of the importance of understanding the company’s cybersecurity strategy and tactics to pinpoint whether they know what they need to know.
Often, in our security and privacy consulting business at Protiviti, we see companies implementing patches within 60 to 90 days of discovering a systems vulnerability. We have seen some high-risk patches not applied at all for fear of breaking legacy applications; in effect, the organization simply accepts the risk of not applying these patches and, as an alternative, works to mitigate it. Based on our experience, 30 days from release to deployment is typically the “gold standard” for the time it takes to apply a patch.
Is the gold standard enough? Companies are essentially leaving themselves exposed for 30 days. Meanwhile, they may lack the advanced detection and response capabilities to detect unauthorized activity occurring during that time. Organizations with a well-designed vulnerability management program quickly patch known vulnerabilities for critical public-facing services. For example, we see companies setting service level agreement targets of 72 hours, with some striving for 24 hours or less to limit the damage of an attack.
Simply stated, boards need to inquire as to the target duration from release to deployment to shore up cybersecurity vulnerabilities and, if it’s 30 days (or more), question whether that is timely enough, especially when public-facing systems are involved and sensitive personal information is exposed. Today’s optics regarding egregious security breaches, corporate stewardship expectations, and the related impact on reputation and brand image cry out for this oversight.
It is vitally important to scan public-facing systems immediately upon notification of critical vulnerabilities; “same day” should be the target. In addition, patch deployment should be tracked and verified as part of a comprehensive information technology (IT) governance process. It’s not enough to merely push out a patch. A comprehensive IT governance process should confirm that the risk truly has been mitigated on a timely basis.
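To make the governance point concrete, here is a minimal sketch of how patch-deployment tracking might be automated. The tier names, thresholds, and function are illustrative assumptions, not Protiviti’s methodology; the 72-hour target mirrors the service-level agreements described above, and the status logic reflects the idea that pushing a patch is not enough until the fix is verified.

```python
from datetime import datetime, timedelta

# Illustrative SLA targets in hours from vendor release to *verified* deployment.
# Tier names and figures are assumptions for this sketch; the 72-hour target
# echoes the service-level agreements described above for public-facing systems.
SLA_HOURS = {"critical_public_facing": 72, "internal": 30 * 24}

def patch_sla_status(released_at, deployed_at, verified, tier):
    """Classify a patch against its SLA tier.

    Returns "MET" or "MISSED" for verified deployments, and "PENDING" or
    "EXPOSED" when the patch has not yet been deployed and verified.
    """
    deadline = released_at + timedelta(hours=SLA_HOURS[tier])
    if deployed_at is None or not verified:
        # Merely pushing a patch is not enough: governance should confirm,
        # e.g., by rescanning, that the vulnerability is actually closed.
        return "EXPOSED" if datetime.utcnow() > deadline else "PENDING"
    return "MET" if deployed_at <= deadline else "MISSED"

# Example: a critical patch deployed and verified 48 hours after release.
released = datetime(2017, 3, 7, 12, 0)
deployed = released + timedelta(hours=48)
print(patch_sla_status(released, deployed, True, "critical_public_facing"))  # MET
```

A real program would pull release and deployment timestamps from vulnerability-scan and change-management systems rather than hard-coded values, and report exceptions up through the IT governance process.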
Directors and executives should also be concerned with how long significant breaches persist before they are finally detected. Our experience is that detective and monitoring controls remain immature across most industries, resulting in continued failure to detect breaches in a timely manner. Given the increasing sophistication of perpetrators, simulations of likely attack activity should be performed periodically to ensure that defenses can detect a breach and that security teams can respond in a timely manner.
We know that an organization’s preparedness to reduce an incident’s impact and proliferation after it begins is an issue (i.e., the elapsed time between the initiation of an attack and its detection is too long). Often, it takes over 100 days before suspicious activity is discovered; about 50 percent of the time, organizations learn of breaches from a third party.
In nearly every penetration test Protiviti conducts, the client authorizing the test fails to detect our test activity. Many organizations seem to think that if they outsource to a managed security service provider (MSSP), the problem will be solved, as if a box has been checked. However, we see time and again that this is not the case. Often, there are breakdowns in the processes and coordination between the company and the MSSP that result in attack activity going unnoticed. Not many organizations are focusing enough on this failure of detective controls to identify breach activity in a timely manner.
These two fronts—how long it takes to implement a patch, as well as detect a breach—inform the board’s cyber-risk oversight. Every organization should take a fresh look at the impact specific cybersecurity events can have and whether management’s response plan is properly oriented and sufficiently supported. For starters, directors should ensure they are satisfied with the elapsed time:
For patching identified system vulnerabilities;
Between the initiation of an attack and its ultimate discovery;
Between the discovery of a security breach and the initiation of the response plan to reduce its proliferation and impact; and
Between the discovery of a significant breach and the undertaking of the required disclosures to the public, regulators, and law enforcement in accordance with applicable laws and regulations.
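The four elapsed times above can be computed directly from an incident timeline and compared against board-approved thresholds. The sketch below uses a hypothetical timeline with illustrative field names and dates (they are assumptions for this example, not figures from any actual incident):

```python
from datetime import datetime

# Hypothetical incident timeline; event names and dates are illustrative only.
timeline = {
    "vuln_disclosed":   datetime(2017, 3, 7),   # vendor releases patch for the flaw
    "patched":          datetime(2017, 5, 15),  # patch deployed and verified
    "attack_began":     datetime(2017, 5, 13),
    "breach_detected":  datetime(2017, 7, 29),
    "response_started": datetime(2017, 7, 30),
    "disclosed_public": datetime(2017, 9, 7),
}

def days(start, end):
    """Elapsed whole days between two named events in the timeline."""
    return (timeline[end] - timeline[start]).days

# The four elapsed times directors are urged to examine:
metrics = {
    "patch_time":      days("vuln_disclosed", "patched"),
    "dwell_time":      days("attack_began", "breach_detected"),
    "response_delay":  days("breach_detected", "response_started"),
    "disclosure_time": days("breach_detected", "disclosed_public"),
}
print(metrics)  # each value is a number of days, ready to compare to targets
```

In this hypothetical, the patch took 69 days and the attacker dwelled undetected for 77, both well beyond the targets discussed earlier; a board dashboard built this way makes such gaps hard to miss.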
Organizational cybersecurity is one of the biggest challenges facing companies today. The most recent in a string of headline-grabbing data breaches involved U.S. credit-reporting company Equifax, an event that exposed the private information of some 143 million consumers. Grilled on Capitol Hill about the episode, Equifax’s chair and CEO said that “mistakes were made” in the company’s response to the attack, which has prompted dozens of private lawsuits and precipitated a drop in the company’s share price.
As corporate directors are ultimately responsible for their companies’ future, the urgency to address cyber risk is accelerating. There is general agreement across the C-suite that cyber risk is a top priority, according to a recent Marsh global survey regarding corporate cyber risk perception. But survey results also revealed that there is less alignment inside companies regarding how cyber risk is reported to corporate directors and about what is most important.
The Information Disconnect Between Board and C-Suite
When survey respondents were asked what type of reporting on cyber risk the board of directors received, something surprising surfaced. For every type of report we asked about, respondents who indicated they were corporate directors said they received far less information than respondents from the C-suite said they were supplying to directors.
For example, 18 percent of surveyed directors said they received information about cybersecurity investment initiatives. Yet 47 percent of chief risk officers, 38 percent of chief technology or information officers, and 53 percent of chief information security officers said they were already providing reports to board members on investment initiatives.
Whether it’s optimizing risk finance through insurance or other resiliency measures, such investment initiatives are critical to preparing for an attack as well as to managing an incident. Organizations need to ensure that board members are receiving—and carefully reviewing—this vital information.
Tellingly, corporate directors say the type of cyber risk reporting they most often receive consists of briefings on “issues and events experienced.” It’s clearly important for any corporate director to learn about cybersecurity incidents that the company has faced, but it is an after-the-fact activity. Boards should be wary if most of the material they receive concerns events that have already happened rather than forward-looking risk.
The survey’s C-suite respondents listed “cyber program investment initiatives” as the type of reporting their boards were most likely to be receiving. But with fewer than one in five corporate directors saying they received such reports, there is an issue that needs to be addressed, especially given that understanding—and directing—corporate investment in cybersecurity is a key to building effective resiliency measures.
No Incident Can Be Completely Avoided
Many boards seem to focus their oversight on security activities rather than resiliency best practices. For example, a large number of corporate directors in our survey said their organization did not have a cybersecurity incident response plan. Why? The top reason cited was that “cybersecurity/firewalls are adequate for preventing cyber breaches.” C-suite respondents did not share the same view.
As firms of all sizes and across geographies have fallen prey to attacks, the belief that one can have enough defenses in place to completely avoid a cybersecurity incident has been widely debunked by real-world events. Thus, the mantra among the organizations with the most sophisticated cyber-risk management programs is: “It’s not a matter of if you will be breached, but when.”
Cyber threats are constantly evolving and the potential threat actors are multiplying. No organization is impenetrable, no matter how strong its security posture may be.
Strong Companies Are Already Preparing for GDPR
One of our key findings regarding corporate readiness involves the lead-up to the EU’s General Data Protection Regulation (GDPR), which is scheduled to take effect in May 2018.
We found that companies that are already preparing for GDPR are doing more to address cyber risk overall than those that have yet to start planning. Survey respondents who said their organizations were actively working toward GDPR compliance—or felt that they were already compliant—were three times more likely to adopt overall cybersecurity measures and four times more likely to adopt cybersecurity resiliency measures than those that had not started planning for GDPR. This is happening despite the fact that the GDPR is not a prescriptive set of regulations with a defined checklist of compliance activities. Instead, GDPR preparedness appears to be both a cause and a consequence of overall cyber-risk management strength.
The most forward-looking corporate boards recognize the GDPR compliance process as an opportunity to strengthen their organizations’ overall cyber risk management posture on a much broader level, effectively transforming regulations that might previously have been viewed as a constraint into a new competitive advantage.
The lesson here—even for directors of organizations not subject to the GDPR—is that good cyber-risk oversight requires engaging on a number of fronts, both defensive and responsive. Whether it’s playing an active role in attracting highly skilled talent, seeking cross-functional enterprise alignment on priorities, or viewing regulatory compliance as part of a holistic plan, an engaged board can make the critical difference in how a company assesses, reports on, and addresses the impact of cyber risk on the company.
Marsh’s full report, GDPR Preparedness: An Indicator of Cyber Risk Management, is available from Marsh.
Undergraduate, graduate, and professional students of cybersecurity from around the world gathered earlier this year to participate in a cybersecurity competition that simulated the international policy challenges associated with a global cyberattack. While the goal was to practice sound policy decisions, the majority of competing teams unintentionally led the U.S. into starting an international war. Given a variety of diplomatic and other means of responding to cyberattacks, participants largely took the aggressive approach of hacking back in response to cyberattacks from China, with disastrous consequences.
While the competition’s participants are all students today, they may well go on to be corporate directors and government leaders of tomorrow. Based on current debate about how organizations in the private sector should respond to cyberattacks, it seems the actions taken by these students may well be representative of a broader trend. In fact, there is enough of a push for organizations to be legally authorized to “hack back” that earlier this year a member of Congress proposed a bill to empower people “to defend themselves online, just as they have the legal authority to do during a physical assault.”
As a business leader, I believe this measure would do more harm than good.
What Is Hack Back?
Hack back, which is sometimes called counterstrike, is a term used to refer to an organization taking offensive action to pursue, and potentially subdue, cyberattackers that have targeted it. For the purposes of this article, I am specifically talking about action taken by private sector organizations that affects computers external to their own network. We are not discussing government actions, which tend to occur within existing legal frameworks and are subject to government oversight.
Hack back activities go beyond defensive measures that organizations may put in place to protect their environments. It is generally understood that hack back activities extend beyond the victim’s own network, systems, and assets, and may involve accessing, modifying, or damaging computers or networks that do not belong to the victim. Directors should note that today it is illegal under the Computer Fraud and Abuse Act for private parties to access or damage computer systems without authorization from the technology owners or an appropriate government entity, even if these systems are being used to attack you. That is what proponents of hack back want to change, and the proposed bill goes some way towards doing this.
The Case for “Self Defense”
In response to the legal restriction, proponents of a law to legalize hacking back at cyberattackers often argue that the same principle should apply as that which allows U.S. citizens to defend themselves against intruders in their homes—even with violent force. While it may sound reasonable to implement equal force to defend a network, the Internet is a space of systems designed specifically for the purpose of interacting and communicating. Technology and users are increasingly interconnected. As a result, it’s almost impossible to ensure that defensive action targeted at a specific actor or group of actors will only affect the intended targets.
The reality of the argument for hacking back in self-defense is unfortunately more akin to standing by your fence and lobbing grenades into the street, hoping to get lucky and stop an attacker as they flee. With such an approach, even if you do manage to reach your attacker, you’ll almost certainly cause terrible collateral damage. Can your organization afford to clean up such a mess? What would be the repercussions for your reputation and position in the marketplace?
Another significant challenge for private sector organizations looking to hack back is that, unlike governments, they typically do not have the large-scale, sophisticated intelligence gathering programs needed to accurately attribute cyberattacks to the correct actor. Attackers constantly change their techniques to stay one step ahead of defenders and law enforcement, including leveraging deception techniques. This means that even when there are indications that point to a specific attacker, it is difficult to verify that they have not been planted to throw off suspicion, or to incriminate another party.
Similarly, it is difficult to judge motivations accurately and to determine an appropriate response. There is a fear that once people have hack back in their arsenal, it will become the de facto response rather than one option among the broad range that exists otherwise. This is even more problematic when you consider that devices unwittingly operating as part of a botnet may be used to carry out an attack. These infected devices and their owners are as much victims of the attacker as the primary target. Any attempt to hack back could cause them more harm.
The Security Poverty Line
Should hack back be made a lawful response to a cyberattack, effective participation is likely to be costly, as the technique requires specialized skills. Not every organization will be able to afford to participate. If the authorization framework is not stringent, many organizations may try to participate with insufficient expertise, which is likely to be either ineffective or damaging, or potentially both. However, there are other organizations that will not have the maturity or budget to participate even in this way.
These are the same organizations that today cannot afford a great deal of in-house security expertise and technologies to protect themselves, and currently are also the most vulnerable. As organizations that do have sufficient resources begin to hack back, the cost of attacking these organizations will increase. Profit-motivated attackers will eventually shift towards targeting the less-resourced organizations that reside below the security poverty line, increasing their vulnerability.
A Lawless Land
Creating a policy framework that provides sufficient oversight of hack-back efforts would be impractical and costly. Who would run it? How would it be funded? And why would this be significantly more desirable than the status quo? When the U.S. government takes action against attackers, it must meet a stringent burden of proof for attribution, and even when that has been done, there are strict parameters determining the types of targets that can be pursued and the kind of action that can be taken.
Even if such a framework could be devised and policed, there would still be significant legal risks posed to a variety of stakeholders at a company. While the Internet is a borderless space accessed from every country in the world, each of those countries has its own legal system. Even if an American company were authorized to hack back, how could it ensure it would avoid running afoul of the laws of another country, not to mention international law?
What Directors Can Do
The discussion around hacking back so far has largely been driven by hyperbole, fear, and indignation. Feelings of fear and indignation are certainly easy to relate to, and as corporate directors, powerlessness does not sit well with us. It is our instinct and duty to defend our organizations from avoidable harm.
The potential costs of a misstep or unintended consequences from hack back should deter business leaders from undertaking such an effort. If another company or a group of individuals is affected, the company that hacked back could find itself facing expensive legal proceedings, reputational damage, and loss of trust among many of its stakeholders. Attempts to make organizations exempt from this kind of legal action are problematic, as they raise the question of how we can spot and stop accidental or intentional abuses of the system.
It’s one thing for students to unintentionally trigger war in the safe confines of a competitive mock scenario, and another thing entirely to be the business leader that does so in the real world. Directors of companies must instead work together to find better solutions to our complex cybersecurity problems. We should not legitimize vigilantism, particularly given the significant potential risks with dubious benefits.
Corey Thomas is CEO of Rapid7. All opinions expressed here are his own.