Tag Archive: cyber-risk oversight

Hacking Back Will Hold Companies Back

Published by

Corey E. Thomas

Undergraduate, graduate, and professional students of cybersecurity from around the world gathered earlier this year to participate in a cybersecurity competition that simulated the international policy challenges associated with a global cyberattack. While the goal was to practice sound policy decisions, the majority of competing teams unintentionally led the U.S. into starting an international war. Given a variety of diplomatic and other means of responding to cyberattacks, participants largely took the aggressive approach of hacking back in response to cyberattacks from China, with disastrous consequences.

While the competition’s participants are all students today, they may well go on to be corporate directors and government leaders of tomorrow. Based on current debate about how organizations in the private sector should respond to cyberattacks, it seems the actions taken by these students may well be representative of a broader trend. In fact, there is enough of a push for organizations to be legally authorized to “hack back” that earlier this year a member of Congress proposed a bill to empower people “to defend themselves online, just as they have the legal authority to do during a physical assault.”

As a business leader, I believe this measure would do more harm than good.

What Is Hack Back?

Hack back, which is sometimes called counterstrike, is a term used to refer to an organization taking offensive action to pursue, and potentially subdue, cyberattackers that have targeted it. For the purposes of this article, I am specifically talking about action taken by private sector organizations that affects computers external to their own network. We are not discussing government actions, which tend to occur within existing legal frameworks and are subject to government oversight.

Hack back activities go beyond defensive measures that organizations may put in place to protect their environments. It is generally understood that hack back activities extend beyond the victim’s own network, systems, and assets, and may involve accessing, modifying, or damaging computers or networks that do not belong to the victim. Directors should note that today it is illegal under the Computer Fraud and Abuse Act for private parties to access or damage computer systems without authorization from the technology owners or an appropriate government entity, even if these systems are being used to attack you. That is what proponents of hack back want to change, and the proposed bill goes some way towards doing this.

The Case for “Self Defense”

In response to the legal restriction, proponents of a law to legalize hacking back at cyberattackers often argue that the same principle should apply as that which allows US citizens to defend themselves against intruders in their homes—even with violent force. While it may sound reasonable to apply equal force in defense of a network, the Internet is a space of systems designed specifically for the purpose of interacting and communicating. Technology and users are increasingly interconnected. As a result, it’s almost impossible to ensure that defensive action targeted at a specific actor or group of actors will only affect the intended targets.

The reality of the argument for hacking back in self-defense is unfortunately more akin to standing by your fence and lobbing grenades into the street, hoping to get lucky and stop an attacker as they flee. With such an approach, even if you do manage to reach your attacker, you’ll almost certainly cause terrible collateral damage. Can your organization afford to clean up such a mess? What would be the repercussions for your reputation and position in the marketplace?

Blame Game

Another significant challenge for private sector organizations looking to hack back is that, unlike governments, they typically do not have the large-scale, sophisticated intelligence gathering programs needed to accurately attribute cyberattacks to the correct actor. Attackers constantly change their techniques to stay one step ahead of defenders and law enforcement, including leveraging deception techniques. This means that even when there are indications that point to a specific attacker, it is difficult to verify that they have not been planted to throw off suspicion, or to incriminate another party.

Similarly, it is difficult to judge motivations accurately and to determine an appropriate response. There is a fear that once people have hack back in their arsenal, it will become the de facto response, crowding out the broad range of options that otherwise exist. This is even more problematic when you consider that devices unwittingly operating as part of a botnet may be used to carry out an attack. These infected devices and their owners are as much victims of the attacker as the primary target. Any attempt to hack back could cause them further harm.

The Security Poverty Line

Should hack back be made a lawful response to a cyberattack, effective participation is likely to be costly, as the technique requires specialized skills. Not every organization will be able to afford to participate. If the authorization framework is not stringent, many organizations may try to participate with insufficient expertise, which is likely to be ineffective, damaging, or both. However, other organizations will not have the maturity or budget to participate even in this way.

These are the same organizations that today cannot afford a great deal of in-house security expertise and technologies to protect themselves, and currently are also the most vulnerable. As organizations that do have sufficient resources begin to hack back, the cost of attacking these organizations will increase. Profit-motivated attackers will eventually shift towards targeting the less-resourced organizations that reside below the security poverty line, increasing their vulnerability.

A Lawless Land

Creating a policy framework that provides sufficient oversight of hack-back efforts would be impractical and costly. Who would run it? How would it be funded? And why would this be significantly more desirable than the status quo? When the U.S. government takes action against attackers, it must meet a stringent burden of proof for attribution, and even then, there are strict parameters determining the types of targets that can be pursued and the kind of action that can be taken.

Even if such a framework could be devised and policed, there would still be significant legal risks for a variety of stakeholders at a company. While the Internet is a borderless space accessed from every country in the world, each of those countries has its own legal system. Even if an American company were authorized to hack back, how could you ensure your organization would avoid falling afoul of the laws of another country, not to mention international law?

What Directors Can Do

The discussion around hacking back so far has largely been driven by hyperbole, fear, and indignation. Feelings of fear and indignation are certainly easy to relate to, and as corporate directors, powerlessness does not sit well with us. It is our instinct and duty to defend our organizations from avoidable harm.

The potential costs of a misstep or unintended consequences from hacking back should deter business leaders from undertaking such an effort. If another company or a group of individuals is affected, the company that hacked back could find itself facing expensive legal proceedings, reputational damage, and a loss of trust among its stakeholders. Attempts to exempt organizations from this kind of legal action are also problematic, because they raise the question of how accidental or intentional abuses of the system could be spotted and stopped.

It’s one thing for students to unintentionally trigger war in the safe confines of a competitive mock scenario, and another thing entirely to be the business leader that does so in the real world. Directors of companies must instead work together to find better solutions to our complex cybersecurity problems. We should not legitimize vigilantism, particularly given the significant potential risks with dubious benefits.

 

Corey Thomas is CEO of Rapid7. All opinions expressed here are his own.

Three Steps to Improving Cybersecurity Oversight in the Boardroom

Published by

Robert P. Silvers

Robert P. Silvers is a respected expert on Internet of Things security and effective corporate planning and response to cybersecurity incidents. Silvers is a partner at Paul Hastings and previously served as the Obama administration’s assistant secretary for cyber policy at the U.S. Department of Homeland Security. Silvers will speak at NACD’s 2017 Global Board Leaders’ Summit in October and NACD’s Technology Symposium in July.

Cybersecurity breaches pose a growing threat to any organization. As we’ve seen in recent years, and indeed in recent weeks, the most sophisticated companies and even governments aren’t immune from cyberattack. Ransomware has become a global menace, and payment data and customers’ personal information are routinely swiped and sold on the “dark web” in bulk. Next-generation Internet of Things devices are wowing consumers, but they are also targets, as Internet connectivity becomes standard-issue in more and more product lines.

How do directors prepare for this landscape? Everyone now acknowledges the importance of cybersecurity, but it is daunting to begin to think about implementing a cybersecurity plan because it’s technical, fast-moving, and has no “silver-bullet” solutions. Most boards now consult regularly with the organization’s information security team, but the discussions can be frustrating because it’s hard to gauge readiness and where the organization really stands in comparison to its peers. Sometimes directors confide in me, quietly and on the sidelines, that their real cybersecurity strategy is one of hope and prayer.

There are steps directors can take now to prepare for incidents so that, when they occur, the company’s response is well-oiled. With the right resources and preparation, boards can safely navigate these difficult and unforeseen situations. Three key strategies can assist directors as they provide oversight for cybersecurity risks:

  • Building relationships with law enforcement officials
  • Having incident response plans in place (and practicing them)
  • Staying educated on cybersecurity trends

1. Building Relationships With Law Enforcement Officials

It’s no secret that relationships are central to success. Building the right relationships now, before your worst-case scenario happens, will help manage the situation. The Federal Bureau of Investigation is generally the lead federal investigative agency when it comes to cybercrime, and the United States Secret Service also plays an important role in the financial services and payment systems sectors.

Boards should ensure company management educates law enforcement officials from these agencies about the company’s business and potential risks. In turn, the company should ask law enforcement to keep it apprised of emergent threats in real time. There should also be designated points of contact on each side to allow for ongoing communications and make it clear whom to contact during an incident. This is critical to ensuring that the company has allies already in place in the event that a cyberattack occurs.

2. Having—and Practicing—Incident Response Plans

Directors should ask to see copies of the company’s written cyberbreach response plan. This document is essential. A good incident response plan addresses the many parallel efforts that will need to take place during a cyberattack, including:

a. Technical investigation and remediation;
b. Public relations messaging;
c. Managing customer concern and fallout;
d. Managing human resources issues, particularly if employee data has been stolen or if the perpetrator of the attack is a rogue employee;
e. Coordination with law enforcement; and
f. Coordination with regulators and preparedness for the civil litigation that increasingly follows cyberattacks.

An incident response plan is only valuable if it is updated, if all the relevant divisions within a company are familiar with it, and if these divisions have “buy in” to the process. If the plan is old or a key division doesn’t feel bound by it, the plan isn’t going to work. Directors should insist the plan be updated regularly and that the company’s divisions exercise the plan through simulated cyber incidents, often called “table-top exercises.” Indeed, table-top exercises for the board itself can be an excellent way to familiarize directors with the company’s incident response plan and its cyber posture more generally.

3. Staying Educated on Cybersecurity Trends

As your board is building relationships with law enforcement officials and preparing an incident response plan, directors should also be educating themselves on cyber risk. Cybersecurity becomes more approachable as you invest the time to learn—and it’s a fascinating subject that directors enjoy thinking about. Do you know what a breach will look like for your company? What protocols do you have in place in case something happens?

According to the 2016–2017 NACD Public Company Governance Survey, 89 percent of public company directors said cybersecurity is discussed regularly during board meetings. Since a majority of directors in the room agree that cybersecurity is worth discussing, directors should collectively and individually prioritize learning the ins and outs of cyber risks.

One easy way to stay up to date on the latest is to ask the company’s information technology security team for periodic reports of the most significant security events that the company has encountered. This will give directors a feel for the rhythm of threats the company faces day in and day out.

Another option is for directors to take a professional course and get certified. The NACD Cyber-Risk Oversight Program is a great example of a course designed to help directors enhance their cybersecurity literacy and strengthen the board’s role in providing oversight for cyber preparedness. Consider these options to keep yourself as educated and informed as possible.

The more you can prepare individually, the better off you will be when you have to provide oversight for a cybersecurity breach at your company.

 

Questions to Ask After the WannaCry Attack

Published by

Major General (Ret.) Brett Williams

After last week’s devastating global ransomware attack, now known as WannaCry, directors will once again be questioning management teams to make sure the company is protected. The challenge is that most directors do not know what questions they should be asking.

If I were sitting on a board, this attack would prompt me to ask questions about the following three areas:

  • End of Life (EOL) software;
  • patching; and
  • disaster recovery.

EOL Software. EOL software is software that is no longer supported by the company that developed it in the first place, meaning that it is not updated or patched to protect against emerging threats. WannaCry took advantage of versions of the Microsoft Windows operating system that were beyond EOL and had well-known security vulnerabilities.

Typically, a company runs EOL software because it has a critical application that requires customized software that cannot run on a current operating system. This situation might force you to maintain an EOL version of Windows, for example, to run the software. In the instance of WannaCry, Windows XP and Windows 8 in particular were targeted. Boards should ask: What risks are we taking by allowing management to continue running EOL software? Are there other options? Could we contract for the development of a new solution? If not, what measures have we taken to mitigate the risks of relying on EOL software?

Other times, companies run EOL software because they do not want to pay for the new software or they expect an unacceptable level of operational friction during the transition from the old version to the new. Particularly in a large, complex environment, the cross-platform dependencies can be difficult to understand and predict. Again, it is a risk assessment. What is the risk of running the outdated software, particularly when it supports a critical business function? If a replacement is perceived as unaffordable, how does its cost compare to the cost of a breach? Directors should also ask where the company is running EOL software, and why.
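As a concrete illustration of the inventory question, a minimal sketch of an EOL report is below. The hostnames and asset list are hypothetical; in practice this data would come from the company's asset-management or endpoint-management system. The end-of-support dates for Windows XP and Windows 8 are the published Microsoft dates.

```python
from datetime import date

# Published end-of-support dates (Windows XP and 8 are the EOL versions
# relevant to WannaCry; Windows 10 is included for contrast).
EOL_DATES = {
    "Windows XP": date(2014, 4, 8),
    "Windows 8": date(2016, 1, 12),
    "Windows 10": date(2025, 10, 14),
}

# Hypothetical asset inventory for the sketch.
assets = [
    {"host": "hr-payroll-01", "os": "Windows XP"},
    {"host": "web-frontend-02", "os": "Windows 10"},
]

def eol_report(assets, today):
    """Return the assets running operating systems past end-of-life."""
    return [a for a in assets if EOL_DATES.get(a["os"], date.max) <= today]

for asset in eol_report(assets, today=date(2017, 5, 15)):
    print(f'{asset["host"]} is running EOL software: {asset["os"]}')
```

A report like this does not answer the "why" behind each EOL system, but it gives the board a factual starting point for that conversation.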

Patching. Software companies regularly release updates to their software called patches. The patches address performance issues, fix software bugs, add functionality, and eliminate security vulnerabilities. At any one time, even a mid-sized company could have a backlog of hundreds of patches that have not been applied. This backlog develops for a variety of reasons, but the central issue is that information technology staff are concerned that applying the patch may “break” some process or software integration and impact the business. This is a valid concern.

In the case of WannaCry, Microsoft issued a patch in March that would eliminate the vulnerability that allowed the malware to spread. Two months later, hundreds of thousands of machines remained unpatched and were successfully compromised.

Directors should ask for a high-level description of the risk management framework applied to the patching process. Do we treat critical patches differently from lower-grade patches? Have we identified the software that supports critical business processes, and do we apply a tighter time standard to patching it? If a patch will close a critical security vulnerability but may also disrupt a strategic business function, are leaders at the appropriate level of the business planning to manage the disruption while also securing the enterprise? Have we invested in solutions that expedite the patching process so that we can patch as efficiently as possible?

Disaster Recovery. It is considered a disaster when your company ceases to execute core business functions because of a cyberattack. In the case of WannaCry, many businesses, including essential medical facilities in the United Kingdom, could not function. WannaCry was a potent example of how a cyberattack, which is an abstract concept for many business leaders, can have devastating impact in the physical world.

One aspect of disaster recovery is how quickly a company can recover data that has been encrypted or destroyed. Directors should have a strategic view of the data backup and recovery process. Have we identified the critical data that must be backed up? Have we determined the period of time the backup needs to cover and how quickly we need to be able to switch to the backup? Have we tested ourselves to prove that we could successfully pivot to the backup? What business impact is likely to occur?
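The questions about backup coverage and switchover speed map onto two standard metrics: the recovery point objective (RPO, how much data loss is tolerable) and the recovery time objective (RTO, how quickly operations must resume). A minimal sketch of the checks, with illustrative numbers chosen for the example:

```python
from datetime import datetime, timedelta

def backup_within_rpo(last_backup, rpo_hours, now):
    """True if the most recent successful backup falls inside the
    recovery point objective (maximum tolerable window of data loss)."""
    return now - last_backup <= timedelta(hours=rpo_hours)

def recovery_meets_rto(measured_recovery_hours, rto_hours):
    """True if a *tested* failover completed within the recovery time
    objective (how quickly we must be able to switch to the backup)."""
    return measured_recovery_hours <= rto_hours

# Example: against a 24-hour RPO, a backup taken 12 hours ago passes
# and one taken 36 hours ago fails.
now = datetime(2017, 5, 15, 12, 0)
print(backup_within_rpo(datetime(2017, 5, 15, 0, 0), 24, now))   # passes
print(backup_within_rpo(datetime(2017, 5, 14, 0, 0), 24, now))   # fails
```

The point of the second function is that the RTO number must come from an actual recovery test, not an estimate, which is exactly the "have we tested ourselves" question above.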

The hospitals impacted by WannaCry present another angle of the disaster recovery scenario. For these hospitals, the disaster wasn’t limited to the loss of data. Most medical devices in use today interface with a computer for command and control of that device. During this attack, those command and control computers were rendered inoperative when the ransomware encrypted the software that allows the control computer to issue commands to the connected device. In many cases there is no way to revert to “manual” control. This scenario is particularly troubling given the potential to cause bodily harm.

It is easy to see a similar attack in a manufacturing plant, where a control unit could be disabled, bringing an assembly line to a halt. And it is not hard to imagine a threat to life and limb in a scenario where we rely on computer control to maintain temperatures and pressures at a safe level in a nuclear power plant.

Directors should ask about the process to recover control of critical assets. Can we activate backup systems that were not connected to the network at the time of the attack? If we bring the backup system online, how do we know it will not be infected by the same malware? Have the appropriate departments practiced recovery process scenarios? What was the level of business disruption? Does everyone in the company know his or her role in getting critical operations back up and running?

Directors provide oversight of the risk management process—they do not execute the process. Understanding how the company is managing risk around EOL software, patching, and disaster recovery sets the right tone at the top and ensures that the company is better prepared for the inevitable next round of attacks.


Major General (Retired) Brett Williams is a co-founder of IronNet Cybersecurity and the former Director of Operations at U.S. Cyber Command. He is an NACD Board Governance Fellow and faculty member with NACD’s Board Advisory Services where he conducts in-depth cyber-risk oversight seminars for member boards. Brett is also a noted keynote speaker on a variety of cyber related topics.

Looking to strengthen your board’s cyber-risk oversight? Click here to review NACD’s Cyber-Risk Oversight Board Resource Center.