
Emerging Governance Lessons from Equifax

Published by

Michael Peregrine

It’s way too early to make any judgments on board conduct in the Equifax controversy. That’ll be for the courts to decide, and they’ll take a long time getting there. But it’s not too early to draw some useful governance lessons from the situation, if media reports are to be believed. And these are lessons that apply regardless of whether the board serves a publicly held, privately owned or nonprofit corporation.

Some of these lessons relate to the board’s crisis management responsibilities. Others relate to the oversight of the board-CEO relationship. Still others invoke expectations of board cybersecurity oversight.

All of the possible lessons are premised on the increasing recognition of the inevitability of crisis, be it black swan or foreseeable, cybersecurity-related or “from out of left field.” For most complex enterprises, crises are simply going to happen. The only questions are when, how big the crisis will be, and from what direction it will come. The most prescient of boards will embrace this inevitability and prepare for a corporate governance version of Defcon 3.

The other lessons are more practical in nature.

1. Emergency Succession. The swiftness of CEO Richard Smith’s removal speaks to the “nuts and bolts” value of having an emergency executive succession plan. The sudden Smith transition is a shocking example of how emergency succession applies to circumstances beyond customary triggers such as death, health, and family considerations. In today’s crisis-oriented environment, the need to separate from, and replace, even the most seasoned and successful executives can arise at a moment’s notice.

Succession is a part of the board’s basic responsibilities that often gets lost amid the confluence of best practices and consultant messaging. Such planning can be complicated. According to the New York Times, the Equifax board regarded many of its original replacement candidates as “tainted” by ties to the cyber breach—including some executives who are believed to have sold company stock after the breach was discovered but before it was disclosed to the public.

2. Structuring the Separation. There’s also the need to anticipate both the classification and the financial terms of executive separation in a crisis environment. According to media reports, Mr. Smith’s separation was described as a retirement. Yet the board announced that it was reserving the right to retroactively reclassify the separation as a for-cause termination, based upon the ultimate findings of a board special committee charged with reviewing the data breach. Such a reclassification would have obvious and material implications for Mr. Smith’s compensation arrangements, including valuable stock awards.

This action by the Equifax board reflects several key realities of the crisis environment.

  • It will often be difficult to fairly ascertain the presence of cause for termination purposes in the direct aftermath of a crisis. The consideration of the results of an internal investigation may be a necessary and equitable precondition.
  • While not yet considered a best practice, the use of clawbacks and other forms of executive compensation disgorgement arrangements is increasingly viewed as an effective response to executive fraud, malfeasance, or other misconduct. Clawback application has most recently been demonstrated by the actions of a financial services company board in response to a significant corporate controversy.
  • Boards must face the harsh reality of the need to impose separation in advance of intense scrutiny by the media, regulators, and possibly even legislators. The sometimes brutal corporate practice of “throwing executives under the bus” may be perceived both as part of an effective board response (i.e., to demonstrate board accountability) and as necessary to preserve the reputation of the company and the interests of its stakeholders. According to the Wall Street Journal, the departures of the Equifax chief information officer and chief security officer were not considered by the board to be a response of sufficient stature. Thus, the concept of “strict accountability” for executives in the context of major corporate controversies may increasingly be considered an indirect part of the compact between the board and management.

3. The Standard of Conduct. Another lesson is for the board to reconsider the effectiveness of its own cybersecurity oversight efforts. The leading judicial decisions have to date established a high, Caremark-style barrier to demonstrating a breach of cybersecurity oversight responsibilities. Notable in this regard was the decision of the court in the Home Depot case to extend the protection of the business judgment rule to the board’s conduct, despite the court’s clearly expressed concerns about the speed with which the board implemented protective measures.

However, boards should not place unreasonable reliance on Caremark protection. As instances of cyberbreaches become more egregious, it is reasonable to project a stricter approach to director liability in future cases.

4. The Self-Critique. Perhaps the most basic governance lesson from Equifax is the need for board self-evaluation. Any board-driven internal investigation of a corporate controversy will benefit from consideration of the adequacy of the full board’s related oversight efforts. For example, the Wall Street Journal reported that weaknesses in Equifax’s cybersecurity measures were “apparent to outside observers in the months before the hack.” Was the board made aware of these weaknesses? If not, why not? Such a self-critique has been an accepted component of truly comprehensive internal investigations since the “Powers Report” from the Enron board. The willingness to consider how possible governance inadequacies may have contributed to a crisis can serve as a powerful demonstration of the board’s good faith and assumption of ultimate responsibility.

Equifax is not, as some have characterized it, the second coming of Enron. That’s unnecessary hyperbole at this point. As exaggerated as commentary may be, what is known about the crisis offers a valuable teaching moment to boards about expectations of fiduciary conduct in crisis situations, cybersecurity or otherwise.

Michael W. Peregrine, a partner in McDermott Will & Emery, advises corporations, officers and directors on matters relating to corporate governance, fiduciary duties and officer/director liability issues. His views are his own and do not necessarily reflect the views of McDermott Will & Emery, its clients, or NACD.

Hacking Back Will Hold Companies Back

Published by

Corey E. Thomas

Undergraduate, graduate, and professional students of cybersecurity from around the world gathered earlier this year to participate in a competition that simulated the international policy challenges associated with a global cyberattack. While the goal was to practice sound policy decision-making, the majority of competing teams unintentionally led the U.S. into starting an international war. Offered a variety of diplomatic and other means of responding to simulated cyberattacks from China, participants largely took the aggressive approach of hacking back, with disastrous consequences.

While the competition’s participants are all students today, they may well go on to be corporate directors and government leaders of tomorrow. Based on current debate about how organizations in the private sector should respond to cyberattacks, it seems the actions taken by these students may well be representative of a broader trend. In fact, there is enough of a push for organizations to be legally authorized to “hack back” that earlier this year a member of Congress proposed a bill to empower people “to defend themselves online, just as they have the legal authority to do during a physical assault.”

As a business leader, I believe this measure would do more harm than good.

What Is Hack Back?

Hack back, sometimes called counterstrike, refers to an organization taking offensive action to pursue, and potentially subdue, the cyberattackers that have targeted it. For the purposes of this article, I am specifically talking about action taken by private sector organizations that affects computers external to their own networks. I am not discussing government actions, which tend to occur within existing legal frameworks and are subject to government oversight.

Hack back activities go beyond defensive measures that organizations may put in place to protect their environments. It is generally understood that hack back activities extend beyond the victim’s own network, systems, and assets, and may involve accessing, modifying, or damaging computers or networks that do not belong to the victim. Directors should note that today it is illegal under the Computer Fraud and Abuse Act for private parties to access or damage computer systems without authorization from the technology owners or an appropriate government entity, even if these systems are being used to attack you. That is what proponents of hack back want to change, and the proposed bill goes some way towards doing this.

The Case for “Self-Defense”

In response to this legal restriction, proponents of a law legalizing hacking back often argue that the same principle should apply as the one that allows U.S. citizens to defend themselves against intruders in their homes, even with violent force. While it may sound reasonable to meet an attack on a network with equal force, the Internet is a space of systems designed specifically for the purpose of interacting and communicating. Technology and users are increasingly interconnected. As a result, it’s almost impossible to ensure that defensive action targeted at a specific actor or group of actors will affect only the intended targets.

The reality of the argument for hacking back in self-defense is unfortunately more akin to standing by your fence and lobbing grenades into the street, hoping to get lucky and stop an attacker as they flee. With such an approach, even if you do manage to reach your attacker, you’ll almost certainly cause terrible collateral damage. Can your organization afford to clean up such a mess? What would be the repercussions for your reputation and position in the marketplace?

Blame Game

Another significant challenge for private sector organizations looking to hack back is that, unlike governments, they typically do not have the large-scale, sophisticated intelligence gathering programs needed to accurately attribute cyberattacks to the correct actor. Attackers constantly change their techniques to stay one step ahead of defenders and law enforcement, including leveraging deception techniques. This means that even when there are indications that point to a specific attacker, it is difficult to verify that they have not been planted to throw off suspicion, or to incriminate another party.

Similarly, it is difficult to judge motivations accurately and to determine an appropriate response. There is a fear that once organizations have hack back in their arsenal, it will become the de facto response rather than one option among the broad range that otherwise exists. This is even more problematic when you consider that an attack may be carried out by devices unwittingly conscripted into a botnet. These infected devices and their owners are as much victims of the attacker as the primary target is. Any attempt to hack back could cause them further harm.

The Security Poverty Line

Should hack back become a lawful response to a cyberattack, effective participation is likely to be costly, as the technique requires specialized skills. Not every organization will be able to afford to participate. If the authorization framework is not stringent, many organizations may try to participate with insufficient expertise, which is likely to prove ineffective, damaging, or both. Other organizations, however, will not have the maturity or budget to participate even in this way.

These are the same organizations that today cannot afford a great deal of in-house security expertise and technologies to protect themselves, and currently are also the most vulnerable. As organizations that do have sufficient resources begin to hack back, the cost of attacking these organizations will increase. Profit-motivated attackers will eventually shift towards targeting the less-resourced organizations that reside below the security poverty line, increasing their vulnerability.

A Lawless Land

Creating a policy framework that provides sufficient oversight of hack-back efforts would be impractical and costly. Who would run it? How would it be funded? And why would it be significantly more desirable than the status quo? When the U.S. government takes action against attackers, it must meet a stringent burden of proof for attribution, and even then there are strict parameters governing the types of targets that can be pursued and the kinds of action that can be taken.

Even if such a framework could be devised and policed, significant legal risks would remain for a variety of stakeholders at a company. While the Internet is a borderless space accessed from every country in the world, each of those countries has its own legal system. Even if an American company were authorized to hack back, how could it ensure that it would not run afoul of the laws of another country, not to mention international law?

What Directors Can Do

The discussion around hacking back has so far been driven largely by hyperbole, fear, and indignation. Fear and indignation are certainly easy to relate to, and powerlessness does not sit well with corporate directors. It is our instinct and duty to defend our organizations from avoidable harm.

The potential costs of a misstep or of unintended consequences should deter business leaders from undertaking such an effort. If another company or a group of individuals is affected, the company that hacked back could find itself facing expensive legal proceedings, reputational damage, and a loss of trust among its stakeholders. Attempts to make organizations exempt from this kind of legal action are problematic, as they raise the question of how we can spot and stop accidental or intentional abuses of the system.

It’s one thing for students to unintentionally trigger war in the safe confines of a competitive mock scenario, and another thing entirely to be the business leader who does so in the real world. Directors of companies must instead work together to find better solutions to our complex cybersecurity problems. We should not legitimize vigilantism, particularly given its significant potential risks and dubious benefits.

 

Corey Thomas is CEO of Rapid7. All opinions expressed here are his own.

The Corporate Director’s Guide to GDPR

Published by

Corey E. Thomas

On May 25, 2018, a major new piece of data protection regulation will come into effect across the European Union (EU), and with it comes the potential for hefty fines or penalties for your organization. Even if you do not directly operate in the EU, chances are that the General Data Protection Regulation (GDPR) still pertains to your company.

The regulation covers any entity that processes the personal data of EU citizens (referred to as “data subjects”), even if the organization does not provide goods or services to EU citizens and only handles or processes their data. Unless you are categorically sure that your organization does not and will not process EU citizens’ personal data, compliance is not optional.

The fine for an infringement can be as high as €20 million (approximately $23 million at today’s exchange rate) or 4 percent of your worldwide annual turnover, whichever is higher. It is essential for directors to pay attention to the data and information security practices in place to ensure that the organization is prepared and compliant.
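As a rough illustration of the arithmetic behind that ceiling, here is a minimal sketch in Python; the turnover figure is hypothetical, and an actual fine would depend on the nature and severity of the infringement.

    def max_gdpr_fine_eur(annual_turnover_eur: float) -> float:
        """Upper bound on a fine per the figures cited above: the greater of
        EUR 20 million or 4 percent of worldwide annual turnover."""
        return max(20_000_000.0, 0.04 * annual_turnover_eur)

    # Hypothetical company with EUR 2 billion in annual turnover:
    print(max_gdpr_fine_eur(2_000_000_000))  # 80000000.0 -- the 4 percent prong exceeds the EUR 20M floor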

The Policy Details of GDPR

The GDPR was written to ensure that organizations:

  • protect the personal data of ‘EU Natural Persons’ (i.e. living people);
  • are transparent, fair, and lawful about the processing of personal data;
  • only request and process necessary personal data;
  • do not share data with third parties or countries unless the correct legal agreements and processes are implemented; and
  • gain consent from data subjects to process their data.

Personal data is defined in the policy as “any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person.”

There are six principles that apply to the processing of personal data. According to the policy, personal data shall be:

  • processed lawfully, fairly, and in a transparent manner;
  • collected for specified, explicit, and legitimate purposes;
  • adequate, relevant, and limited to what is necessary;
  • accurate and, where necessary, kept up to date;
  • kept in a form which permits identification of data subjects for no longer than is necessary for the purposes for which the personal data are processed; and
  • processed in a manner that ensures appropriate security of the personal data.

Data subjects are provided with a set of legal rights under GDPR, including rights of access to their data, rectification, erasure, data portability, and objection to processing.

Each EU member state has a designated supervisory authority. These regulatory bodies are responsible for monitoring the application of GDPR, and have the power to audit organizations and to determine relevant warnings, reprimands, and fines for violations by the organization. When breaches of personal data occur, companies will be subject to a high level of scrutiny, and will have only a 72-hour window in which to report the breach. A personal data breach is described as “a breach of security leading to the accidental or unlawful destruction, loss, alteration, unauthorized disclosure of, or access to, personal data transmitted, stored or otherwise processed.”

There is a requirement for some organizations to appoint a data protection officer (DPO), whose responsibility it is to advise and inform on GDPR and to monitor compliance within the organization. The DPO acts as the main contact for both data subjects and the supervisory authority, must report to the highest level of management within the organization, and cannot perform any tasks or duties which result in a conflict of interest.

You need to ensure your organization has fully investigated the nuances of the requirements to ascertain whether you need to appoint such a role or prepare to meet other personnel or technical demands.

Where do we start?

Your organization first needs to define the team that will drive GDPR compliance and management. Within the C-suite this should include the chief information officer and the chief information security officer, in addition to representatives from legal counsel, human resources, risk and compliance, and privacy. Determine if you need to appoint a DPO. Once your team is assembled, assess your current state, so that you can plan next steps accordingly. This team should present results at least to your board’s audit committee, if not the full board, given the financial and reputational risks involved.

Understand your personal data retention

You should ask your GDPR team the following questions to determine what categories of personal data your organization is dealing with:

  • To whom does the data you collect and retain pertain?
  • Is it necessary to collect and keep this data?
  • If so, how long do you need to keep it?
  • Do you have permission from the data subject to process the data?
  • How is consent obtained from data subjects for each method of personal data collection?

Encourage your team to follow personal data on its journey through and beyond the organization. Doing so will help the GDPR team understand how the data is collected, stored, transmitted, accessed, and secured, and where and how it is passed on to any third parties.
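One way to make that exercise concrete is to keep a simple inventory that records, for each category of personal data, the answers to the questions above. The sketch below is a minimal, hypothetical example in Python; the field names and sample values are illustrative assumptions, not anything prescribed by the regulation.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class PersonalDataRecord:
        """One entry in a hypothetical personal-data inventory."""
        category: str               # e.g., "newsletter subscriber details"
        data_subjects: str          # to whom the data pertains
        purpose: str                # why it is collected and retained
        retention_period_days: int  # how long it is kept
        lawful_basis: str           # e.g., "consent", "contract"
        storage_locations: List[str] = field(default_factory=list)
        third_party_recipients: List[str] = field(default_factory=list)

    inventory = [
        PersonalDataRecord(
            category="newsletter subscriber details",
            data_subjects="EU and non-EU marketing contacts",
            purpose="email marketing",
            retention_period_days=730,
            lawful_basis="consent",
            storage_locations=["CRM (EU data center)"],
            third_party_recipients=["email service provider"],
        ),
    ]

Walking each such record through collection, storage, transmission, access, and onward transfer helps the team spot gaps, such as data kept longer than necessary or shared without the right agreements in place.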

Review how your organization collects consent from individuals to process their personal data

EU citizens must be able to give and rescind consent for their personal data to be processed. Consent means “any freely given, specific, informed, and unambiguous indication of the data subject’s wishes by which he or she, by a statement or by a clear affirmative action, signifies agreement to the processing of personal data relating to him or her.”

In a contractual situation, the provision of a service may require personal data to be processed in order for the service to function correctly. Where that is the case, it must be made clear to the data subject when they register for the service.
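To illustrate what capturing and rescinding consent might look like in practice, here is a minimal, hypothetical sketch; the structure and field names are illustrative assumptions, not requirements drawn from the regulation.

    from dataclasses import dataclass
    from datetime import datetime, timezone
    from typing import Optional

    @dataclass
    class ConsentRecord:
        """Hypothetical record of one data subject's consent for one purpose."""
        subject_id: str
        purpose: str                 # the specific processing consented to
        granted_at: datetime
        withdrawn_at: Optional[datetime] = None

        def withdraw(self) -> None:
            # Consent should be as easy to rescind as it was to give.
            self.withdrawn_at = datetime.now(timezone.utc)

        @property
        def is_active(self) -> bool:
            return self.withdrawn_at is None

    # Usage: record consent at sign-up, honor withdrawal later.
    consent = ConsentRecord("subject-123", "email marketing",
                            granted_at=datetime.now(timezone.utc))
    consent.withdraw()
    print(consent.is_active)  # False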

Identify partner and supplier risk

Review third-party legal agreements to ensure that EU citizens’ personal data provided to a third party is handled in a compliant manner. Otherwise, your organization will be held accountable for a vendor’s data breach or data loss. If you process personal data on behalf of another organization, you will need to demonstrate your own compliance with GDPR, and ensure your legal agreements reflect this accordingly.

Ensure your cybersecurity programs are up to par

Your security posture and processes impact the journey and security of personal data, and should be assessed accordingly. GDPR Article 32 stipulates that you must ensure a level of security appropriate to the risk involved with the data. This might require adjustments to your security program, especially if you have weighted your security setup to focus primarily on prevention and are lighter in the areas of detection and correction. Visibility across your ecosystem is vital for determining risk. Knowing your weak points will help you understand where to bolster your security, and testing out your processes will determine whether they are fit for purpose.

Get regular updates on progress and status

As individual reviews are completed, have each leader report back to the core and leadership teams with a set of prioritized actions and milestones. Set up a frequent cycle of reporting to understand the progress of your GDPR compliance status. The spring of 2018 is clearly too late to be finding problems.

In conclusion

If your organization employs, partners with, or serves people who are citizens of the European Union, you are subject to GDPR. Given the detailed stipulations of the regulation, along with the risk of steep fines, it’s not something you can afford to ignore or put off. As a board member, you’ll want to ensure the organizations you serve are prepared to meet the challenge and reduce the risk.

Corey E. Thomas is president and CEO of Rapid7. He is a director of Blue Cross Blue Shield of Massachusetts and the Greater Boston Chamber of Commerce.