Category: Regulations & Legislation

The Auditor’s Report: Reading Between New Lines

Published by

Alexandra R. Lajoux

Now that the U.S. Securities and Exchange Commission (SEC) has released an order approving the Public Company Accounting Oversight Board’s (PCAOB) new rules on the auditor’s report, what items should the audit committee and shareholders look for there?

The Auditor’s Report on an Audit of Financial Statements When the Auditor Expresses an Unqualified Opinion and Related Amendments to PCAOB Standards, released by the PCAOB June 1 and approved by the SEC October 23, contains five main changes, including one that requires careful reading between the lines.

As NACD summarized in a recent brief to its members, the new PCAOB standard will require auditors to:

  • Standardize the format of the auditor’s report, placing the auditor’s opinion in the first section of the auditor’s report, followed by the basis for the opinion. This change makes the auditor’s opinion easier to find in the auditor’s report.
  • Disclose the auditor’s tenure, stating when the audit firm began its current service to the company. This new requirement comes in lieu of limiting audit firm tenure through mandatory audit firm rotation, a concept NACD and others have rejected in the past.
  • State that the auditor is required to be “independent.” This requirement is intended to strengthen shareholder confidence in the auditor’s report, possibly as an offset to the tenure disclosure, if it reveals that the auditor has been serving the client for more than a quarter century, for example.
  • State that the financial statements are free from material misstatements “whether due to error or fraud.” This change aligns with other recent or pending regulations on error vs. fraud, such as the proposed executive pay clawbacks rule still pending under Dodd-Frank, which mandated disgorgement of performance-based pay after financial restatements even if restatements were due to error rather than to fraud.
  • Report on critical audit matters (CAMs), defined as “matters communicated or required to be communicated to the audit committee and that: (1) relate to accounts or disclosures that are material to the financial statements; and (2) involved especially challenging, subjective, or complex auditor judgment.” A number of commenters said that the CAMs mandate is “redundant” with existing reports, which already reveal the required information. See, for example, NACD’s comment to the PCAOB or State Street’s comment.

The key letter in CAM is M, for material. For those who wonder what counts as “material” to the financial statements, join the club. The SEC has never defined the term, leaving that job to the courts as they interpret the federal securities laws.

The going definition of “material” is more than 40 years old. The SEC release cites TSC Industries v. Northway, Inc., 426 U.S. 438, 449 (1976), in which the U.S. Supreme Court states that a fact is material if there is “a substantial likelihood that the . . . fact would have been viewed by the reasonable investor as having significantly altered the ‘total mix’ of information made available.” In that same case, the Supreme Court said that determining materiality requires “delicate assessments of the inferences a ‘reasonable shareholder’ would draw from a given set of facts and the significance of those inferences to him . . .”

Such wisdom is not lost on the PCAOB and SEC. In its June 1 release, the PCAOB cites as examples of CAMs the auditor’s evaluation of the company’s “goodwill impairment assessment” and, more broadly, the auditor’s assessment of the company’s “ability to continue as a going concern.” These two examples are material to the financial statements. By contrast, the following two examples are not material to the financial statements: a loss contingency already discussed with the audit committee and “determined to be remote,” and a “potential illegal act.”

Audit committees need to ensure that their auditors are in a position to recognize critical audit matters, and to learn from those matters.  But this does not mean looking for problems where there are none.

Significantly, SEC Chair Jay Clayton had this to say about the new standard:

“I would be disappointed if the new audit reporting standard, which has the potential to provide investors with meaningful incremental information, instead resulted in frivolous litigation costs, defensive, lawyer-driven auditor communications, or antagonistic auditor-audit committee relationships — with Main Street investors ending up in a worse position than they were before.

I therefore urge all involved in the implementation of the revised auditing standards, including the Commission and the PCAOB, to pay close attention to these issues going forward, including carefully reading the guidance provided in the approval order and the PCAOB’s adopting release.”

To Chairman Clayton’s point, the SEC notes in its approval order:

“As the [PCAOB] notes, in order to succeed, any claim based on these new statements would have to establish all of the elements of the relevant cause of action (e.g., when applicable, scienter, loss causation, and reliance). Moreover, as discussed above, CAMs could be used to defend as well as initiate litigation. …However, because of these risks and other concerns expressed by commenters, we expect the Board to monitor the Proposed Rules after implementation for any unintended consequences.” (SEC approval order, pp. 32–33)

Shareholders and others should read between the lines of the auditor’s report (appreciating the regulations behind it), but they should not expect auditors to “look under rocks” to find problems. That is the job of management, internal control, and the audit committee. The auditor’s job is to focus on the audit of the financial statements to ensure that they conform to generally accepted accounting principles (GAAP). Given the complexity of GAAP, that is a big enough job as it is.

The CAM standard can’t be mastered overnight and won’t be required any time soon. Auditors of large accelerated filers will not be required to report on CAMs until audits of fiscal years ending on or after June 30, 2019; auditors of all remaining filers must adopt the CAM changes for audits of fiscal years ending on or after December 15, 2020.

By contrast, all the other changes will apply to audits of fiscal years ending on or after December 15, 2017. That means, essentially, that auditors must begin work on these changes immediately, since most companies they are working with right now have fiscal years ending December 31, 2017. (According to Audit Analytics, 71 percent of public companies have a fiscal year ending December 31.)
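For those who want to pin down which deadline applies to a given audit, the following is a minimal sketch in Python of the timing logic described above, using the effective dates stated in the release; the function names are hypothetical and the snippet is an illustration only, not compliance guidance.

    from datetime import date

    # Effective dates described above (illustrative constants)
    CAM_LARGE_ACCELERATED = date(2019, 6, 30)   # CAMs: large accelerated filers, fiscal years ending on or after this date
    CAM_ALL_OTHER_FILERS = date(2020, 12, 15)   # CAMs: all remaining filers
    OTHER_REPORT_CHANGES = date(2017, 12, 15)   # format, tenure, independence, error-or-fraud language

    def cam_required(fiscal_year_end: date, large_accelerated_filer: bool) -> bool:
        """True if the CAM requirement applies to an audit of this fiscal year."""
        threshold = CAM_LARGE_ACCELERATED if large_accelerated_filer else CAM_ALL_OTHER_FILERS
        return fiscal_year_end >= threshold

    def other_changes_required(fiscal_year_end: date) -> bool:
        """True if the other changes to the auditor's report apply."""
        return fiscal_year_end >= OTHER_REPORT_CHANGES

    # A calendar-year large accelerated filer:
    print(other_changes_required(date(2017, 12, 31)))   # True  -- applies this year-end
    print(cam_required(date(2017, 12, 31), True))       # False -- first applies to FY2019 year-ends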

So now is the time to prepare for the changes! In its above-cited report on the new rule, NACD prepared questions for directors to ask, along with related resources.

Questions for Boards

  • For which fiscal year will our auditor first be required to report on CAMs?
  • What areas during the audit do we anticipate our auditor will find challenging, subjective, or complex—and how can we preemptively address those concerns?
  • How will the auditor’s insights in the newly expanded report affect our ongoing work as we prepare the audit committee report for the proxy and review risk disclosures in the annual report on Form 10-K?
  • How will it shape our meeting with auditors, who themselves have extensive standards for their communications with audit committees?
  • How might our company need to adjust our year-end reporting calendar in order to file the 10-K on time?

NACD Resources: See NACD’s commentary on this topic to the PCAOB in the Corporate Governance Standards Resource Center, and visit NACD’s Audit Committee Resource Center for a repository of content related to leading practices for the audit committee. Register for the KPMG webinar “What You Need to Know About the New Auditor Reporting Model” on Thursday, November 9, and review the Center for Audit Quality’s recent alert “The Auditor’s Report—New Requirements for 2017.”

Hacking Back Will Hold Companies Back

Published by

Corey E. Thomas

Undergraduate, graduate, and professional students of cybersecurity from around the world gathered earlier this year to participate in a cybersecurity competition that simulated the international policy challenges associated with a global cyberattack. While the goal was to practice sound policy decisions, the majority of competing teams unintentionally led the U.S. into starting an international war. Although they were given a variety of diplomatic and other means of responding to cyberattacks, participants largely took the aggressive approach of hacking back in response to attacks from China, with disastrous consequences.

While the competition’s participants are all students today, they may well go on to be corporate directors and government leaders of tomorrow. Based on current debate about how organizations in the private sector should respond to cyberattacks, it seems the actions taken by these students may well be representative of a broader trend. In fact, there is enough of a push for organizations to be legally authorized to “hack back” that earlier this year a member of Congress proposed a bill to empower people “to defend themselves online, just as they have the legal authority to do during a physical assault.”

As a business leader, I believe this measure would do more harm than good.

What Is Hack Back?

Hack back, sometimes called counterstrike, refers to an organization taking offensive action to pursue, and potentially subdue, cyberattackers that have targeted it. For the purposes of this article, I am specifically talking about action taken by private sector organizations that affects computers external to their own networks. I am not discussing government actions, which tend to occur within existing legal frameworks and are subject to government oversight.

Hack back activities go beyond defensive measures that organizations may put in place to protect their environments. It is generally understood that hack back activities extend beyond the victim’s own network, systems, and assets, and may involve accessing, modifying, or damaging computers or networks that do not belong to the victim. Directors should note that today it is illegal under the Computer Fraud and Abuse Act for private parties to access or damage computer systems without authorization from the technology owners or an appropriate government entity, even if these systems are being used to attack you. That is what proponents of hack back want to change, and the proposed bill goes some way towards doing this.

The Case for “Self-Defense”

In response to this legal restriction, proponents of a law to legalize hacking back at cyberattackers often argue that the same principle should apply as that which allows U.S. citizens to defend themselves against intruders in their homes, even with violent force. While it may sound reasonable to apply equal force in defending a network, the Internet is a space of systems designed specifically for the purpose of interacting and communicating. Technology and users are increasingly interconnected. As a result, it is almost impossible to ensure that defensive action targeted at a specific actor or group of actors will affect only the intended targets.

The reality of the argument for hacking back in self-defense is unfortunately more akin to standing by your fence and lobbing grenades into the street, hoping to get lucky and stop an attacker as they flee. With such an approach, even if you do manage to reach your attacker, you’ll almost certainly cause terrible collateral damage. Can your organization afford to clean up such a mess? What would be the repercussions for your reputation and position in the marketplace?

Blame Game

Another significant challenge for private sector organizations looking to hack back is that, unlike governments, they typically do not have the large-scale, sophisticated intelligence gathering programs needed to accurately attribute cyberattacks to the correct actor. Attackers constantly change their techniques to stay one step ahead of defenders and law enforcement, including leveraging deception techniques. This means that even when there are indications that point to a specific attacker, it is difficult to verify that they have not been planted to throw off suspicion, or to incriminate another party.

Similarly, it is difficult to judge motivations accurately and to determine an appropriate response. There is a fear that once organizations have hack back in their arsenal, it will become the de facto response, crowding out the broad range of other options that exist. This is even more problematic when you consider that an attack may be carried out by devices operating as part of a botnet without their owners’ knowledge. These infected devices and their owners are as much victims of the attacker as the primary target. Any attempt to hack back could cause them further harm.

The Security Poverty Line

Should hack back be made a lawful response to a cyberattack, effective participation is likely to be costly, as the technique requires specialized skills. Not every organization will be able to afford to participate. If the authorization framework is not stringent, many organizations may try to participate with insufficient expertise, which is likely to prove ineffective, damaging, or both. Other organizations will not have the maturity or budget to participate even in this way.

These are the same organizations that today cannot afford a great deal of in-house security expertise and technologies to protect themselves, and currently are also the most vulnerable. As organizations that do have sufficient resources begin to hack back, the cost of attacking these organizations will increase. Profit-motivated attackers will eventually shift towards targeting the less-resourced organizations that reside below the security poverty line, increasing their vulnerability.

A Lawless Land

Creating a policy framework that provides sufficient oversight of hack-back efforts would be impractical and costly. Who would run it? How would it be funded? And why would this be significantly more desirable than the status quo? When the U.S. government takes action against attackers, it must meet a stringent burden of proof for attribution, and even then there are strict parameters determining the types of targets that can be pursued and the kinds of action that can be taken.

Even if such a framework could be devised and policed, there would still be significant legal risks posed to a variety of stakeholders at a company. While the Internet is a borderless space accessed from every country in the world, each of those countries has its own legal system. Even if an American company were authorized to hack back, how could it ensure that it would avoid running afoul of the laws of another country, not to mention international law?

What Directors Can Do

The discussion around hacking back so far has largely been driven by hyperbole, fear, and indignation. Feelings of fear and indignation are certainly easy to relate to, and as corporate directors, powerlessness does not sit well with us. It is our instinct and duty to defend our organizations from avoidable harm.

The potential costs of a misstep or unintended consequences from hacking back should deter business leaders from undertaking such an effort. If another company or a group of individuals is affected, the company that hacked back could find itself facing expensive legal proceedings, reputational damage, and a loss of trust among its stakeholders. Attempts to make organizations exempt from this kind of legal action are also problematic, as they raise the question of how accidental or intentional abuses of the system could be spotted and stopped.

It’s one thing for students to unintentionally trigger war in the safe confines of a competitive mock scenario, and another thing entirely to be the business leader that does so in the real world. Directors of companies must instead work together to find better solutions to our complex cybersecurity problems. We should not legitimize vigilantism, particularly given the significant potential risks with dubious benefits.

 

Corey Thomas is CEO of Rapid7. All opinions expressed here are his own.

The Corporate Director’s Guide to GDPR

Published by

Corey Thomas

On May 25, 2018, a major new piece of data protection regulation will come into effect across the European Union (EU), and with it comes the potential for hefty fines or penalties for your organization. Even if you do not directly operate in the EU, chances are that the General Data Protection Regulation (GDPR) still pertains to your company.

The regulation covers any entity that processes the personal data of EU citizens (referred to as “data subjects”), even if the organization does not provide goods or services to EU citizens and only handles or processes their data. Unless you are categorically sure that your organization does not and will not process EU citizens’ personal data, compliance is not optional.

The fine for an infringement can be as much as €20 million (approximately $23 million at today’s exchange rate) or 4 percent of your worldwide annual turnover, whichever is higher. It is essential for directors to pay attention to the data and information security practices in place to ensure that the organization is prepared and compliant.
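As a rough illustration of how that ceiling scales with revenue, the following is a minimal sketch in Python of the “higher of €20 million or 4 percent of worldwide annual turnover” calculation; the turnover figures are assumptions for the example, not guidance.

    def max_gdpr_fine_eur(worldwide_annual_turnover_eur: float) -> float:
        """Upper bound of a GDPR fine: the higher of EUR 20 million
        or 4 percent of worldwide annual turnover."""
        return max(20_000_000.0, 0.04 * worldwide_annual_turnover_eur)

    # Illustrative turnover figures only:
    print(f"{max_gdpr_fine_eur(100_000_000):,.0f}")    # 20,000,000 -- flat ceiling applies
    print(f"{max_gdpr_fine_eur(2_000_000_000):,.0f}")  # 80,000,000 -- 4% of turnover applies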

The Policy Details of GDPR

The GDPR was written to ensure that organizations:

  • protect the personal data of ‘EU Natural Persons’ (i.e. living people);
  • are transparent, fair, and lawful about the processing of personal data;
  • only request and process necessary personal data;
  • do not share data with third parties or countries unless the correct legal agreements and processes are implemented; and
  • gain consent from data subjects to process their data.

Personal data is defined in the policy as “any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person.”

There are six principles that apply to the processing of personal data. According to the policy, personal data shall be:

  • processed lawfully, fairly, and in a transparent manner;
  • collected for specified, explicit, and legitimate purposes;
  • adequate, relevant, and limited to what is necessary;
  • accurate and, where necessary, kept up to date;
  • kept in a form which permits identification of data subjects for no longer than is necessary for the purposes for which the personal data are processed; and
  • processed in a manner that ensures appropriate security of the personal data.

Data subjects are provided with a set of legal rights under GDPR, including the rights to access, rectify, and erase their personal data, to restrict or object to its processing, and to data portability.

Each EU member state has a designated supervisory authority. These regulatory bodies are responsible for monitoring the application of GDPR, and have the power to audit organizations and to issue warnings, reprimands, and fines for violations. When breaches of personal data occur, companies will be subject to a high level of scrutiny, and will have only a 72-hour window to report the breach. A personal data breach is described as “a breach of security leading to the accidental or unlawful destruction, loss, alteration, unauthorized disclosure of, or access to, personal data transmitted, stored or otherwise processed.”

There is a requirement for some organizations to appoint a data protection officer (DPO), whose responsibility it is to advise and inform on GDPR and to monitor compliance within the organization. The DPO acts as the main contact for both data subjects and the supervisory authority, must report to the highest level of management within the organization, and cannot perform any tasks or duties which result in a conflict of interest.

You need to ensure your organization has fully investigated the nuances of the requirements to ascertain whether you need to appoint such a role or prepare to meet other personnel or technical demands.

Where do we start?

Your organization first needs to define the team that will drive GDPR compliance and management. Within the C-suite this should include the chief information officer and the chief information security officer, in addition to representatives from legal counsel, human resources, risk and compliance, and privacy. Determine if you need to appoint a DPO. Once your team is assembled, assess your current state, so that you can plan next steps accordingly. This team should present results at least to your board’s audit committee, if not the full board, given the financial and reputational risks involved.

Understand your personal data retention

You should ask your GDPR team the following questions to determine what categories of personal data your organization is dealing with:

  • To whom does the data you collect and retain pertain?
  • Is it necessary to collect and keep this data?
  • If so, how long do you need to keep it?
  • Do you have permission from the data subject to process the data?
  • How is consent obtained from data subjects for each method of personal data collection?

Encourage your team to follow personal data on its journey through and beyond the organization. Doing so will help the GDPR team understand how the data is collected, stored, transmitted, accessed, and secured, and where and how it is passed on to any third parties.

Review how your organization collects consent from individuals to process their personal data

EU citizens must be able to give and rescind consent for their personal data to be processed. Consent means “any freely given, specific, informed, and unambiguous indication of the data subject’s wishes by which he or she, by a statement or by a clear affirmative action, signifies agreement to the processing of personal data relating to him or her.”

In a contractual situation, the provision of a service may require personal data to be processed in order for the service to function correctly. In this case, this has to be made clear to the data subject when they register for the service.

Identify partner and supplier risk

Review third-party legal agreements to ensure that EU citizens’ personal data provided to a third party is handled in a compliant manner. Otherwise, your organization can be held accountable for a vendor’s data breach or data loss. If you process personal data on behalf of another organization, you will need to demonstrate your compliance with GDPR, and ensure your legal agreements reflect this accordingly.

Ensure your cybersecurity programs are up to par

Your security posture and processes impact the journey and security of personal data, and should be assessed accordingly. GDPR Article 32 stipulates that you must ensure a level of security appropriate to the risk involved with the data. This might require adjustments to your security program, especially if you have weighted your security setup to focus primarily on prevention and are lighter in the areas of detection and correction. Visibility across your ecosystem is vital for determining risk. Knowing your weak points will help you understand where to bolster your security, and testing out your processes will determine whether they are fit for purpose.

Get regular updates on progress and status

As individual reviews are completed, have each leader report back to the core and leadership teams with a set of prioritized actions and milestones. Set up a frequent cycle of reporting to understand the progress of your GDPR compliance status. The spring of 2018 is clearly too late to be finding problems.

In conclusion

If your organization employs, partners with, or serves people who are citizens of the European Union, you are subject to GDPR. Given the detailed stipulations of the regulation, along with the risk of steep fines, it is not something you can afford to ignore or postpone. As a board member, you’ll want to ensure the organizations you serve are prepared to meet the challenge and reduce the risk.

Corey E. Thomas is president and CEO of Rapid7. He is a director of Blue Cross Blue Shield of Massachusetts and the Greater Boston Chamber of Commerce.