Tag Archive: Nonfinancial Metrics

Getting the Right Cybersecurity Metrics and Reports for Your Board

In the 2017–2018 NACD Public Company Governance Survey, 22 percent of corporate directors said they were either dissatisfied or very dissatisfied with the quality of cybersecurity information provided by management.

We’re not surprised. In most cases, management still reports on cybersecurity with imprecise scorecards like red-yellow-green “heat maps,” security “maturity ratings,” and highly technical data that are out of step with the metric-based reporting that is common for other enterprise reporting disciplines.

Boards deserve better. We recognize that cybersecurity is a relatively young discipline, compared to others under the umbrella of enterprise risk management (ERM). But it’s not a special snowflake. Management can and should deliver reports that are:

  • Transparent about performance, with economically focused results based on easily understood methods.
  • Benchmarked, so directors can see metrics in the context of peer companies or the industry.
  • Decision-oriented, so the board can provide oversight of management’s decisions, including resource allocation, security controls, and cyber insurance.

While that level of reporting may still be aspirational for some companies, directors can drive their organizations forward by asking the following five questions, and demanding answers backed by the sorts of metrics and reports that we suggest below.

Before we get to the questions, there’s an overarching prerequisite for sensible reporting: Every key performance and risk indicator should be tracked against a target performance or risk appetite, respectively.

That means defining risk tolerances in an objective, clear, and measurable way—for instance, “our critical systems downtime should always be less than one percent”—so that an analyst’s gut feelings aren’t determining results.
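To make this concrete, here is a minimal sketch, in Python, of tracking one such indicator against its stated tolerance; the figures and threshold are our own illustration, not drawn from any company’s reporting:

```python
# A minimal sketch of checking a key risk indicator against an explicit,
# board-approved tolerance. All names and figures are illustrative.

def check_against_tolerance(metric_name: str, observed: float, tolerance: float) -> str:
    """Compare an observed indicator value to its stated risk tolerance."""
    status = "within appetite" if observed < tolerance else "tolerance breached"
    return f"{metric_name}: {observed:.2%} vs. tolerance {tolerance:.2%} ({status})"

# "Our critical systems downtime should always be less than one percent."
downtime_hours = 6.0        # hypothetical downtime this quarter
quarter_hours = 24.0 * 90   # hours in a 90-day reporting quarter
print(check_against_tolerance("Critical systems downtime",
                              downtime_hours / quarter_hours, 0.01))
```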

1. What is the threat environment that we face?

The chief information security officer or chief risk officer should paint a picture of the threat environment (cybercriminals, nation-states, malicious insiders, etc.) that describes what’s going on globally, in our industry, and within the organization. Examples of good metrics and reports include:

  • Global cyber-related financial and data losses
  • New cyber breaches and lessons learned
  • Trends in ransomware, zero-day attacks, and new attack patterns
  • Cyber threat trends from ISACs (information sharing and analysis centers)

2. What is our cyber-risk profile as defined from the outside looking in?

Boards should get cyber-risk assessments from independent sources. Useful sources of information include:

  • Independent security ratings of the company, benchmarked against peers
  • Third-party and fourth-party risk indicators
  • Independent security assessments (e.g., external consultants and auditors)

3. What is our cyber-risk profile as defined by internal leadership?

Management should provide assessments with tangible performance and risk metrics on the company’s cybersecurity program, which may include the following (a sketch of how the time-based metrics might be computed appears after the list):

  • NIST-based program maturity assessment
  • Compliance metrics on basic cyber hygiene (the five Ps): passwords, privileged access, patching, phishing, and penetration testing
  • Percentage of critical systems downtime and time to recover
  • Mean time to detect and remediate cyber breaches
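As referenced above, here is a minimal sketch of how the time-based metrics might be derived from incident records; the timestamps are hypothetical, and a real program would pull them from incident-response tooling:

```python
# A minimal sketch of computing mean time to detect (MTTD) and mean time
# to remediate (MTTR) from assumed incident records.

from datetime import datetime
from statistics import mean

# Each record: (intrusion began, intrusion detected, intrusion remediated)
incidents = [
    (datetime(2018, 3, 1, 8, 0), datetime(2018, 3, 3, 9, 30), datetime(2018, 3, 5, 17, 0)),
    (datetime(2018, 4, 10, 2, 15), datetime(2018, 4, 10, 14, 0), datetime(2018, 4, 12, 11, 45)),
]

mttd_hours = mean((detected - began).total_seconds() / 3600
                  for began, detected, _ in incidents)
mttr_hours = mean((remediated - detected).total_seconds() / 3600
                  for _, detected, remediated in incidents)

print(f"MTTD: {mttd_hours:.1f} hours; MTTR: {mttr_hours:.1f} hours")
```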

4. What is our cyber-risk exposure in economic terms?

Based on the company’s cyber-risk profile, the central question is: What is the company’s potential loss?

In the past 30 years, we have seen that question answered in economic terms in each and every risk discipline in ERM: interest rate risk, market risk, credit risk, operational risk, and strategic risk. Now we need to address that question for cyber risk. This expectation can also be found in the U.S. Securities and Exchange Commission’s new guidance on cybersecurity disclosures and its focus on quantitative risk factors.

The Factor Analysis of Information Risk (FAIR) methodology is a widely accepted standard for quantifying cyber value-at-risk. The FAIR model provides an analytical approach to quantify cyber-risk exposure and meet the heightened expectations of key stakeholders.
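To illustrate the flavor of such an analysis, here is a deliberately simplified, FAIR-inspired Monte Carlo sketch of annual loss exposure. The distributions and parameters are assumptions for illustration, not calibrated inputs, and the full FAIR model is considerably richer than this toy version:

```python
# Simulate annual loss as (loss event frequency) x (loss magnitude per
# event), then read off the expected loss and a value-at-risk percentile.

import numpy as np

rng = np.random.default_rng(42)
trials = 100_000

# Assumed loss event frequency: on average 0.8 loss events per year.
event_counts = rng.poisson(lam=0.8, size=trials)

# Assumed loss magnitude per event: lognormal with a roughly $1M median.
losses = np.array([
    rng.lognormal(mean=np.log(1e6), sigma=1.0, size=n).sum()
    for n in event_counts
])

print(f"Expected annual loss:        ${losses.mean():,.0f}")
print(f"95th-percentile annual loss: ${np.percentile(losses, 95):,.0f}")
```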

In the current environment, directors should demand more robust reporting on metrics such as:

  • Value of enterprise digital assets, especially the company’s crown jewels
  • Probability of occurrence and potential loss magnitude
  • Potential reputational damage and impact on shareholder value
  • Costs of developing and maintaining the cybersecurity program
  • Costs of compliance with regulatory requirements (e.g., the EU’s General Data Protection Regulation)

5. Are we making the right business and operational decisions?

Cyber is not simply a technology, security, or even risk issue. Rather, it is a business issue and a “cost of doing business” in the digital economy. On the opportunity side, advanced technologies and digital innovations can help companies offer new products and services, delight their customers, and streamline or disrupt the supply chain. Because cyber is a top strategic issue, management should provide the board with risk and return metrics that can support effective oversight of business and operational decisions, such as the following (a sketch of one such calculation appears after the list):

  • Risk-adjusted profitability of digital businesses and strategies
  • Return on investment of cybersecurity controls
  • Cyber insurance versus self-insurance
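As referenced above, here is one common way (our illustration, not a calculation prescribed by FAIR or the SEC) to express the return on a security control, comparing the reduction in annualized loss expectancy to the control’s cost; all dollar figures are assumed:

```python
# Return on a security control: net reduction in annualized loss
# expectancy (ALE), relative to the control's annual cost.

def security_roi(ale_before: float, ale_after: float, annual_cost: float) -> float:
    """Net risk reduction delivered per dollar spent on the control."""
    return (ale_before - ale_after - annual_cost) / annual_cost

# A control costing $500K per year that cuts expected annual loss
# from $3M to $1M returns three dollars of net risk reduction per dollar spent.
print(f"ROI: {security_roi(3_000_000, 1_000_000, 500_000):.0%}")  # ROI: 300%
```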

We believe the number should be zero when it comes to the percentage of directors dissatisfied with the cybersecurity information provided by management. Based on our own observations of the quality of cybersecurity reporting to boards, significant gaps remain. We hope our article will serve as a framework for directors and executives to discuss ways to close those gaps.

Do Nonfinancial Measures Have To Be Soft?

Published by Barry Sullivan and Seymour Burchman

In a recent Harvard Business Review article, Graham Kenny posits that nonfinancial measures should be included alongside financial measures in incentive plans. He goes on to say that this is leading companies to use both hard and soft performance measures, where ‘soft’ measures can be more subjective in nature. We wholeheartedly agree with the premise—so much so that we wonder if Kenny goes far enough, particularly where subjectivity is concerned. For many, however, subjectivity in this context implies an arbitrary assessment of performance against goals, based on the general sense of the board’s compensation committee. This interpretation rightly makes institutional and other investors uneasy. But does this really need to be the case, especially given the abundance of data in today’s digital age?


We think there are ways to structure subjectivity such that the compensation committee’s performance assessments and incentive determinations make sense against the backdrop of company performance. Moreover, by bringing a clear structure and hard information to the more subjective elements of the incentive system, performance assessments and incentive determinations become more explainable, more powerful internally, and more defensible externally. We suggest an approach that would work as follows.

Define the performance to be measured as precisely as possible. We have used nonfinancial measures with a wide range of clients, targeting a variety of strategic and operational areas. The goal in this step is to provide enough specificity at the board level that management can operationalize imperatives into specific, measurable key performance indicators (KPIs).

For example, one company was an end-to-end, integrated furniture manufacturer whose operations spanned product development, the supply chain, manufacturing, retail sales, installation, and after-sales service. This company viewed the improvement of total customer experience as a critical strategic imperative in an increasingly competitive industry.

Consider potential sources of objective evidence and data. Preferably using a cross-functional team, determine KPIs to be tracked, sources of the KPI information, and favorable and unfavorable outcomes for each KPI. The KPI data could be sourced from: internal management information systems; Internet sources such as social media sites; sensors, which are becoming ever more prevalent in household goods, vehicles, and industrial equipment; or tailored surveys conducted by or for the company.

For our furniture manufacturer, the board chose to use Net Promoter Score (NPS) as a key indicator. NPS could be benchmarked against key competitors and broader industry groups, and it tracked many of the key elements of total customer experience. The company then supplemented and validated this information with data collected from its own website and social media platforms, as well as a few key Internet and social media sites that track customer satisfaction. The company recognized that the quality of data on these external sites can be open to question, so composite information and judgment are needed when using them. Favorable results were considered to be in the upper quartile versus competitors, given the company’s premium pricing.
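For reference, NPS is conventionally computed from a 0–10 “how likely are you to recommend us?” question: respondents scoring 9–10 are promoters, 0–6 are detractors, and NPS is the percentage of promoters minus the percentage of detractors. A minimal sketch, with made-up survey responses:

```python
# Net Promoter Score from raw 0-10 survey responses (hypothetical data).

def net_promoter_score(responses: list[int]) -> float:
    promoters = sum(1 for r in responses if r >= 9)    # scores of 9-10
    detractors = sum(1 for r in responses if r <= 6)   # scores of 0-6
    return 100.0 * (promoters - detractors) / len(responses)

survey = [10, 9, 9, 8, 7, 10, 6, 9, 5, 10]  # hypothetical responses
print(f"NPS: {net_promoter_score(survey):+.0f}")  # NPS: +40
```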

Build a scorecard. Use the evidence and data sources identified in the prior step to build a scorecard that can be measured quantitatively or with highly structured discretion. Such a scorecard is a useful tool for communicating with employee-participants, as well as external stakeholders (generally after the fact, to safeguard the company from competitive harm).
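A minimal sketch of how such a scorecard might be structured and rolled up; the categories, weights, and scores below are invented for illustration:

```python
# A weighted scorecard rolled up into a single composite result.
scorecard = [
    # (KPI, weight, committee score on a 1-5 scale, evidence source)
    ("Net Promoter Score vs. peers", 0.40, 4, "NPS benchmarking survey"),
    ("Website/social sentiment",     0.30, 3, "Company site and social media"),
    ("Installation service ratings", 0.30, 5, "After-sales survey data"),
]

weighted_score = sum(weight * score for _, weight, score, _ in scorecard)
print(f"Composite scorecard result: {weighted_score:.2f} out of 5")
```

The composite result can then feed the bounded adjustment described in the next step.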

Put a range around the committee’s adjustments. Putting a fixed range around compensation adjustments makes the process more approachable and more manageable than open-ended discretion. It also communicates to participants the importance of nonfinancial metrics by virtue of their potential impact on overall awards. In this case, the company allowed for a +/- 25 percent adjustment to the award.
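A minimal sketch of that bounded adjustment, using the +/- 25 percent band from the example; the award amounts and adjustment percentages are hypothetical:

```python
# Clamp the committee's discretionary adjustment to the agreed band,
# then apply it to the formula-driven award.

def apply_committee_adjustment(formula_award: float, adjustment: float,
                               band: float = 0.25) -> float:
    """Bound the adjustment to [-band, +band] and apply it to the award."""
    adjustment = max(-band, min(band, adjustment))
    return formula_award * (1.0 + adjustment)

# A $400,000 formula-driven award with a +15 percent scorecard adjustment:
print(f"${apply_committee_adjustment(400_000, 0.15):,.0f}")  # $460,000
# A +40 percent request is clamped to the +25 percent cap:
print(f"${apply_committee_adjustment(400_000, 0.40):,.0f}")  # $500,000
```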

By using this approach, executives have a better sense of focus areas and needed behaviors—as in, they know the rules of the game. And the compensation committee is more fully ‘in the seat’ when it comes time to judge performance. The committee also knows the rules of the game and, more pointedly, it knows the potential impact, or swing, its discretion can drive in the incentive outcome. Where discretion is left unstructured, we often see committees shy away from hard choices, either out of concern that they lack the supporting information to make an informed decision or for fear of making too big an impact on the overall incentive outcome. Other important process points include transparency, regular reporting on progress, careful consideration of unintended consequences, and openness to experimentation (e.g., implementing the softer elements on a trial basis before including them in the formal incentive decision).

Consider the furniture manufacturer: the board and management built a program based on the principles outlined above that strikes the right balance of hard and soft performance measurement. And the softer, more subjective elements of the measurement system, by virtue of careful consideration and diligent information gathering, are anything but soft. Overall, the company’s incentive program is perceived as fair by employee-participants and investors alike. And these key stakeholders also applaud how the system makes clear the company’s strategy and priorities for execution.

Barry Sullivan is a managing director at Semler Brossy. Sullivan supports boards and management teams on issues of executive pay and company performance. He may be contacted at bsullivan@semlerbrossy.com.

Seymour Burchman is a retired managing director at Semler Brossy. Burchman, who has been an executive compensation consultant for over 30 years, has consulted on executive pay and leadership performance for over 40 S&P 500 companies. He may be contacted at sburchman@semlerbrossy.com.