Category: Risk Management

Avoid Deal Failure: Ask These Tough Questions Before Any Acquisition

Published by
Justin Johnson

It is easy to get caught up in the excitement of a deal—the unvarnished optimism of the corporate development team, the bullish spreadsheets from the bankers, the juicy steaks at the closing dinner. The numbers, however, don’t lie. It is estimated that at least half of all merger and acquisition (M&A) deals ultimately fail, destroying shareholder value for the acquirer instead of increasing it. A disciplined valuation analysis—ideally conducted with minimal involvement of the deal team and bankers—can help board members avoid unsuitable matches and support deals that are a good long-term fit.

A Synergistic Match

Assume your company has identified an acquisition target operating in your business and serving similar customers. Cost savings from the combination are expected as the result of an overlapping distribution network and because redundant production and administrative staff can be eliminated. This is a classic synergistic deal, where the acquirer boosts overall profit by adding the target’s revenue to its topline while eliminating many costs associated with achieving that revenue.

The first step in evaluating such a transaction is establishing the market value of the target without regard to buyer-specific synergies. While acquirers are usually most interested in the valuation of the combined company, there are good reasons for first establishing a baseline market valuation of the target on a stand-alone basis:

  • It gives the buyer insight into the valuation the target might expect to receive in the deal.
  • It provides a reference point the buyer can use to evaluate how much synergy it brings to the table.

Determining Baseline Value

There are several common approaches to deriving the market value of an acquisition target, and an acquirer should undertake as many of them as possible to establish a baseline valuation matrix. The two common techniques for publicly traded entities are straightforward: analyzing the target’s historical stock price and the premium at which its stock trades after the deal is announced. For our purposes, assume the target is not public and review the four valuation approaches commonly applied to private companies.

  1. One of the most common techniques is to reference the trading multiples of comparable publicly traded companies. Care is required in the selection of comparable public companies to ensure similarity of operations, size, and growth prospects with the target company.
  2. Another common method is to consider recent M&A deal multiples for similar companies. For this approach, make sure to distinguish between financial sponsor deals and strategic deals, as strategic deals frequently pay higher multiples due to acquirer-specific synergies. Value indications from both of these approaches are derived by applying observed market multiples to the target’s stand-alone earnings, typically earnings before interest, taxes, depreciation, and amortization (EBITDA).
  3. If a long-term forecast is available for the target, financial advisors sometimes use a discounted cash flow (DCF) analysis. It should be stressed, however, that this analysis is only as accurate as the underlying forecast, which may be suspect. For this reason, a DCF analysis often is underweighted—and sometimes omitted altogether—from a valuation exercise. Additionally, a “haircut” may be applied to the forecast itself before it is put into the model.
  4. Finally, if the target is likely to attract financial buyers, advisors may employ a leveraged buyout (LBO) analysis. This approach values the target by establishing what a financial buyer would be willing to pay for the company under the financing structure it might be expected to use—often a combination of debt and equity. If a company is underperforming its peers, the LBO model may also include some assumptions about reorganization and/or add-on acquisitions.

Once as many of the preceding approaches as practicable have been performed, financial advisors triangulate the various pricing indications to establish a baseline market valuation range for the target.
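To make the mechanics concrete, here is a minimal sketch of how observed multiples translate into value indications and a triangulated baseline range. Every EBITDA figure and multiple range below is a hypothetical assumption chosen for illustration, not market data, and the simple averaging at the end stands in for the judgment advisors actually apply when weighting the approaches.

```python
# Hypothetical illustration: apply observed EV/EBITDA multiples to the target's
# stand-alone earnings and triangulate a baseline value range.
# Every figure and multiple range below is an assumption, not market data.

target_ebitda = 50.0  # target's stand-alone EBITDA, $ millions (assumed)

# (low, high) EV/EBITDA multiples suggested by each approach -- assumed values
approaches = {
    "comparable public companies": (7.0, 8.5),
    "precedent M&A transactions":  (8.0, 10.0),
    "discounted cash flow":        (7.5, 9.0),   # often underweighted
    "leveraged buyout":            (6.5, 8.0),
}

# Convert each multiple range into a value indication for the target.
indications = {
    name: (low * target_ebitda, high * target_ebitda)
    for name, (low, high) in approaches.items()
}

for name, (low, high) in indications.items():
    print(f"{name:30s} ${low:,.0f}M - ${high:,.0f}M")

# Simple triangulation: average the low and high ends of the indications.
lows  = [low for low, _ in indications.values()]
highs = [high for _, high in indications.values()]
print(f"\nbaseline range: ${sum(lows)/len(lows):,.0f}M - ${sum(highs)/len(highs):,.0f}M")
```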

Establishing Pro Forma Value

The next step is assessing the value of the acquirer after the acquisition. This analysis is different from the market valuation analysis because it factors in synergies to show the value of the acquisition to that specific buyer. A word of caution: Board members should be wary of synergy projections from bankers or corporate development personnel who are emotionally or financially invested in the deal. Considering the stakes, engaging an outside advisor not connected to the prospective transaction to provide an independent valuation and estimate the potential synergies can be a sensible course of action.

No matter who is performing the pro forma analysis, a number of factors should be evaluated: the amount of expected synergies, the costs associated with realizing those synergies, the amount and type of purchase consideration, and the trading multiples for the acquirer’s stock.

Even for a disinterested third party, it is challenging to estimate synergies with accuracy, so it is prudent to perform a sensitivity analysis of the transaction’s impact on the acquirer’s share price. This is best revealed in a sensitivity table that varies both the amount of assumed synergies and the purchase consideration. Layering an additional variable into the sensitivity analysis—the estimated one-time integration costs incurred to achieve the synergies—can further enhance precision. These costs can be just as difficult to project as synergies, so a range of estimates is appropriate.
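A minimal sketch of such a sensitivity table follows. It assumes, purely for illustration, that run-rate synergies are capitalized at the acquirer's EV/EBITDA multiple and nets out the premium paid and one-time integration costs; every input is hypothetical.

```python
# Illustrative synergy/cost sensitivity table (all inputs are assumptions).
# Value created = capitalized synergies - premium paid - one-time costs.

acquirer_multiple = 9.0    # acquirer's EV/EBITDA multiple (assumed)
premium_paid = 120.0       # purchase price above stand-alone value, $M (assumed)

synergy_cases = [10.0, 15.0, 20.0]            # annual run-rate synergies, $M
integration_cost_cases = [10.0, 25.0, 40.0]   # one-time integration costs, $M

header = "synergies \\ costs".ljust(20)
header += "".join(f"{c:>10.0f}" for c in integration_cost_cases)
print(header)

for s in synergy_cases:
    row = f"{s:>17.0f}   "
    for c in integration_cost_cases:
        value_created = s * acquirer_multiple - premium_paid - c
        row += f"{value_created:>10.0f}"
    print(row)

# Positive cells mean the deal still adds value at that synergy/cost combination.
```

Read across the table: if only the cells nearest the top-right corner (high synergies, low costs) are positive, the deal depends on everything going right.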

The resulting sensitivity table can provide board members a powerful visual tool to understand how much it makes sense to pay at varying levels of synergy and costs. If the resulting analysis shows that a deal increases shareholder value—even if actual synergies realized are at the low end of expectations and one-time costs incurred to realize those synergies are at the high end—the deal likely will turn out well from the acquirer’s standpoint. An even better deal is one that increases shareholder value if synergies are below the low end of the estimated range and integration costs are above the high end.

Conversely, deals that are only accretive at or near the most favorable ends of the two ranges are likely to destroy shareholder value.

Other Impacts on Value

What about the impact of the type of purchase consideration on value? An acquisition can be financed with available cash, new debt, stock, or some combination of these. Debt financing will create a drag on future earnings in the form of interest expense, another cost of realizing synergies that must be considered. If acceptable to the seller, using stock may be advantageous to the buyer because it avoids that interest drag and preserves cash, although it dilutes existing shareholders and is most attractive when the acquirer’s shares trade at a healthy multiple.

A final factor to consider is the valuation multiple of the acquirer. If historically it has been somewhat volatile, it is a good idea to run a sensitivity analysis on the pro forma value of the stock, assuming a range of valuation multiples for the acquirer consistent with its recent trading history. The lower the valuation multiple, the lower the increase in value from transaction synergies.
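The two factors just discussed, financing mix and the acquirer's trading multiple, can be folded into one quick accretion check. The sketch below uses entirely hypothetical income, share-count, price, and cost-of-debt assumptions and a simplified P/E framing; it shows the shape of the analysis rather than a complete pro forma model.

```python
# Illustrative accretion/dilution check across financing mixes and acquirer
# multiples. All inputs are hypothetical assumptions, not company data.

acquirer_net_income = 200.0      # $M (assumed)
acquirer_shares = 100.0          # millions of shares (assumed)
target_net_income = 40.0         # target's stand-alone net income, $M (assumed)
synergies_after_tax = 12.0       # annual after-tax synergies, $M (assumed)
purchase_price = 600.0           # total consideration, $M (assumed)
after_tax_cost_of_debt = 0.04    # assumed

def pro_forma_eps(stock_pct, acquirer_pe):
    """EPS after the deal, given the % of price paid in stock and the acquirer's P/E."""
    acquirer_share_price = acquirer_pe * acquirer_net_income / acquirer_shares
    new_shares = (purchase_price * stock_pct) / acquirer_share_price
    interest_drag = purchase_price * (1 - stock_pct) * after_tax_cost_of_debt
    combined_income = (acquirer_net_income + target_net_income
                       + synergies_after_tax - interest_drag)
    return combined_income / (acquirer_shares + new_shares)

standalone_eps = acquirer_net_income / acquirer_shares
for pe in (12.0, 15.0, 18.0):           # range of acquirer multiples
    for stock_pct in (0.0, 0.5, 1.0):   # all debt, 50/50, all stock
        eps = pro_forma_eps(stock_pct, pe)
        change = (eps / standalone_eps - 1) * 100
        print(f"P/E {pe:>4.0f}, {stock_pct:>4.0%} stock: "
              f"pro forma EPS {eps:.2f} ({change:+.1f}%)")
```

Running the sketch shows the pattern the article describes: the lower the acquirer's multiple, the less value a stock-funded deal adds, while debt funding carries the interest drag regardless of the multiple.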

Know the Difference

Board members are unlikely to bless a strategic acquisition with the intent to destroy value. Yet, too often, that is exactly what ends up happening. A disciplined, thorough, and independent valuation analysis can make the difference in helping a board distinguish a suitable match from a bad one. After establishing both the market value of the target and its pro forma value to a particular acquirer, a buyer is well-positioned to negotiate and—if all goes well—finalize the deal.

Justin Johnson is co-CEO of Valuation Research Corp., where he sits on the firm’s board and is a member of the firm’s Private Equity Industry Group and Financial Opinions Committee. Prior to joining VRC, Johnson held positions with Arthur Andersen, Merrill Lynch, and PricewaterhouseCoopers.

The Role of Software Patches in Cyber-Risk Mitigation

Published by
Jim DeLoach

Equifax is not just another organization that was breached. The company was named one of Forbes’ “World’s 100 Most Innovative Companies” for three years straight, from 2015 to 2017. The recent breach of the company’s U.S. online dispute portal web application has raised serious questions about whether boards of directors and senior management are asking the right questions about actions their organizations are taking to protect themselves from cyberthreats. Are boards probing to discover what they don’t know?

In September, Equifax announced a massive breach exposing the personal information of over 40 percent of the U.S. population. The company’s stock declined almost 14 percent after the announcement, and heads rolled over the ensuing three weeks—first the chief information officer (CIO) and chief information security officer (CISO), and then the CEO. The pervasive headline effect of this incident has been as persistent as any in memory.

There are many important aspects of cybersecurity that the board is expected to tend to, including understanding what the organization’s “crown jewels” are, knowing which business outcomes management seeks to avoid, understanding the ever-changing threat landscape, and having an effective incident response program in place, to name a few.

But this discussion is more specifically about the systems vulnerabilities we know about. That’s the elephant in the room.

The sage advice—if your flank is exposed, fortify it before you get overrun—seems to apply here. Even noncombatants understand the value of protecting exposed flanks in desperate battle. A known vulnerability is most certainly an exposed flank, particularly when sensitive data is involved.

Enter the role of software patches.

A patch is a software update installed into an existing program to fix new security vulnerabilities and bugs, address software stability issues, or add a new feature to improve usability or performance. Often a temporary fix, a patch is essentially a quick repair. While it’s not necessarily the best solution to address the problem, it gets the job done until product developers design a better solution for a subsequent product release.

The Equifax incident raises the question as to why the company didn’t implement the appropriate patch to its systems when the vulnerability was first identified. To be fair, other companies have suffered a cybersecurity event because they failed to implement a patch in a timely manner, and we have no insights into the unique circumstances at Equifax. Admittedly, patching software at a large organization with multiple, complex systems takes a considerable amount of time. But, for boards and executive teams everywhere, the Equifax episode serves as a stark reminder of the importance of understanding the company’s cybersecurity strategy and tactics to pinpoint whether they know what they need to know.

Often, in our security and privacy consulting business at Protiviti, we see companies implementing patches within 60 to 90 days of discovering a systems vulnerability. We have seen some high-risk patches not applied at all for fear of breaking legacy applications; in effect, the organization simply accepts the risk of not applying these patches and, as an alternative, works to mitigate it. Based on our experience, 30 days from release to deployment is typically the “gold standard” for the time it takes to apply a patch.

Is the gold standard enough? Companies are essentially leaving themselves exposed for 30 days. Meanwhile, they may lack the advanced detection and response capabilities to detect unauthorized activity occurring during that time. Organizations with a well-designed vulnerability management program quickly patch known vulnerabilities for critical public-facing services. For example, we see companies setting service level agreement targets of 72 hours, with some striving for 24 hours or less to limit the damage of an attack.

Simply stated, boards need to inquire as to the target duration from release to deployment to shore up cybersecurity vulnerabilities and, if it’s 30 days (or more), question whether that is timely enough, especially when public-facing systems are involved and sensitive personal information is exposed. Today’s optics regarding egregious security breaches, corporate stewardship expectations, and the related impact on reputation and brand image cry out for this oversight.

It is vitally important to scan public-facing systems immediately upon notification of critical vulnerabilities; “same day” should be the target. In addition, patch deployment should be tracked and verified as part of a comprehensive information technology (IT) governance process. It’s not enough to merely push out a patch. A comprehensive IT governance process should confirm that the risk truly has been mitigated on a timely basis.
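To make the “tracked and verified” point concrete, here is a minimal sketch of the kind of patch-SLA report an IT governance process might produce. The systems, dates, and SLA targets are hypothetical assumptions, not a representation of Protiviti’s or any vendor’s methodology.

```python
# Minimal, hypothetical sketch of a patch-SLA tracking report.
# Systems, dates, and SLA targets below are assumptions for illustration.

from datetime import date

# Target days from vendor release to verified deployment, by tier (assumed).
SLA_DAYS = {"critical-public-facing": 3, "high": 30, "medium": 60}

patches = [
    # (system, tier, vendor release date, verified deployment date or None)
    ("customer-portal", "critical-public-facing", date(2018, 3, 7),  None),
    ("hr-intranet",     "high",                   date(2018, 3, 1),  date(2018, 3, 20)),
    ("reporting-db",    "medium",                 date(2018, 1, 15), None),
]

as_of = date(2018, 3, 26)  # reporting date (assumed)

for system, tier, released, deployed in patches:
    elapsed = ((deployed or as_of) - released).days
    status = "verified" if deployed else "OPEN"
    sla = "SLA BREACHED" if elapsed > SLA_DAYS[tier] else "within SLA"
    print(f"{system:16s} {tier:24s} {elapsed:3d} days  {status:9s} {sla}")
```

A report like this gives the board a simple answer to the question above: how long known vulnerabilities stay open, and whether the fix was actually verified rather than merely pushed.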

Directors and executives should also be concerned with how long significant breaches persist before they are finally detected. Our experience is that detective and monitoring controls remain immature across most industries, resulting in continued failure to detect breaches in a timely manner. Given the increasing sophistication of perpetrators, simulations of likely attack activity should be performed periodically to ensure that defenses can detect a breach and security teams can respond in a timely manner.

We know that an organization’s preparedness to reduce an incident’s impact and proliferation after it begins is an issue (i.e., the elapsed time between the initiation of an attack and its detection is too long). Often, it takes over 100 days until suspicious activity is discovered; about 50 percent of the time, organizations learn of breaches through a third party.

In nearly every penetration test Protiviti conducts, the client authorizing the test fails to detect our test activity. Many organizations seem to think that if they outsource to a managed security service provider (MSSP), the problem will be solved—as if a box has been checked. However, we see time and again that this is not the case. Often, there are breakdowns in the processes and coordination between the company and the MSSP that result in attack activity going unnoticed. Not many organizations are focusing enough on this failure of detective controls to identify breach activity in a timely manner.

These two fronts—how long it takes to implement a patch and how long it takes to detect a breach—inform the board’s cyber-risk oversight. Every organization should take a fresh look at the impact specific cybersecurity events can have and whether management’s response plan is properly oriented and sufficiently supported. For starters, directors should ensure they are satisfied with the elapsed time (a simple tracking sketch follows the list below):

  • For patching identified system vulnerabilities;
  • Between the initiation of an attack and its ultimate discovery;
  • Between the discovery of a security breach and the initiation of the response plan to reduce its proliferation and impact; and
  • Between the discovery of a significant breach and the undertaking of the required disclosures to the public, regulators, and law enforcement in accordance with applicable laws and regulations.
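As a purely illustrative sketch, the dates below are invented and not drawn from any actual incident, these four durations can be computed from a simple incident timeline and reported to the board alongside the patch-SLA view above.

```python
# Hypothetical incident timeline used to compute the four elapsed times listed above.
# All dates are invented for illustration.

from datetime import date

timeline = {
    "vulnerability_disclosed": date(2018, 1, 10),
    "patch_verified":          date(2018, 2, 12),
    "attack_began":            date(2018, 1, 25),
    "breach_discovered":       date(2018, 4, 30),
    "response_initiated":      date(2018, 5, 2),
    "public_disclosure":       date(2018, 6, 15),
}

def days_between(start, end):
    return (timeline[end] - timeline[start]).days

print("patching identified vulnerability:", days_between("vulnerability_disclosed", "patch_verified"), "days")
print("attack initiation to discovery:   ", days_between("attack_began", "breach_discovered"), "days")
print("discovery to response initiation: ", days_between("breach_discovered", "response_initiated"), "days")
print("discovery to required disclosures:", days_between("breach_discovered", "public_disclosure"), "days")
```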

Today’s optics regarding egregious security breaches, corporate stewardship expectations, and the related impact on reputation and brand image beg for careful oversight.

Boards Can Do More to Align on Cybersecurity

Published by

Organizational cybersecurity is one of the biggest challenges facing companies today. The most recent in a string of headline-grabbing data breaches involved U.S. credit-reporting company Equifax, an event that exposed the private information of some 143 million customers. Grilled on Capitol Hill about the episode, Equifax’s chair and CEO said that “mistakes were made” in the company’s response to the attack, which has prompted dozens of private lawsuits and precipitated a drop in the company’s share price.

As corporate directors are ultimately responsible for their companies’ future, the urgency to address cyber risk is accelerating. There is general agreement across the C-suite that cyber risk is a top priority, according to a recent Marsh global survey regarding corporate cyber risk perception. But survey results also revealed that there is less alignment inside companies regarding how cyber risk is reported to corporate directors and about what is most important.

The Information Disconnect Between Board and C-Suite

When survey respondents were asked what type of reporting on cyber risk the board of directors received, something surprising surfaced. For every type of report we asked about, respondents who indicated they were corporate directors said they received far less information than respondents from the C-suite said they were supplying to directors.


For example, only 18 percent of surveyed directors said they received information about cybersecurity investment initiatives. Yet 47 percent of chief risk officers, 38 percent of chief technology or information officers, and 53 percent of chief information security officers said they were already providing reports to board members on investment initiatives.

Whether it’s optimizing risk finance through insurance or other resiliency measures, such investment initiatives are critical to preparing for an attack as well as to managing an incident. Organizations need to ensure that board members are receiving—and carefully reviewing—this vital information.

Tellingly, corporate directors say the type of cyber risk reporting they most often receive consists of briefings on “issues and events experienced.” It’s clearly important for any corporate director to learn about cybersecurity incidents the company has faced, but such briefings are inherently after the fact. Boards should be concerned when most of the material they receive concerns events that have already happened, because that reporting does little to shape preparedness for the next incident.


The survey’s C-suite respondents listed “cyber program investment initiatives” as the type of reporting their boards were most likely to be receiving. But with fewer than one-in-five corporate directors saying they received such reports, there is an issue that needs to be addressed, especially given that understanding—and directing—corporate investment in cybersecurity is a key to building effective resiliency measures.

No Incident Can Be Completely Avoided

Many boards seem to focus their oversight on security activities rather than on resiliency best practices. For example, a high number of corporate directors in our survey said their organization did not have a cybersecurity incident response plan. Why? The top reason cited was that “cybersecurity/firewalls are adequate for preventing cyber breaches.” C-suite respondents did not share the same view.


As firms of all sizes and across geographies have fallen prey to attacks, the belief that one can have enough defenses in place to completely avoid a cybersecurity incident has been widely debunked by real-world events. Thus, the mantra among the organizations with the most sophisticated cyber-risk management programs is: “It’s not a matter of if you will be breached, but when.”

Cyber threats are constantly evolving and the potential threat actors are multiplying. No organization is impenetrable, no matter how strong its security posture may be.

Strong Companies Are Already Preparing for GDPR

One of our key findings regarding corporate readiness involves the lead-up to the EU’s General Data Protection Regulation (GDPR), which is scheduled to take effect in May 2018.

We found that companies that are already preparing for GDPR are doing more to address cyber risk overall than those that have yet to start planning. Survey respondents who said their organizations were actively working toward GDPR compliance—or felt that they were already compliant—were three times more likely to adopt overall cybersecurity measures and four times more likely to adopt cybersecurity resiliency measures than those that had not started planning for GDPR. This is happening despite the fact that the GDPR is not a prescriptive set of regulations with a defined checklist of compliance activities. Instead, GDPR preparedness appears to be both a cause and a consequence of overall cyber-risk management strength.

The most forward-looking corporate boards recognize the GDPR compliance process as an opportunity to strengthen their organizations’ overall cyber risk management posture on a much broader level, effectively transforming regulations that might previously have been viewed as a constraint into a new competitive advantage.

The lesson here—even for directors of organizations not subject to the GDPR—is that good cyber-risk oversight requires engaging on a number of fronts, both defensive and responsive. Whether it’s playing an active role in attracting highly-skilled talent, seeking cross-functional enterprise alignment on priorities, or viewing regulatory compliance as part of a holistic plan, an engaged board can make the critical difference in how a company assesses, reports on, and addresses the impact of cyber risk on the company.

To receive a copy of Marsh’s report, GDPR Preparedness: An Indicator of Cyber Risk Management, click here.