Why Involve a Financial Expert in Divorce Mediations

Most family law cases settle at mediation or prior to trial. Tennessee, for example, requires that parties attempt to settle their cases at mediation before a trial date is granted. Given these facts, when should a family law attorney involve a financial expert in divorce mediation?

Most family law cases that require the use of a financial expert share some combination of the following: a high-dollar marital estate, complex financial issues, one or more business valuations, and/or the need to trace and classify certain marital and separate assets. Of the family law cases that settle at mediation, most involve motivated parties with experienced attorneys who have entered the mediation process properly organized and prepared to negotiate the various financial and parental aspects of the case.

How a Financial Expert Can Assist a Family Law Attorney and Client at Mediation

Depending on numerous factors, attorneys may ask their financial expert to attend the full mediation or only a particular session; in other cases, the expert is simply asked to be available by telephone should issues arise. While involving your financial expert in this way can be costly, a talented expert benefits both the client and the overall process and aids in its success. This author has participated in divorce mediations as a financial expert many times over the years and, as a result, has identified five ways a financial expert can be helpful to a family law attorney and client during mediation.

Communicates Complex Financial Theory in an Understandable Way

Your financial expert may have performed a business valuation that resulted in a report or some other communication of value conclusions. An experienced financial expert who can communicate those conclusions and other complex financial issues clearly and understandably to the client and the mediator is a priceless asset for your team. Because of this, during mediation that expert’s role often evolves from valuation vendor to trusted advisor. The mediation process can be lengthy and includes significant downtime during which the attorney, client, and financial expert sit around the table together. It is during this time that the financial expert truly becomes a trusted advisor to the client and the attorney by providing data and expertise to assist in the decision-making process.

Defends the Business Valuation

If a case involves a business valuation, there is usually contention around the value of the business. Often, valuation experts from each side are present at mediation and have the opportunity to speak with each other regarding their assumptions and their disagreements on conclusions. A good financial expert quantifies and explains the key differences between the valuations for the mediator, helping to bridge the gap in negotiations.

Helps with Asset Division

Property division is one of the crucial issues that must be resolved for a mediation to be successful. Property division is often thought of as a puzzle: pieces are put together based on value, transferability, and each party’s motivation or desire to own certain assets. While the attorneys have compiled the marital estate, a competent financial expert assists with real-time decision-making as variables change through the use of a dynamic model of the marital estate. The flexibility of a dynamic model allows assets and liabilities to be shifted from one party’s column to the other, or an equalization payment to be calculated when certain items are illiquid or cannot be transferred.
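
As a purely illustrative sketch of such a dynamic model (the assets, values, and party names below are hypothetical and not drawn from any actual case), a few lines of Python can re-total each party’s column and compute an equalization payment as assets are shifted:

```python
# Hypothetical marital estate: asset -> (value, party currently assigned)
estate = {
    "Residence":         (450_000, "Spouse A"),
    "Business interest": (600_000, "Spouse A"),
    "Brokerage account": (300_000, "Spouse B"),
    "401(k)":            (250_000, "Spouse B"),
}

def totals(estate):
    """Sum the value assigned to each party."""
    sums = {}
    for value, party in estate.values():
        sums[party] = sums.get(party, 0) + value
    return sums

def equalization_payment(estate, target_split=0.5):
    """Payment from the over-allocated party needed to hit the target split."""
    sums = totals(estate)
    total = sum(sums.values())
    payor = max(sums, key=sums.get)
    payment = sums[payor] - target_split * total
    return payor, payment

# Shift an illiquid asset and recompute in real time
estate["Brokerage account"] = (300_000, "Spouse A")
payor, payment = equalization_payment(estate)
print(totals(estate))           # {'Spouse A': 1350000, 'Spouse B': 250000}
print(payor, round(payment))    # Spouse A 550000
```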

Provides Insight into Alimony Calculations

While financial experts don’t generally determine actual alimony amounts, they can assist clients and attorneys in understanding the amount, structure, and time value of the proposed alternatives. Often clients look for clarity on the amount, either from the viewpoint of the payor (Can I afford to pay this monthly amount?) or from the viewpoint of the payee (Can I live on this monthly amount?). Some alimony structures also include accelerated amounts or prepayment of the entire obligation. A financial expert aids the decision-making by providing time value of money calculations that help the client weigh those alternatives on a comparable basis.
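
For example, a simple present value comparison (the payment amount, term, lump sum, and discount rate below are hypothetical and chosen only for illustration) might look like the following:

```python
def present_value_of_alimony(monthly_payment, years, annual_discount_rate):
    """Present value of a level monthly alimony stream (ordinary annuity)."""
    r = annual_discount_rate / 12   # monthly discount rate
    n = years * 12                  # number of monthly payments
    return monthly_payment * (1 - (1 + r) ** -n) / r

# Hypothetical alternatives: $5,000/month for 8 years vs. a $400,000 lump sum today
pv_stream = present_value_of_alimony(5_000, 8, 0.05)
print(round(pv_stream))   # ~395,000 -> roughly comparable to the lump sum
```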

Performs Separate/Marital or Retirement Calculations

Financial experts often assist attorneys by performing tracing analyses and calculations to determine and/or quantify the separate and marital portion of certain assets. Assets often subject to dispute are retirement accounts that were owned prior to marriage. Some states, like Tennessee, recognize not only the balance of such accounts at the date of marriage, but also the appreciation of that amount during the marriage as separate assets. Financial experts are often asked to perform and explain these calculations at mediation to protect the integrity of the separate portion of those assets.
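
A highly simplified sketch of such a calculation appears below; an actual tracing follows the account’s statements and contribution history, and the balances and growth assumption here are hypothetical:

```python
def separate_and_marital_portions(premarital_balance, current_balance,
                                  account_growth_factor):
    """
    Split a retirement account under the approach described above: the
    date-of-marriage balance, plus appreciation on that amount during the
    marriage, is treated as separate; the remainder is marital.
    `account_growth_factor` is cumulative investment growth over the marriage
    (e.g., 1.8 means the account's investments grew 80% in total).
    """
    separate = premarital_balance * account_growth_factor
    marital = max(current_balance - separate, 0)
    return separate, marital

# Hypothetical: $100,000 at the date of marriage, $500,000 today,
# investments grew 80% in total over the marriage
separate, marital = separate_and_marital_portions(100_000, 500_000, 1.8)
print(round(separate), round(marital))   # 180000 320000
```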

Conclusion

While the costs of mediation may be high, they pale in comparison to the costs of going to trial. Some states, including Tennessee, already require that parties attempt mediation, so why not head into mediation organized, prepared, and ready to do business? Consider involving a financial expert, directly or indirectly, to assist in that process; the chances of settlement will only improve.

 

Goodwill Impairment Testing in Uncertain Times

The economic impact from the COVID-19 pandemic has been swift and unexpected. Just a few short weeks ago, the S&P 500 was at an all-time high and goodwill impairments were not a serious concern for most companies. However, between mid-February and the end of March, the S&P 500 declined by 25%. The Russell 2000 fell nearly 32% over the same period, and the negative shock to certain companies and sectors has been much worse.

Most financial professionals understand that goodwill impairment testing is typically performed annually, usually near the end of a company’s fiscal year. In fact, many companies just completed an impairment test as of year-end 2019. But the unprecedented events precipitated by the COVID-19 pandemic now raise questions about whether an interim goodwill impairment test is warranted.

Do I Need an Impairment Test?

The accounting guidance in ASC 350 provides that interim goodwill impairment tests may be necessary in the case of certain “triggering” events. For public companies, perhaps the most easily observable triggering event is a decline in stock price, but other factors may constitute a triggering event. Further, these factors apply to both public and private companies, including private companies that have previously elected to amortize goodwill under the private company accounting alternative (ASU 2014-02).

For interim goodwill impairment tests, ASC 350 notes that entities should assess relevant events and circumstances that might make it more likely than not that an impairment condition exists. The guidance provides several examples, including the following:

  • Changes in the macroeconomic environment, such as a deterioration in general economic conditions
  • Limitations on accessing capital, fluctuations in foreign exchange rates, or other developments in equity and credit markets
  • Industry and market considerations such as a deterioration in the environment in which an entity operates or an increased competitive environment
  • Declines in market-dependent multiples or metrics (consider in both absolute terms and relative to peers)
  • Changes in the market for an entity’s products or services, or a regulatory or political development
  • Cost factor considerations such as increases in raw materials, labor, or other costs that have a negative effect on earnings and cash flows
  • Overall financial performance such as negative or declining cash flows or a decline in actual or planned revenue or earnings compared with actual and projected results of relevant prior periods
  • Entity-specific events (changes in management or key customers, contemplation of bankruptcy, adverse litigation or regulatory events)
  • Changes in the carrying amount of assets at the reporting unit including the expectation of selling or disposing certain assets
  • If applicable, a sustained decrease in share price (considered both in absolute terms and relative to peers)

The examples above are not all-inclusive and entities should consider other relevant events and circumstances that might affect the fair value or carrying amount of a reporting unit. An entity should place more weight on the events and circumstances that most affect a reporting unit’s fair value or the carrying amount of its net assets. The guidance notes that an entity should also consider positive and mitigating events and circumstances that may affect its conclusion. If a recent impairment test has been performed, the headroom between the recent fair value measurement and carrying amount could also be a factor to consider.

How an Impairment Test Works

Once an entity determines that an interim impairment test is appropriate, a quantitative “Step 1” impairment test is required. Under Step 1, the entity must measure the fair value of the relevant reporting units (or the entire company if the business is defined as a single reporting unit). The fair value of a reporting unit refers to “the price that would be received to sell the unit as a whole in an orderly transaction between market participants at the measurement date.”

For companies that have already adopted ASU 2017-04, the legacy “Step 2” analysis has been eliminated, and the impairment charge is calculated as simply the difference between fair value and carrying amount. Under the old framework, an additional “Step 2” analysis was performed and the impairment charge was based on the amount by which carrying amount exceeded the implied value of goodwill.
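
The resulting calculation is mechanically simple, as the sketch below illustrates (the reporting unit figures are hypothetical; under the updated guidance the charge is also limited to the goodwill allocated to the reporting unit):

```python
def goodwill_impairment_charge(carrying_amount, fair_value, goodwill_balance):
    """
    Simplified post-ASU 2017-04 calculation: the charge equals the amount by
    which a reporting unit's carrying amount exceeds its fair value, capped at
    the goodwill allocated to that reporting unit.
    """
    shortfall = max(carrying_amount - fair_value, 0)
    return min(shortfall, goodwill_balance)

# Hypothetical reporting unit ($ millions): carrying amount 500, fair value 440,
# allocated goodwill 80
print(goodwill_impairment_charge(500, 440, 80))   # 60 -> a $60 million charge
```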

ASC 820 provides a framework for measuring fair value which recognizes the three traditional valuation approaches: the income approach, the market approach, and the cost approach. As with most valuation assignments, judgment is required to determine which approach or approaches are most appropriate given the facts and circumstances. In our experience, the income and market approaches are most commonly used in goodwill impairment testing. In the current environment, we offer the following thoughts on some areas that are likely to draw additional scrutiny from auditors and regulators.

  • Are the financial projections used in a discounted cash flow analysis reflective of recent market conditions? What are the model’s sensitivities to changes in key inputs?
  • Given developments in the market, do measures of risk (discount rates) need to be updated?
  • If market multiples from comparable companies are used to support the valuation, are those multiples still applicable and meaningful in the current environment?
  • If precedent M&A transactions are used to support the valuation, are those multiples still relevant in the current environment?
  • If the subject company is public, how does its current market capitalization compare to the indicated fair value of the entity (or sum of the reporting units)? What is the implied control premium and is it reasonable in light of current market conditions?

At a minimum, we anticipate that additional analyses and support will be necessary to address these questions. The documentation from an impairment test at December 31, 2019 might provide a starting point, but the reality is that the economic landscape has changed significantly in the last three months.

Concluding Thoughts

Not all industries have been impacted in the same way and there will certainly be differences between companies. For public companies, it can be difficult to ignore the significant drop in stock prices and the implications that this might have on fair value. For private businesses, even if a triggering event has not arisen yet, the deteriorating economic environment may just push the triggering factors into the second or third quarter of the year.

At Mercer Capital, we have experience in implementing both the qualitative and quantitative aspects of interim goodwill impairment testing. To discuss the implications and timing of triggering events, please contact a professional in Mercer Capital’s Financial Statement Reporting Group.

Family Culture And Dividend Policy

A presentation by Mercer Capital’s Travis W. Harms, CFA, CPA/ABV, providing an overview of the economic benefits of owning shares in a family business.

Community Bank Valuation (Part 5): Valuing Controlling Interests

To close our series on community bank valuation, we focus on concepts that arise when evaluating a controlling interest in another bank, such as arises in an acquisition scenario.  While the methodologies we described with respect to the valuation of minority interests in banks have some applicability, the M&A marketplace has developed a host of other techniques to evaluate the price to be paid, or received, in a bank acquisition.

In the Valuing Minority Interests segment of this series, we discussed that valuation is a function of three variables:  a financial metric, risk, and growth.  From a buyer’s standpoint, the ultimate goal of a transaction, of course, is to enhance shareholder value, which would occur if the target entity can, on balance, enhance (or at least not detract from) the buyer’s financial metrics, risk, and growth.  This can be achieved in several ways:

  • The direct earnings contribution of the target, or the accretion to the buyer’s earnings per share if the consideration consists of the buyer’s stock. In a bank M&A scenario, this accretion often derives from cost savings resulting from eliminating duplicative branches, back office functions, and the like.
  • An acquisition can provide diversification benefits, such as different types of loans, additional geographic markets, or new funding sources. If these characteristics of the target reduce any concentrations held by the buyer, the acquirer’s overall risk may lessen.  However, numerous buyers have regretted entering lines of business or new markets via acquisition with which the buyer’s management team lacked the requisite familiarity.
  • Accessing new markets or business lines through acquisition gives the buyer more “looks” at new customers and transactions. For many banks, moving the needle on asset size or growth means looking beyond existing markets or products, and the needle moves faster with an acquisition strategy than with a de novo market expansion strategy.

These benefits are not without risks, though.  Some of the more significant acquisition risks include:

  • Credit surprises. One or two unexpected losses usually do not affect the underlying rationale for a transaction, although they may create some uncomfortable conversations with investors regarding the buyer’s due diligence process.  A more significant risk is that the buyer’s risk tolerance differs from the seller’s approach, leading to a potentially significant disruption to future revenues as risk appetites are synchronized.  However, credit surprises often cannot be detached from the prevailing economic environment.  In a post mortem, many transactions closed in the 2006 time frame look ill-advised given the subsequent financial crisis.  Ultimately, factors outside the buyer’s control may have the most impact on post-transaction credit surprises.
  • Cultural incompatibility. While sometimes difficult to detect from the outside, differences small and large between the cultures of the buyer and target can jeopardize the anticipated post-merger benefits.  More often than not, this is manifest in personnel issues.  Mergers are like chum in the water to competitors; buyers can expect competitors to look for any opening to attract personnel from the target bank.

Similarities to Valuations of Minority Interests

The previous installment of this series introduced the comparable company and discounted cash flow methods to bank valuations.  Both of these methods remain relevant in assessing a controlling interest in a bank, meaning an interest of sufficient size to dictate the direction of the bank.  Most often, controlling interest valuations arise in the context of an acquisition.

Comparable Transactions Method

In a controlling interest valuation, the comparable company method can be used.  However, the resulting values often would be adjusted by a “control premium,” which is measured by reference to historical M&A transaction values relative to publicly-traded sellers’ pre-deal announcement stock prices.  This approach has the advantage of tying the controlling interest valuation to current market conditions, a linkage that the comparable transactions approach described below can lack.
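
As a stylized sketch of that adjustment (the per-share value and the premium below are hypothetical, not drawn from any particular deal set), the calculation is simply:

```python
def control_value_from_minority(minority_value_per_share, control_premium):
    """
    Adjust a freely traded (minority) value indication to a controlling
    interest level using an observed control premium, e.g. the median premium
    paid over sellers' pre-announcement stock prices in comparable deals.
    """
    return minority_value_per_share * (1 + control_premium)

# Hypothetical: comparable company method indicates $25.00 per share on a
# minority basis; observed deal premiums suggest a 30% control premium
print(control_value_from_minority(25.00, 0.30))   # 32.5
```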

More often, though, the comparable company method morphs into the comparable transactions method in an M&A setting.  Comparable M&A transactions can be identified by reference to geography, asset size, performance, time period, and the like.  Ideally, the transactions would be announced close to the date of the analysis; however, narrowly defining the financial or geographic criteria may mean accepting transactions announced over a longer time period.  The computation of pricing multiples, such as price/earnings or price/tangible book value, is facilitated by the widespread availability of data regarding targets and the straightforward deal structures that usually allow analysts to identify the consideration paid to the sellers.  That is, contingent consideration, like earn-outs, is rare.  However, deal values are not always publicly reported for transactions involving privately-held institutions.

While the comparable transactions approach is intuitive – it measures what other buyers paid for other entities in an industry with thousands of relatively homogeneous participants – its most significant limitation stems from market volatility.  Buyers’ ability to pay is correlated with their stock prices, and most bank M&A transactions include a stock component.  Deals struck at a certain price when bank stocks traded at 16x earnings would not occur at that same price if bank stocks trade at 12x earnings without crushing dilution to the buyer.  Thus, prices observed in bank M&A transactions need to be viewed in light of the market environment existing at the transaction announcement date relative to the valuation date.

Discounted Cash Flow Method

We introduced the discounted cash flow method as a forward-looking approach to valuation reliant upon a projection of future performance.  In an M&A scenario, buyers usually start with the target’s stand-alone forecast, unaffected by the merger.  Acquirers then add layers to the forecast reflecting the impact of the transaction, such as:

  • Expense savings. In a mature industry, realization of cost savings typically is a significant contributor to the transaction economics, with buyers often announcing cost savings equal to 30% to 40% of the target’s operating expenses.  These are derived primarily from eliminating duplicative branches, back office functions, and the like.  As the expense savings estimates increase, there often is a rising risk of customer attrition, with cuts going beyond the back office into activities more noticeable to customers, like branch hours or staffing.

While buyers may expect a certain level of expense savings, it is not clear that buyers “credit” the seller with all of the expense savings the buyer takes the risk of achieving.  That is, the risk of achieving the expense savings effectively is split between the buyer and seller, with the favorability of the split in one direction or the other dictated by the negotiating power of the buyer and seller.

  • Revenue enhancements. Buyers may expect some revenue enhancements to occur from the transaction, such as if the buyer has a more expansive product suite than the target or a higher legal lending limit.  However, buyers often are loath to include these in transaction modeling, and revenue enhancements are seldom reported as a driver of the EPS accretion expected from a transaction.
  • Accounting adjustments. While fair value marks on assets acquired and liabilities assumed should not drive the economics of a transaction, they can affect the near-term earnings generated by the pro forma entity.  Therefore, buyers usually are keenly aware of the accounting implications of a transaction.

One advantage of a discounted cash flow approach is that it allows the buyer to evaluate, for a given price, the level of earnings contribution needed from the target to justify that price.  As a statistics professor of mine was fond of saying, if you torture the numbers long enough they will confess to anything; buyers should not lose sight of the reality of implementing the modeled business strategies.

Additional Considerations

While the comparable transactions and discounted cash flow methods cross over – no pun intended with respect to another technique we describe below – from the minority interest valuation environment, several valuation techniques are unique to M&A scenarios.

Tangible Book Value Earn-Back

After the financial crisis, investors became focused on the tangible book value per share earn-back period, sometimes to the point of seemingly ignoring other valuation metrics.  There are several ways to compute this, but the most common is the “crossover” method.  This requires two forecasts:

  • The buyer’s tangible book value per share, absent the acquisition
  • The buyer’s pro forma tangible book value per share with the target

The analyst then calculates the number of periods between (a) the current date and (b) the date in the future when pro forma tangible book value per share exceeds stand-alone tangible book value per share.  Ultimately, the earn-back period is driven by factors like:

  • The price/earnings or price/tangible book value multiples of the buyer’s stock relative to the multiples implied by the transaction value
  • The extent of the merger cost synergies

The tangible book value earn-back method also exacts a penalty for deal-related charges, as a higher level of deal charges extends the earn-back period.  From an income statement standpoint these charges often are treated as non-recurring and, in a sense, neutral to value.  However, these charges represent a real use of capital, which the TBV earn-back approach explicitly captures.
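
As a minimal sketch of the crossover calculation described above (the forecasts below are hypothetical and assume the deal is initially dilutive to tangible book value per share):

```python
def tbv_earnback_years(standalone_tbvps, pro_forma_tbvps):
    """
    Crossover method: the forecast year in which pro forma tangible book value
    per share first meets or exceeds the standalone forecast. Both inputs are
    lists of year-end TBV per share, year 1 onward.
    """
    for year, (standalone, pro_forma) in enumerate(
            zip(standalone_tbvps, pro_forma_tbvps), start=1):
        if pro_forma >= standalone:
            return year
    return None  # no crossover within the forecast horizon

# Hypothetical forecasts: the deal dilutes TBV per share at closing, then the
# combined bank's higher earnings rebuild it
standalone = [20.00, 21.50, 23.10, 24.80, 26.60]
pro_forma  = [18.50, 20.60, 22.80, 25.10, 27.50]
print(tbv_earnback_years(standalone, pro_forma))   # 4 -> roughly a four-year earn-back
```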

Investors often look favorably upon transactions with earn-back periods of fewer than three years, while deals with earn-back periods exceeding five years often face a chilly reception in the market.  The earn-back period often is the real governor of deal pricing in the marketplace, which investors often like because it overcomes some limitations posed by EPS accretion analyses.

Earnings per Share Accretion

As with the tangible book value per share earn-back analysis, an EPS accretion analysis requires that the buyer forecast its EPS with and without the acquired entity.  EPS accretion simply is the change in EPS resulting from the transaction.  The attraction of this analysis lies in the correlation between EPS and value.  For a buyer trading at 12x earnings, a deal that is $0.10 accretive to EPS should enhance shareholder value by $1.20 per share, holding other factors constant.
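
That arithmetic can be reduced to a few lines (the standalone and pro forma EPS figures below are hypothetical):

```python
def eps_accretion(standalone_eps, pro_forma_eps, buyer_pe_multiple):
    """Accretion per share, accretion %, and implied value uplift at the buyer's P/E."""
    accretion = pro_forma_eps - standalone_eps
    pct = accretion / standalone_eps
    implied_value_uplift = accretion * buyer_pe_multiple
    return accretion, pct, implied_value_uplift

# Hypothetical: buyer earns $2.00 standalone, $2.10 pro forma, trades at 12x
accretion, pct, uplift = eps_accretion(2.00, 2.10, 12)
print(round(accretion, 2), f"{pct:.0%}", round(uplift, 2))
# 0.1 5% 1.2 -> $0.10 of accretion implies about $1.20 of value per share
```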

But how much accretion is appropriate?  Should a deal be 1% accretive to be a “good” deal, or 10% accretive?  It is difficult to answer this question in isolation.  This is especially true for a deal consisting largely of cash, where the buyer is forgoing the use of its capital for shareholder dividends or share repurchases in favor of an M&A transaction.  Recent deal announcements often indicate EPS accretion in the mid to high single digits with fully phased-in expense savings.

Contribution Analysis

A contribution analysis is most useful in transactions involving primarily stock consideration.  It compares the buyer’s and seller’s ownership of the pro forma company with their relative contributions of earnings, loans, deposits, tangible equity, and the like.  In a merger of equals transaction, where the two merger parties are roughly similar in size, this type of analysis is important in setting the final ownership percentages of the two banks.
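
A simple sketch of such a comparison (the balance sheet and earnings figures below are hypothetical) follows:

```python
def contribution_analysis(buyer, seller):
    """
    Compare the seller's share of each combined metric with its proposed
    ownership of the pro forma company. `buyer` and `seller` are dicts of
    metric -> amount (earnings, loans, deposits, tangible equity, etc.).
    """
    return {metric: seller[metric] / (buyer[metric] + seller[metric])
            for metric in buyer}

# Hypothetical merger-of-equals data ($ millions)
buyer  = {"earnings": 60, "loans": 4_200, "deposits": 4_800, "tangible equity": 550}
seller = {"earnings": 55, "loans": 3_900, "deposits": 4_500, "tangible equity": 500}
for metric, share in contribution_analysis(buyer, seller).items():
    print(f"{metric}: seller contributes {share:.1%}")
# Compare these percentages to the seller's proposed pro forma ownership (e.g., 48%)
```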

Conclusion

A valuation of a controlling interest may take many forms; fortunately, the strengths of certain valuation methods described here offset the weaknesses of others (and vice versa).  Value ultimately is a range concept, meaning that there is seldom a single price that separates a deal that makes economic sense from one that does not.  There are good deals, reasonable deals, and dumb deals.  Evaluating a number of valuation indications puts a buyer in the best position to slot a transaction into one of these three categories and to negotiate a deal that accomplishes its objective of enhancing financial performance, controlling risk, and developing new growth opportunities.  It is crucial to remember, though, that deals are tougher to execute in reality than in a spreadsheet.

This concludes our multi-part series examining the analysis and valuation of financial institutions.  While approximately 5,000 banks exist, the industry is not monolithic.  Instead, significant differences exist in financial performance, risk appetite, and growth trajectory.  No valuation is complete without understanding the common issues faced by all banks – such as the interest rate environment or technological trends – but also the entity-specific factors bearing on financial performance, risk, and growth that lead to the differentiation in value observed in both the public and M&A markets.

2020 Fair Value Update and Outlook

A new year brings new opportunities and challenges in the world of fair value accounting. The Wall Street Journal’s recent coverage of the potential changes coming to goodwill impairment testing and the increased scrutiny around private equity portfolio company valuations signals that fair value issues continue to be top of mind for investors, companies, and regulators. Here are four key areas worth watching in 2020.

Goodwill Impairment Testing

The FASB convened a roundtable in late 2019 to hear comments from registrants, investors, and the practitioner community about whether to continue the current system of annual goodwill impairment tests or shift to an amortization model over a set period of time. The responses have been mixed thus far, with some advocating instead for a trigger-based approach, perhaps over an initial period of time following an acquisition. The FASB has indicated that it will continue to discuss the comments during 2020 and no timeline for any changes has been set.

Read our latest thoughts on technical issues surrounding goodwill impairment testing.

Portfolio Valuation

The AICPA issued final guidance in 2019 for the valuation of portfolio company investments held by venture capital and private equity funds and other investment companies. The new accounting and valuation guidance lays out best practices for preparers, independent auditors, and valuation specialists, and we anticipate that firms and their stakeholders will increasingly expect that their fair value measurements will be done in compliance with the guide.

Read more about some of the new concepts.


Business Combinations

Another robust year for M&A transactions in 2019 meant an increased need for purchase price allocations and contingent consideration valuations. What may have been overlooked is that The Appraisal Foundation has now issued final guidance on the valuation of contingent consideration (earn-outs). One message from the new guidance: scenario-based methods are now being discouraged in favor of more complex, alternative approaches.

Find out more in our recent whitepaper.

Equity-Based Compensation

The increased scrutiny on PE/VC portfolio company investments inevitably spills over into the realm of valuing private company shares for equity-based compensation purposes. Indeed, the AICPA is in the process of drafting an update to its 2013 accounting and valuation guide on the topic. Another trend we’ve noticed is the increasing prevalence of equity grants with market condition vesting (such as performance of the issuer’s stock relative to a benchmark index) and issuances of incentive units / profit interests. These frequently require specialized fair value measurements.

Read our latest whitepaper on equity-based compensation.


Mercer Capital provides a full range of fair value measurement services and opinions that satisfy the scrutiny of auditors, the SEC, and other regulatory bodies. We have broad experience with fair value issues related to public and private companies, financial institutions, private equity firms, start-ups, and other closely held businesses. We also offer corporate finance consulting, financial due diligence, and quality of earnings analyses. National audit firms regularly refer financial reporting valuation assignments to Mercer Capital.

Jones v. Commissioner

Estate of Aaron U. Jones v. Commissioner, T.C. Memo 2019-101
(August 19, 2019)

EXECUTIVE SUMMARY

In May 2009, Aaron U. Jones made gifts to his three daughters, as well as to trusts for their benefit, of interests (voting and non-voting) in two family-owned companies: Seneca Jones Timber Co. (SJTC), a limited partnership, and Seneca Sawmill Co. (SSC), an S corporation. These gifts were reported on his gift tax return with a total value of approximately $21 million. The IRS asserted a gift tax deficiency of approximately $45 million based on a valuation of approximately $120 million. The Tax Court ruled that the value was approximately $24 million, agreeing with the taxpayer’s appraiser.

In this case, the Tax Court again concluded that “tax-affecting” earnings of an S corporation was appropriate in determining value under the income method (see also Mercer Capital’s review of the Kress decision). However, there are several other issues of interest in this case which we discuss further in this article.

BACKGROUND

SSC was established in 1954 in Oregon as a lumber manufacturer.  SSC operated two sawmills – a dimension mill and a stud mill – delivering high-quality, technologically advanced products that allowed SSC to command a higher price for its products than its competitors.  Early in its history, SSC acquired most of its timber from Federal timberlands.  As environmental regulations increased, SSC’s access to Federal timberlands came under threat.  Mr. Jones began purchasing timberland in the late 1980s and early 1990s when he became convinced that SSC could no longer rely on timber from Federal lands.

SJTC was formed as an Oregon limited partnership in 1992 by the contribution of the timberlands purchased by Mr. Jones.  SJTC’s timberlands were intended to serve as SSC’s inventory.  Further, SSC and SJTC maintained similar ownership groups, with SSC serving as the 10% general partner of SJTC.  As of the date of valuation, SJTC held approximately 1.45 billion board feet of timber over 165,000 acres in western Oregon, most of which was acquired in those initial purchases between 1989 and 1992.  In 2008, approximately 89% of SJTC’s harvested logs were sold directly or indirectly to SSC, and SJTC charged SSC the highest price that SSC paid for logs on the open market.

GIFT TAX VALUATION

In May 2009, Mr. Jones formed seven family trusts and made gifts to those trusts of SSC voting and nonvoting stock. He also made gifts to his three daughters of SJTC limited partner interests. Mr. Jones filed a timely gift tax return reporting values based upon appraisals prepared by Columbia Financial Advisors, as shown in Figure 1 (Petitioner’s Value). The IRS notice of deficiency asserted much higher values.

A petition was filed in the Tax Court by Mr. Jones in November 2013. Mr. Jones died in September 2014 and was replaced in the Tax Court proceeding by his estate and personal representatives. His estate then engaged another appraiser, Robert Reilly of Willamette Management Associates. Mr. Reilly was noted by the Court to have “performed approximately 100 business valuations of sawmills and timber product companies.”

The original appraiser for the IRS was not noted in the case decision. At trial, the IRS’ valuation expert was Phillip Schwab who, per the Court, had “performed several privately held business valuations.” Mr. Schwab was also noted as having “previously reviewed and completed several business valuations, including several sawmills.”

Their conclusions are presented in Figure 2.

SUMMARY OF THE COURT’S DECISION

Ultimately, the Court sided with Mr. Reilly’s conclusions of value for SSC and SJTC, along with his reported discount for lack of marketability (DLOM).  The only adjustment the Court made to Mr. Reilly’s DLOM was to correct a typo: the appendix to Mr. Reilly’s report referred to a 30% DLOM when, in actuality, he had applied a 35% DLOM.  A summary of the Court’s conclusions is shown in Figure 3.

Item 1:  SJTC’s Valuation Treatment as an Asset Holding Company or an Operating Company

The most critical issue underlying the large difference in the two experts’ valuation conclusions for SJTC was the choice of valuation approach.  The Court noted that “when valuing an operating company that sells products or services to the public, the company’s income receives the most weight.”  Conversely, the Court noted that “when valuing a holding or investment company, which receives most of its income from holding debt securities, or other property, the value of the company’s assets will receive the most weight.”

A central question in this matter: is SJTC an asset holding company or an operating company? Petitioner’s experts concluded that SJTC was an operating company and relied on an income approach utilizing projections from management. Conversely, one of respondent’s experts concluded that SJTC is a natural resource holding company and relied on the asset approach, utilizing real estate appraisals of the underlying timberlands.

One of the critical factors the Court relied upon in determining the nature of SJTC’s operations centered on the Company’s operating philosophy.  SJTC relied on a practice called “sustained yield harvesting,” under which trees were not harvested until they were 50 to 55 years old.  As such, SJTC limited its harvest to the growth of its tree farms, even if selling the land or harvesting all of the trees would have been more profitable in the short term.  As discussed earlier, Mr. Jones began purchasing the timberlands and formed SJTC to supply timber to SSC for its long-term operations.

The other factor the Court considered when determining how to treat SJTC was the nature of the limited partner units in question.  Specifically, the subject blocks of limited partner units could not force the sale or liquidation of the underlying timberlands.  Recall that SSC held the 10% general partner (controlling) interest in SJTC, and its focus remained on SSC’s continued operations as a sawmill company dependent on SJTC to supply the majority of its timber.

Based on these factors, the Court concluded that SSC and SJTC “were so closely aligned and interdependent” that SJTC had to be valued based on its ongoing relationship with SSC, and thus, an income-based approach is more appropriate to value SJTC than a net asset value method.  With this distinction, SJTC was more comparable to an operating company and less comparable to a traditional Timber Investment Management Organization (TIMO), Real Estate Investment Trust (REIT), or other holding or investment company.   

Item 2:  Reliance on Revised Management Projections in the Valuation of SJTC and the Impact of Economic Conditions

Both of Petitioner’s experts relied on management projections in the underlying assumptions of their discounted cash flow (DCF) analyses to value SJTC.  The original appraisal utilized management projections that were included in the prior annual report.  For trial, Mr. Reilly utilized revised projections from April 2009 in his DCF analysis.

Respondent challenged the use of the revised projections, despite the fact that its own second expert, Mr. Schwab, also used the revised projections in his guideline publicly traded company method.  He chose to average the revised projections with those from the most recent annual report.

The Court specifically noted the economic conditions at the date of valuation, highlighting the volatility during the recession years.  As such, the Court determined the revised projections were the most current as of the date of valuation and included management’s opinion on the climate of their market and operations.  The impact of the current economic conditions is also referenced by the Court in another key takeaway that we will discuss later.

Item 3: Tax-Affecting Earnings in the Valuations of SJTC

Mr. Reilly computed after-tax earnings based on a 38% combined proxy for federal and state taxes. He further computed the benefit of the dividend tax avoided by the partners of SJTC by estimating a 22% premium based on a study of S corporation acquisitions. Respondent argued that because SJTC is a partnership, the partners would not be liable for tax at the entity level and there is no evidence that SJTC would become a C corporation. Therefore, respondent argued that the entity-level tax rate should be zero.
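
The mechanics can be illustrated with a simplified capitalization of earnings. This sketch does not replicate Mr. Reilly’s actual model; the pretax earnings and capitalization rate below are hypothetical, while the 38% rate and 22% premium mirror the figures described above:

```python
def tax_affected_value(pretax_earnings, tax_rate, cap_rate, pass_through_premium):
    """
    Illustrative tax-affecting: capitalize earnings net of a proxy entity-level
    tax, then add a premium for the dividend tax avoided by pass-through owners.
    """
    after_tax_earnings = pretax_earnings * (1 - tax_rate)
    c_corp_equivalent_value = after_tax_earnings / cap_rate
    return c_corp_equivalent_value * (1 + pass_through_premium)

# Hypothetical: $10M pretax earnings, 38% proxy tax rate, 12% capitalization
# rate, 22% pass-through premium
print(round(tax_affected_value(10_000_000, 0.38, 0.12, 0.22)))   # 63033333
```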

The Court concluded that Mr. Reilly’s tax-affecting “may not be exact, but is more complete and convincing than respondent’s zero tax rate.”  The Court also noted that the contention from respondent on this tax-affecting issue seems to be more of a “fight between lawyers” as the criticism appeared more in trial briefs than in expert reports. In fact, respondent’s expert, Mr. Schwab, argued that tax-affecting was improper because SJTC is a natural resources holding company and therefore its “rate of return is closer to the property rates of return” rather than challenging the lack of an actual entity level tax.

Item 4:  Market Approach for SJTC

The Court and respondent’s expert agreed with Mr. Reilly’s market approach for the valuation of SJTC.  With little to no disagreement on this point, the key takeaway is Mr. Reilly’s analysis itself.  The Court detailed the analysis, noting that Mr. Reilly selected six guideline companies.  The Court also cited the analysis and reasoning behind Mr. Reilly’s selection of pricing multiples slightly above the minimum indications of the guideline companies. Specifically, Mr. Reilly noted that SJTC’s revenue and profitability for the most recent twelve months before the valuation date were below those of the guideline companies.  Thus, he accounted for these differences in financial fundamentals in his selection of the guideline pricing multiples.

Item 5:  Intercompany Debt between SJTC and SSC

Respondent argued that Mr. Reilly erred by excluding the receivable held by SSC and the corresponding liability of SJTC. Further, respondent contended that Mr. Reilly’s treatment of SSC’s receivable from SJTC as an operating asset, rather than a non-operating asset, reduced the value of SSC under his income approach since a non-operating asset was not added to that value.

On this issue, the Court wove in earlier themes regarding the symbiotic relationship of the two companies, as well as the economic conditions prevailing on the date of valuation, in reaching its conclusion.  The Court agreed with Mr. Reilly that the intercompany debt could be removed as a clearing account based on the idea that both companies operate as “simply two pockets of the same pair of pants.”  The Court rejected respondent’s theories that this treatment of intercompany debt was intended only to avoid a negative asset valuation of SJTC and to reduce the value of SSC by not including the receivable as a non-operating asset.

The Court referenced the relationship of the two companies and how their joint credit agreements were secured by SJTC’s timberlands. The Court recognized that SSC could not have obtained separate third-party loans without SJTC’s underlying timberlands as collateral. A further detail of the two companies’ relationship illustrates the point. Economic conditions in 2009 included the subprime mortgage lending crisis and its particular impact on the housing market. Around this time, SSC was anticipating a shift in the market from green lumber to dry lumber. Dry lumber production required SSC to build dry kilns and a boiler as part of a larger renewable energy plant project. Because of economic conditions, SSC was not able to obtain construction loans to finance the renewable energy plant on its own or through another planned related entity. Instead, SSC was forced to borrow against the timberlands of SJTC.

Ultimately, the Court viewed the two companies (SSC and SJTC) as a single business enterprise and concluded that Mr. Reilly’s treatment of the intercompany debt captured their relationship.

Item 6:  Valuation of SSC – Treatment of General Partner Interest in SJTC

Respondent’s criticisms of Mr. Reilly consisted of three items:

  1. The treatment of intercompany debt between the two companies
  2. Tax-affecting earnings
  3. The treatment of SSC’s general partner interest

The Court handled the intercompany debt and tax-affecting treatment consistently with SJTC’s valuation.

Mr. Reilly captured the value of SSC’s general partner interest in SJTC by projecting a portion of the expected partnership income in his projections. Specifically, Mr. Reilly projected $350,000 annually for SSC’s general partner interest based on an analysis of the 5-year and 10-year historical distributions from SJTC.

Respondent claimed that this approach undervalued SSC’s general partner interest by not considering its control over SJTC and treating it as a non-operating asset to be valued by the net asset value method.

The Court concluded that SSC’s general partner interest in SJTC is an operating asset, again citing the single business enterprise relationship between the two companies.  Further, the Court found that the value of SSC’s general partner interest is best estimated by the distributions it would expect to receive from SJTC.

Item 7:  Buy-Sell Agreement Items

Although not directly cited in any of the Court’s factors discussed so far, the decision did highlight certain elements of SSC’s and SJTC’s buy-sell agreements that are worth noting.  Both buy-sell agreements contained language that prohibited a sale of the entity, or transfers of units/shares, that would jeopardize the Companies’ current tax status as an S corporation (SSC) and a limited partnership (SJTC), respectively.  Both agreements called for discounts for lack of control, lack of marketability, and lack of voting rights of an assignee (where applicable) to be considered. Finally, both agreements stated that the valuations of the entities should consider the anticipated cash distributions allocable to the units/shares.

CONCLUSIONS

While the Court’s decision to allow the tax-affecting of earnings (like in the Kress case) in the valuations of SSC and SJTC will dominate the headlines, there are additional takeaways from the case that impact the valuations.  Of note, the disparity in experience of the appraisers involved, consideration of the current economic conditions, and the purpose and nature of the business relationship of the two companies seemed to influence the Court’s conclusions.  Finally, the distinction and eventual valuation treatment of SJTC as an operating company rather than a holding company was of particular interest to us.

Do Win/Loss Records Affect Major League Baseball Revenues and Attendance?

Many people believe that the win/loss ratio doesn’t have much effect on revenues and attendance.  They believe the local team has loyal fans who will attend games regardless of the team’s performance.  We investigate that assumption in this article, focusing on Major League Baseball (MLB) and sampling a top-tier, middle-tier, and bottom-tier team.

We analyze average season attendance of the league over the last five years and then track the three-team sample’s attendance and on-field performance.

We have selected three teams to review their attendance vs. winning percentage, along with their playoff and World Series performance.  Our sample consists of the Los Angeles Dodgers, the Texas Rangers and the Miami Marlins.

As a reference point, average season attendance in MLB peaked in recent years at 2.5 million in 2007 for the American League and 2.8 million for the National League.  The averages dropped in the subsequent years and then held steady at around 2.3 million for the A.L. and 2.5 million for the N.L. over the next ten years.  League-wide average attendance declined by roughly 140,000 to 2,161,376 in 2018, however, and fell further to 2,039,521 in 2019.

Los Angeles Dodgers

The Dodgers attendance in 2007 was 3.9 million and stayed above 3.4 million for three years.  This figure dropped to 2.9 million in 2011 yet returned to 3.7 million by 2013.  Recently, season attendance has slowly climbed to approximately 4 million in 2019, marking an all-time team high.

This growth was greatly influenced by the Dodgers’ World Series appearances in 2017 and 2018, which helped push 2019 attendance to a record high.  (See Table 1 for details.)

Texas Rangers

The Texas Rangers have experienced a different attendance history.  They peaked in 2012 at 3.5 million after playing in the World Series in 2011 and the playoffs in 2012.  The team didn’t make the playoffs in 2013 and 2014 and attendance dropped to 3.2 million and 2.7 million, respectively.  The win/loss record dropped significantly from about 59% in 2011 to 41% in 2014.

Attendance followed the same trend, dropping by 450,000 to 2.7 million in 2014.  Even when the team made the playoffs in 2015, attendance fell to 2.5 million as a result of the poor record in 2014. The team’s 2015 win/loss ratio was near 59% and they made the playoffs, but not the World Series. In the following year, attendance increased to 2.7 million.  The win/loss ratio dropped below 50% from 2017 to 2019 and the team missed the playoffs each year.  As a result, attendance dropped steadily to 2.1 million in 2019, a decrease of over 1.3 million people, or 38%, from the 2012 peak.  (See Table 2 for details.)

Miami Marlins

The Miami Marlins clearly represent the bottom tier of MLB in many categories.  They built a brand-new, state-of-the-art ballpark in 2012, and attendance averaged about 1.7 million from 2014 to 2017.  In the fall of 2017, the Derek Jeter group bought the team, and the new owners quickly traded notable high-priced players to other teams, including the NY Yankees, in order to reduce losses.  The new ownership group hoped to stabilize attendance near the 1.7 million mark, but attendance instead dropped to 811,000 in both 2018 and 2019 – 367,000 less than the next-worst attendance in MLB (Tampa Bay) and about 500,000 less than the third-worst team, the Baltimore Orioles.  (See Table 3 for details.)

Conclusion

Without attempting a statistical analysis, what does the data mean?  Yes, the quality of the players counts, especially when it shows up in the win/loss record; winning percentage also drives a team’s ability to reach the playoffs and, ultimately, the World Series.  It is clear from our experience and from the three-team sample that win/loss ratios have a major effect on MLB home stadium attendance.

In Game Leaders Esports Summit Insights

The In Game Leaders (IGL) Winter Summit took place on January 3, 2020 at Esports Stadium Arlington. IGL originated as a capstone project for summer interns at Esports Stadium Arlington. The primary purpose of the Summit was to “[provide] the opportunity to learn from industry professionals and collegiate leaders as they speak about the various career paths in esports. IGL strives to teach parents and students how to develop a sustainable career in the rapidly-growing esports field.”

This second event (the first Summit was held on August 19, 2019) consisted of several panels that addressed collegiate esports, professional esports, as well as marketing and event management. The event drew approximately 150 people. We recap a few key panels below.

Collegiate Esports Panel

Alex Rocha (UT Arlington), Eric Aaberg (UT Dallas), Austin Espinoza (UT), Dylan Wray (UNT)

Traditional sports and esports at the collegiate level share some similarities. For example, the teams not only practice and compete, but also work out and train in order to play to their potential. The players are one piece of the team, as there are also managers, coaches, and streamers who add value to the organization. Several universities across the country offer partial and full scholarships. As of December 2019, there were over 125 schools around the United States with esports programs.

Collegiate esports programs generally fall into two categories: club or varsity. Varsity esports programs enjoy the administrative and financial support of their educational institution while club esports organizations are student-run and have limited financial backing from their educational institution. The primary goal of most collegiate esports clubs is to become a varsity sport at their school.

A distinction between esports and traditional sports stems from the recruiting platform. In collegiate esports, recruiting currently consists of attracting students who are already on campus as opposed to recruiting from high schools. Although the recruiting landscape is less structured than in traditional sports, esports teams have had success in gaining interest from active students.

The main games played in collegiate esports include: Overwatch, League of Legends, Rocket League, Hearthstone, Super Smash Brothers, and Call of Duty. The teams encourage the gaming community on campus to join the organization, whether it be as an analyst, coach, or player. The teams emphasize that there is a position for everyone and that a student does not have to be the best at every game to bring value and compete.

Professional Esports Organizations

Kyle Bautista (COO, Complexity), Hector Rodriguez (CEO, NRG/Huntsmen), Mike Rufail (Founder/CEO, Envy)

The esports industry is turning heads and opening eyes as it draws comparisons to the traditional sports leagues. The panelists, often recognized as leaders in the industry, emphasized their passion for the rapidly growing esports space. They highlighted that because esports is relatively new, and because games have shorter shelf lives, it is harder to gain significant experience.

esports is categorized as a sector of the media and entertainment industry because it is able to capitalize on the vast audience that is watching or streaming. The panelists each recalled their favorite esports events, and all described those events as having the same euphoric feel as a traditional sports game. The crowd erupts for a clutch play in a Call of Duty World Championship just as it does for a huge play in the Super Bowl. Each panelist has been a part of the evolution of the space, and all see it heading in the right direction with time.

The franchise model that has developed in traditional sports has made its way to the esports platform. The industry is attempting to follow the blueprint of the major professional sports leagues. The industry leaders agreed that the structure is beneficial, but also emphasized that time, competition, and the willingness to learn and collaborate will take esports to the next level.

Keynote Speakers

Simon Bennett and Markel Lee (AOE Creative)

As the esports industry experiences rapid growth, it is important for teams and companies to consider their brand identity. Bennett and Lee illustrated the struggles that companies within the industry often face when attempting to establish a brand or marketing initiative. Rather than simply making a brand logo that looks “cool,” Bennett and Lee challenge players in the industry to create a brand that captures the message they are attempting to convey.

Simply put, “Do not build a brand, build a legacy.” – Markel Lee

Bennett and Lee were excited about the power of community marketing. Community marketing enables a vast audience to connect and establish a relationship with the message or objective being conveyed. The example given was the Marshmello Fortnite concert – the most attended concert in history. The concert could be seen by the virtual community playing Fortnite at the time of the event, a marketing effort that captured over 10 million players at once.

Marketing and Event Management

Kyle Stephenson (Gearbox), John Davidson (PRG), Justin Varghese (Dreamhack)

Stephenson, Davidson, and Varghese explained the challenges and satisfactions of creating events in the esports space. With sponsorships providing the biggest source of revenue in the industry, events are extremely important to execute well. Creating a debut event or launching a new destination is a complicated process that takes exceptional diligence both before and after it occurs. In order to measure the success of the event, it is important to first set expectations. The three panelists agreed that expectations must be accurate going into an event.

Attracting a digitally native audience can be difficult; it requires creating a “fear of missing out” (FOMO) in order to capture as many people as possible. It is also important to attract those who were unable to attend the first event to the second one.

A specific industry challenge is that most esports do not have a defined end. With no set run-time for most games, event managers must be prepared for every scenario. Challenges can also arise during an event, such as power outages, which delay the audience’s experience. Challenges aside, the key to executing a successful event is creating a fair and pure playing environment and delivering an enjoyable, exceptional experience to as many fans as possible.

Conclusion

The second IGL Summit built upon the success of the first, with increased attendance and intriguing panel discussions. The overarching theme communicated by the panelists was that, as an industry, esports is still developing. Due to the relative immaturity of the industry, best practices are not concrete and player movement mechanisms are nebulous. However, there was a general sense of optimism for the industry. esports has made great strides over the past few years but still has plenty of room to grow. According to an article by Syracuse University, the projected number of esports viewers in the U.S. will reach 84 million in 2021, second only to the NFL. There has also been an increase in transactions within the industry, with one esports organization acquiring another for approximately $100 million.

Quality Of Earnings Study: The “Combine” to Help Harvest Top FinTech Acquisition Targets

As we find ourselves at the end of the decade, many pundits are considering what sector will be most heavily influenced by the disruptive impact of technology in the 2020s. Financial services and the potential impact of FinTech is often top of mind in those discussions. As I consider the potential impact of FinTech in the coming decade, I am reminded of the Mark Twain quote that “History doesn’t repeat itself but it often rhymes.”

A historical example of technological progress that comes to mind for me is the combine, a machine designed to efficiently harvest a variety of grain crops. The combine derived its name from its ability to combine a number of steps in the harvesting process. Combines were among the most economically important innovations of their era: they saved a tremendous amount of time and significantly reduced the share of the population engaged in agriculture while still allowing a growing population to be fed adequately. For perspective, the impact on American society was tremendous – roughly half of the U.S. population was involved in agriculture in the 1850s, and today that number stands at less than 1%.

As I ponder the parallels between the combine’s historical impact and FinTech’s potential, I consider that our now service-based economy is dependent upon financial services, and FinTech offers the potential to radically change the landscape. From my perspective, the coming “combine” for financial services will come not from one source or solution, but from a wide range of FinTech companies and traditional financial institutions that are enhancing efficiency and lowering costs across a wide range of financial services (payments, lending, deposit gathering, wealth management, and insurance). While some traditional incumbents may view this as a negative, it may be a saving grace as we start the decade with the lingering effects of a prolonged, historically low interest rate environment, with many traditional players still laden with margin-dependent revenue streams and higher-cost, inefficient legacy systems. Similar to the farmers who adopted higher-tech planting and harvesting methods through innovations like the combine, traditional incumbents like bankers, RIAs, and insurance companies will have to determine how to selectively build, partner, or acquire FinTech talent and companies to enhance their profitability and efficiency. Private equity and venture capital investors will also continue to be attracted to the FinTech sector given its potential.

As the years in the 2020s march on, FinTech acquirers and traditional incumbents face a daunting task in evaluating the FinTech sector. Reports vary but generally indicate that over 10,000 FinTechs have sprouted up across the globe in the last decade, and separating the highly valued, high potential business models (i.e., the wheat) from the lower valued, low potential ones (i.e., the chaff) will be challenging. Factor in the complicated regulatory/compliance overlay, and the task of analyzing the FinTech sector and the companies within it becomes even more formidable for investors, acquirers, and traditional incumbents.

As a solution to this potential problem, the efficient operations and historical lessons learned in the agricultural sector from the combine may again provide insights for buyers of FinTech companies to learn from. For example, the major professional sports leagues in the U.S. all have events called combines where they put prospective players through drills and tests to more accurately assess their potential. In these situations, the team is ultimately the buyer or investor and the player is the seller. Pro scouts are most interested in trying to project how that player might perform in the future for their team. While a player may have strong statistics in college, this may not translate to their future performance at the next level so it’s important to dig deeper and analyze more thoroughly. For the casual fan and the players themselves, it can be frustrating to see a productive college player go undrafted while less productive players go highly drafted because of their stronger performance at the combine.

While not quite as highly covered by the fans and media, a similar due diligence and analysis process should take place when acquirers examine a FinTech acquisition target. This due diligence process can be particularly important in a sector like FinTech where the historical financial statements may provide little insight into future growth and earnings potential for the underlying company. One way that acquirers are able to better assess potential targets is through a process similar to a sports combine called a quality of earnings study (QoE). In this article, we give a general overview of what a QoE is as well as some important factors to consider.

What is a Quality of Earnings Study?

A QoE study typically focuses on the economic earning power of the target. A QoE combines a number of due diligence processes and findings into a single document that can be vitally helpful to a potential acquirer in assessing the key elements of a target's valuation: core earning power, growth potential, and risk factors. Ongoing earning power is a key component of valuation as it represents an estimate of sustainable earnings and a base from which long-term growth can be expected. This estimate of earning power typically involves assessing the quality of the company's historical and projected future earnings. In addition to assessing the quality of the earnings, buyers should also consider the relative riskiness of those earnings as well as potential pro-forma synergies that the target may bring in an acquisition.

Analysis performed in a QoE study can include the following:

  1. Profitability Procedures. Investigating historical performance for its impact on prospective cash flows. EBITDA analysis can include certain types of adjustments such as: (1) Management compensation add-back; (2) Non-recurring items; (3) Pro-forma adjustments/synergies (see the sketch following this list)
  2. Customer Analysis. Investigating revenue relationships and agreements to understand the impact on prospective cash flows. Procedures include: (1) Identifying significant customer relationships; (2) Gross margin analysis; and (3) Lifing analysis
  3. Business and Pricing Analysis. Investigating the target entity's positioning in the market and understanding its competitive advantages from a product and operations perspective. This involves: (1) Interviews with key members of management; (2) Financial analysis and benchmarking; (3) Industry analysis; (4) Fair market value assessments; and (5) Structuring
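
Picking up on the profitability procedures above, the short Python sketch below illustrates how reported EBITDA might be adjusted for the kinds of items a QoE study typically examines; the function name, adjustment categories, and dollar amounts are hypothetical and not drawn from any particular engagement.

    # Illustrative adjustment of reported EBITDA for items a QoE study might identify
    # (hypothetical function and amounts).
    def adjusted_ebitda(reported_ebitda, owner_comp_paid, market_comp,
                        non_recurring_items, pro_forma_synergies):
        """Estimate sustainable EBITDA after common QoE adjustments."""
        comp_addback = owner_comp_paid - market_comp        # excess owner compensation
        return (reported_ebitda
                + comp_addback
                + sum(non_recurring_items)                  # e.g., a one-time legal settlement
                + pro_forma_synergies)                      # buyer-specific cost savings

    # Hypothetical target: $2.0M reported EBITDA, $150K excess compensation,
    # a $100K one-time expense, and $50K of expected synergies.
    print(adjusted_ebitda(2_000_000, 400_000, 250_000, [100_000], 50_000))  # 2300000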

These areas are broad and may include a wide array of sub-areas to investigate as part of the QoE study. Sub-areas can include:

  • Workforce / employee analysis
  • A/R and A/P analysis
  • Intangible asset analysis
  • A/R aging and inventory analysis
  • Location analysis
  • Billing and collection policies
  • Segment analysis
  • Proof of cash and revenue analysis
  • Margin and expense analysis
  • Capital structure analysis
  • Working capital analysis

For high growth technology companies where the analysis and valuation is highly dependent upon forecast projections, it may also be necessary to analyze other specific areas such as:

  • The unit economics of the target. For example, a buyer may want a more detailed estimate or analysis of some of the target's key performance indicators, such as cost of acquiring customers (CAC), lifetime value of new customers (LTV), churn rates, magic number, and annual recurring revenue/profit (a simple sketch follows this list).
  • A commercial analysis that examines the competitive environment, go-to-market strategy, and existing customers' perception of the company and its products.
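
As a minimal illustration of the unit economics mentioned above, the Python sketch below computes a hypothetical LTV, LTV/CAC ratio, and CAC payback period using common simplifications (customer lifetime approximated as 1/churn); all inputs are assumptions, not data from any actual target.

    # Illustrative unit-economics metrics for a hypothetical subscription FinTech.
    def unit_economics(arpu_monthly, gross_margin, monthly_churn, cac):
        """Return (LTV, LTV/CAC, CAC payback in months) under simple assumptions."""
        avg_lifetime_months = 1 / monthly_churn              # e.g., 2% churn -> ~50 months
        ltv = arpu_monthly * gross_margin * avg_lifetime_months
        payback_months = cac / (arpu_monthly * gross_margin)
        return ltv, ltv / cac, payback_months

    # Hypothetical: $40 monthly ARPU, 70% gross margin, 2% monthly churn, $600 CAC.
    ltv, ratio, payback = unit_economics(40, 0.70, 0.02, 600)
    print(round(ltv), round(ratio, 1), round(payback, 1))    # 1400 2.3 21.4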

This article discusses a number of considerations that buyers may want to assess when performing due diligence on a potential FinTech target. While the ultimate goal is to derive a sound analysis of the target’s earning power and potential, there can be a number of different avenues to focus on, and the QoE study should be customized and tailored to the buyer’s specific concerns as well as the target’s unique situations. It is also paramount for the buyer’s team to keep the due diligence process focused, efficient, and pertinent to their concerns. For sellers, a primary benefit of a QoE can be to help them illustrate their future potential and garner more interest from potential acquirers.

Mercer Capital’s focused approach to traditional quality of earnings analysis generates insights that matter to potential buyers and sellers. Leveraging our valuation and advisory experience, our quality of earnings analyses identify and focus on the cash flow, growth, and risk factors that impact value. Collaborating with clients, our senior staff identifies the most important areas for analysis, allowing us to provide cost-effective support and deliver qualified, objective, and supportable findings. Our goal is to understand the drivers of historical performance, unit economics of the target, and the key risk and growth factors supporting future expectations. Our methods and experience provide our clients with a fresh and independent perspective on the quality, stability, and predictability of future cash flows.

Our methodologies and procedures are standard practices executed by some of the most experienced analysts in the FinTech industry. Our desire is to provide clients with timely and actionable information to assist in capital budgeting decisions. Combined with our industry expertise, risk assessment, and balanced return focus, our due diligence and deal advisory services are uniquely positioned to provide focused and valued information on potential targets.


Originally published in Mercer Capital’s Value Focus: FinTech Industry Newsletter Year-End 2019.

2020 Outlook: Good Fundamentals, Moderate Valuations but Limited EPS Growth

Bank fundamentals, which are discussed in more detail below, did not change a lot between 2018 and 2019; however, bank stock prices and the broader market posted strong gains as shown in Table 1 following a short but intense bear market that bottomed on Christmas Eve 2018. Our expectation is that 2020 will not see much change in fundamentals either, while bank stocks will require multiples to expand to produce meaningful gains given our outlook for flattish earnings.

Fed Drives the Market Rebound

The primary culprit behind the 4Q18 plunge and subsequent 2019 rebound in equity prices was the Fed, which has a propensity to hike until something breaks according to a long-standing market saw. A year ago the Fed had implemented its ninth hike in the short-term policy rates it controls, despite the vocal protests of the President and, more importantly, the credit markets as reflected in widening credit spreads and falling yields on Treasury bonds and forward LIBOR rates.

One can debate how much weight the Fed places on equity markets, but it has always appeared to us that it pays close attention to credit market conditions. When the high yield bond and leveraged loan markets shut down in December 2018, the Fed was forced to pivot in January and back away from rate hikes after forecasting several for 2019 just a few months earlier. Eventually, the Fed was forced to reduce short rates three times and resume expansion of its balance sheet in the fourth quarter after halting the reduction ("quantitative tightening") at mid-year.

Markets lead fundamentals. Among industry groups bank stocks are “early cyclicals,” meaning they turn down before the broader economy does and tend to turn up before other sectors when recessions bottom. One take from the price action in banks is that the economy in 2020 will be good enough that credit costs will not rise dramatically. Otherwise, banks would not have staged as strong a rebound as occurred.

Likewise, somewhat tighter spreads on B- and BB-rated high yield bonds relative to U.S. Treasuries (option adjusted spread, “OAS”) since the Fed eased is another data point that credit in 2020 will not see material weakening. The stable-to-tighter spreads in the high yield market today can be contrasted with 2007 when OAS began to widen sharply even after the Fed began to cut rates and the U.S. Treasury curve steepened as measured by the spread between the yield on the two-year and 10-year notes.

Bank Fundamentals

Bank fundamentals are in good shape even though industry net income for the first three quarters of 2019 increased nominally to $181 billion from $178 billion in the comparable period in 2018. On a quarterly basis, third quarter earnings of $57 billion were below the prior ($63 billion) and year-ago ($62 billion) quarters. Not surprisingly, earnings pressure emerged during the year as what had been expanding NIMs during 2017 and 2018 began to contract due to the emergence of a flat-to-inverted yield curve, a reduction in 30/90-day LIBOR which serves as a base rate for many loans, and continuation of a highly competitive market for deposits. Also, loan growth slowed in 2019—especially for larger institutions.

As shown in Table 2, core metrics such as asset quality and capital are in good shape, while profitability remains high. Our outlook for 2020 is for profitability to ease slightly due to incrementally higher credit costs and a lower full-year NIM, although stabilization seems likely during 2H20. Nonetheless, ROCE in the vicinity of 10-11% and ROTCE of 13-14% for large community and regional banks seems a reasonable expectation.

EPS growth will be lacking, however. Wall Street consensus EPS estimates project essentially no change for large community and regional banks, while super regional banks are projected to be slightly higher at 3%. Money center banks (BAC, C, GS, JPM, MS, and WFC) reflect about 6% EPS growth, which seems high to us even though the largest banks tend to be more active in repurchasing shares relative to smaller institutions where excess capital is allocated to acquisitions, too.

The Fed—Presumably on Hold

In the December 2018 issue of Bank Watch we opined it was hard to envision the Fed continuing to raise short-term rates even though the Fed forecasted further hikes. We further cited the potential for rate cuts. Our reason for saying so was derived from the market rather than economists because intermediate- and long-term rates had decidedly broken an uptrend and were heading lower.

As the calendar turns to 2020, the Fed has indicated no changes are likely for the time being. The market reflects a modest probability that one more cut will be forthcoming, but to do so in an election year probably would require long rates to fall enough to meaningfully invert the Treasury curve unlike the nominal inversion which occurred in mid-2019.

As it relates to bank fundamentals, the impact on NIMs will depend upon individual bank balance sheet compositions. Broadly, however, a scenario of no rate hikes implies NIMs should stabilize in 2H20 as higher cost CDs and wholesale borrowings roll over at lower rates. Also, if the Fed continues to expand its balance sheet (presently it is doing so by purchasing only T-bills in support of the repo market), then assets may remain well bid. All else equal, stable to rising prices in the capital markets usually are supportive of credit quality within the banking system.

Bank Valuations—Rebound from Year-End 2018 “Bargains”

A synopsis of bank valuations is presented in Table 3 in which current valuations for the market cap indices are compared to year-end 2018 and year-end 2017 as well as multi-year medians based upon daily observations over the past 20 years.

The table illustrates the important concept of reversion to the mean. Valuations were above average as of year-end 2017 due to policy changes that occurred with the November 2016 national elections that culminated with the enactment of corporate tax reform in late 2017. One year later valuations were “cheap” as a result of the then bear market that reflected concerns the Fed would hike the U.S. into a recession.

Despite the rebound in prices and valuation multiples during 2019, bank stocks enter 2020 with moderate valuations, provided the market (and we) have not miscalculated and earnings are not poised to fall sharply. Money center and super-regional banks are trading for median multiples of about 10x and 11x consensus 2020 earnings. Regional and large community banks, which include many acquisitive banks, trade for respective median multiples of 12x and 13x.

An important point is that valuation is not a catalyst to move a stock; rather, valuation provides a margin of safety (or lack thereof) and thereby can provide additional return over time as a catalyst, such as upward (or downward) earnings revisions, causes a multiple to expand or contract. Looking back to last year, one might surmise the rebound in valuations reflects the market's view that the Fed avoided hiking the U.S. into recession.

Bank M&A—2020 Potentially a Great year

M&A activity has been robust with bank and thrift acquisitions since 2014 exceeding 4% of the industry charters at the beginning of each year. It appears once the final tally is made, upwards of 275 institutions will have been acquired in 2019, which would represent almost 5% of the industry. With only a handful of new charters granted since the financial crisis the industry is shrinking fast. As of Sept. 30, there were 5,256 U.S. banks and thrifts, down from about 18,000 in 1985.

While activity was steady at a high level in 2019, the most notable development was market support for four mergers-of-equals ("MOEs") in which the transaction value exceeded $1.0 billion. The largest transaction closed Dec. 9 when BB&T Corp. and SunTrust merged to form Truist Financial Corp. Others announced this year include tie-ups between TCF Financial Corp./Chemical Financial Corp., First Horizon National Corp./IBERIABANK Corp., and Texas Capital Bancshares Inc./Independent Bank Group Inc. Although not often pursued, we believe MOEs are logical transactions that, if well executed, provide significant benefits to community bank shareholders.

The national average price/tangible book multiple eased to 157% from 173% in 2018, while the median price/earnings (trailing 12 months as reported) declined to 16.8x from 25.4x (~21x adjusted for the impact of corporate tax reform). The reduction was not surprising given low public market valuations that existed at the beginning of 2019 because acquisition multiples track public market multiples with a lag.

We see 2020 shaping up as a potentially great year for bank M&A. The backdrop is an M&A trifecta: buyer and seller earnings will likely be flattish primarily due to sluggish loan growth and lower NIMs; asset quality is stable; and stock prices are higher, meaning buyers can offer better prices (but less value) to would-be sellers. Also, the capital markets remain wide open for banks to issue subordinated debt and preferred equity at very low rates to fund cash consideration not covered by existing excess capital.

Summing it Up

This year appears to be the opposite of late 2018, in that a strong market for bank stocks is predicting a continuation of solid fundamentals and possibly better-than-expected earnings. Nonetheless, an environment in which earnings growth is expected to be modest at best likely will result in limited gains in bank stocks given the rebound in valuations that occurred in 2019.


Originally published in Bank Watch, December 2019.

Lessons from Recent Engagements

In our family law practice, we serve as valuation and financial forensic expert witnesses. There is typically another valuation expert on “the other side.” In several recent engagements, the following topics, posed as questions here, were raised as points of contention. We present them here to help the reader, whether you are a family law attorney or a party to a divorce, understand certain valuation-related issues that may be raised in your matter.

Should Your Expert Witness Be a Valuation or Industry Expert?

The financial and business valuation portion of a litigation is often referred to as a “battle of the experts” because you have at least two valuation experts, one for the plaintiff and one for the defendant. Hopefully your valuation expert has both valuation expertise and industry expertise. While industry expertise is not necessary in every engagement, it can be helpful in understanding the subtleties of the business in question.

Does the Appraisal Discuss Local Economic Conditions and Competition Adequately?

Most businesses are dependent on the climate of the national economy as well as the local economy. For businesses with a national client base, the health of the national economy trumps any local or regional economy. However, many of the businesses we value in divorce engagements are more affected by changes in their local and regional economy. It’s important for a business appraiser to understand the difference and to be able to understand the effects of the local/regional economy on the subject business. There is also a fine balance between understanding and acknowledging the impact of that local economy without overstating it. Often some of the risks of the local economy are already reflected in the historical operating results of the business.

If There Are Governing Corporate Documents, What Do They Say About Value, and Should They Be Relied Upon?

Many of the corporate entities involved in litigation have sophisticated governance documents that include Operating Agreements, Buy-Sell Agreements, and the like. These documents often contain provisions to value the stock or entity through the use of a formula or process. Whether or not these agreements are to be relied upon in whole or in part in a litigated matter is not always clear. In litigated matters, focus will be placed on whether the value concluded from a governance document represents fair market value, fair value, or some other standard of value.

Two common questions that arise concerning these agreements are:

  1. Has an indication of value ever been concluded using the governance document in the history of the business (in other words, has the business been valued using the methodology set out in the document)?
  2. Have there been any transactions, buy-ins, or redemptions utilizing the values concluded in a governance document?

These are important questions to consider when determining the appropriate weight to place on a value indication from a governance document. In divorce matters, the out-spouse is often not bound by the value indicated by the governance document since they were not a signatory to that particular agreement. It is always important to discuss this issue with your attorney.

Have There Been Prior Internal Transactions of Company Stock and at What Price?

Similar to governance documents, internal transactions are a possible valuation data point. A good appraiser will always ask if there have been prior transactions of company stock and, if so, how many have occurred, when did they occur, and at what terms did they occur? There is no magic number, but as with most statistics, more transactions closer to the date of valuation can often be considered as better indicators of value than fewer transactions further from the date of valuation.

An important consideration in internal transactions is the motivation of the buyer and seller. If there have been multiple internal transactions, appraisers have to determine the appropriateness of which transactions to possibly include and which to possibly exclude in their determination of value. Without an understanding of the motivation of the parties and of the specific facts of the transactions, it becomes trickier to include some, but exclude others. The more logical conclusion would be to include all of the transactions or exclude all of the transactions with a stated explanation.

What Do the Owner’s Personal Financial Statements Say and Are They Important?

Most business owners have to submit personal financial statements as part of any guarantee on financing. The personal financial statement includes a listing of all of the owner’s assets and liabilities, typically including some value assigned to the business. In divorce matters, these documents are important as yet another valuation data point.

One view of the value placed on a business in an owner’s personal financial statement is that no formal valuation process was used to determine that number; so, at best, it’s a thumb in the air, blind estimate of value. The opposing view is that the individual submitting the personal financial statement is attesting to the accuracy and reliability of the financial figures contained in the document under penalty of perjury. Further, some would say that the value assigned to the business has merit because the business owner is the most informed person regarding the business, its future growth opportunities, competition, and the impact of economic and industry factors on the business.

For an appraiser, it’s not a good situation to be surprised by the existence of these documents. A good business appraiser will always ask for them. The business value indicated in a personal financial statement should be viewed in light of value indications under other methodologies and sources of information. At a minimum, personal financial statements may require the expert to ask more questions or use other factors, such as the national and local economy, to explain any difference in values over time.

Do You Understand Normalizing Adjustments and Why They Are Important?

Normalizing adjustments are adjustments made for any unusual or non-recurring items that do not reflect normal business operations. During the due diligence interview with management, an appraiser should ask whether the business has non-recurring or discretionary expenses and whether personal expenses of the owner are being paid by the business. Comparing the business to industry profitability data can help the appraiser understand the degree to which the business may be underperforming.

An example of how normalizing adjustments work is helpful. If a business has historically reported 2% EBITDA (earnings before interest, taxes, depreciation, and amortization) and the industry data suggests 5%, the financial expert must analyze why there is a difference between these two data points and determine if there are normalizing adjustments to be applied. Let’s use some numbers to illustrate this point. For a business with revenue of $25 million, historical profitability at 2% would suggest EBITDA of $500,000. At 5%, expected EBITDA would be $1,250,000, or an increase of $750,000. In this case, the financial expert should analyze the financial statements and the business to determine if normalization adjustments are appropriate which, when made, will reflect a more realistic figure of the expected profitability of the business without non-recurring or personal owner expenses.
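
The same arithmetic can be written out in a few lines of Python; the sketch below simply restates the article’s figures and is not a substitute for the line-by-line analysis of the financial statements that the expert would actually perform.

    # Restating the article's figures (illustrative only).
    revenue = 25_000_000
    historical_ebitda = revenue * 2 // 100     # 2% margin -> $500,000
    expected_ebitda = revenue * 5 // 100       # 5% margin -> $1,250,000
    potential_adjustment = expected_ebitda - historical_ebitda   # $750,000 to investigate
    print(historical_ebitda, expected_ebitda, potential_adjustment)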

Conclusion

There are many other issues a valuation expert faces in divorce matters; however, the issues presented here were top of mind for us because they were present in recent engagements. Valuation can be complex. Serving as an expert witness can be challenging as well. However, having an expert with valuation expertise and experience is an advantageous combination in divorce matters. In future articles, we’ll discuss other issues of importance to hopefully help you become a more knowledgeable user of valuation services. In the meantime, if you have a valuation or financial forensics issue, feel free to contact us to discuss it in confidence.

Originally published in Mercer Capital’s Tennessee Family Law Newsletter, Third Quarter 2019.

Community Bank Valuation (Part 4): Valuing Minority Interests

In the June 2019 BankWatch we began a multi-part series exploring the valuation of community banks. The first segment introduced key valuation drivers: various financial metrics, growth, and risk. The second and third editions described the analysis of bank and bank holding company financial data with an emphasis on gleaning insights that affect the valuation drivers. We now conclude our series by assembling these pieces into the final product, a valuation of a specific bank.

While it would streamline the valuation process, there is no single value for a bank that is applicable to every conceivable scenario giving rise to the need for a valuation. Instead, valuation is context dependent. This edition of the series focuses on the valuation of minority interests in banks, which do not provide the ability to dictate control over the bank’s operations. The next edition focuses on valuation considerations applicable to controlling interests in banks that arise in acquisition scenarios.

Valuation Approaches

Valuation specialists identify three broad valuation approaches within which several valuation methods exist:

  1. The Asset Approach develops a value for a bank’s common equity based on the difference between its assets and liabilities, both adjusted to market value. This approach is less common in practice, given analysts’ focus on banks’ earnings capacity and market pricing data. In theory, a rigorous application of the asset approach would require determining the value of the bank’s intangible assets, such as its customer relationships, which introduces considerable complexity.
  2. The Market Approach provides indications of value by reference to actual transactions involving securities issued by comparable institutions. The obvious advantage of this approach is the coherence between the goal of the valuation itself (the derivation of market value) and the data used (market transactions). The disadvantage, though, is that perfectly comparable market data seldom exists. While we will not cover the topic in this article, transactions in the subject bank’s common stock, which often occur for privately held banks due to their frequently widespread ownership and stature in the community, may serve as another indication of value under the market approach.
  3. The Income Approach includes several methods that convert a cash flow stream (such as earnings or dividends) into a value. Two broad subsets of the income approach exist – single period capitalization methods and discounted cash flow methods. For bankers, a single period capitalization is analogous to a net operating income capitalization in a real estate appraisal; it requires an earnings metric and a capitalization multiple. Alternatively, bank valuations often use projection-based methodologies that convert a future stream of benefits into a value. The strengths and weaknesses of a projection-based methodology derive from a commonality – it requires a forecast of future performance. While creating such a forecast is consistent with the forward-looking nature of investor returns, predicting the future is, as they say, difficult.

The following discussion focuses on the valuation methodologies used most commonly for banks, the comparable company method and the discounted cash flow method.

Comparable Company Method

Bank analysts are awash in data, regarding both banks’ financial performance and the market pricing of publicly traded banks. Table 1 presents a breakdown by trading market of the number of listed banks in November 2019.

To narrow this surfeit of comparable company data, analysts often screen the publicly traded bank universe based on characteristics such as the following:

  • Size, such as total assets or market capitalization
  • Profitability, such as return on assets or return on equity
  • Location
  • Asset quality
  • Revenue mix, such as the proportion of revenue from loan sales or asset management fees
  • Balance sheet composition, such as the proportion of loans or dependence on wholesale funding
  • Trading market or volume

Even after applying screens similar to the preceding, it remains doubtful that the publicly traded banks will exactly mirror the subject bank’s characteristics. This is especially true when valuing smaller community banks, as a relatively limited number of publicly traded banks exist with assets of less than $500 million that trade in more liquid markets. Ultimately, the analyst must determine an appropriate valuation multiple based on the subject bank’s perceived growth opportunities and risk attributes relative to the public companies. For example, analysts can compare the subject bank’s historical and projected EPS growth rates against the public companies’ EPS growth rates, with a materially lower growth outlook for the subject bank suggesting a lower pricing multiple.

Part 1 of this community bank valuation series described various valuation metrics applicable to banks, most prominently earnings and tangible book value. It is important to reiterate that while bankers and analysts often reference price/tangible book value multiples, the earning power of the institution drives its value. Chart 1 illustrates this point, showing that price/tangible book value multiples rise along with the core return on tangible common equity. This chart includes banks traded on the NASDAQ, NYSE, or NYSEAM with assets between $1 and $10 billion.

Since banking is a more mature industry, bank price/earnings multiples tend to vary within a relatively tight range. Chart 2 provides some perspective on historical price/earnings and price/tangible book value multiples, which includes banks traded on the NASDAQ, NYSE, or NYSEAM with assets between $1 and $10 billion and a return on core tangible common equity between 5% and 15%. Trading multiples in the first several years of the analysis may be distorted by recessionary conditions, while the multiples reported for 2016 and 2017 were exaggerated by optimism regarding the potential, at that time, for tax and regulatory reform. The diminished multiples at year-end 2018 and September 30, 2019 reflect a challenging interest rate environment, marked by a flat to inverted yield curve, and the possibility of rising credit losses in a cooling economy.

Discounted Cash Flow Method

The discounted cash flow (DCF) method relies upon three primary inputs:

  • A projection of cash flows distributable to investors over a finite time period
  • A terminal, or residual, value representing the value of all cash flows occurring after the end of the finite forecast period
  • A discount rate to convert the discrete cash flows and terminal value to present value

1. Cash Flow

First, a few suggestions regarding projections:

  • For a financial institution, projecting an income statement without a balance sheet usually is inadvisable, as this obscures important linkages between the two financial statements. For example, the bank’s projected net interest income growth may require a level of loan growth not permitted by the bank’s capital resources.
  • Including a roll-forward of the loan loss reserve illustrates key asset quality metrics, such as the ratios of loan charge-offs to loans and loan loss reserves to loans. The level of charge-offs should be assessed against the bank’s historical performance and the economic outlook.
  • Key financial metrics, both for the balance sheet and income statement, should be assessed against the bank’s historical performance and peer banks.
  • While projections can be prepared on a consolidated basis, we prefer developing separate projections for the bank and its holding company. This makes explicit the relationships between the two entities, such as the holding company’s reliance on the bank for cash flow. For leveraged holding companies, a sources and uses of funds schedule is useful.

In preparing a DCF analysis for a bank, the most meaningful cash flow measure is distributable tangible equity. The analyst sets a threshold ratio of tangible common equity/tangible assets or another regulatory capital ratio based on management’s expectations, regulatory requirements, and/or peer and publicly traded comparable company levels. Equity generated by the bank above this target level is assumed to be distributed to the holding company. After determining the holding company’s expenses and debt service requirements, the remaining amount represents shareholder cash flow, which then is captured in the DCF valuation analysis.
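
A minimal sketch of this mechanic is shown below, assuming a hypothetical bank that targets a tangible common equity / tangible assets ratio of 8.5%; the figures and the simple one-year roll-forward are illustrative only.

    # One-year roll-forward of distributable equity for a hypothetical bank.
    def distributable_equity(begin_tce, net_income, begin_assets, asset_growth, target_tce_ratio):
        """Return (distribution to the holding company, ending TCE) for one projection year."""
        end_assets = begin_assets * (1 + asset_growth)
        required_tce = end_assets * target_tce_ratio          # capital the bank must retain
        available_tce = begin_tce + net_income                # before any distribution
        distribution = max(available_tce - required_tce, 0.0) # only excess capital is paid out
        return distribution, available_tce - distribution

    # Hypothetical: $90M TCE, $12M earnings, $1.0B of assets growing 5%, 8.5% target ratio.
    dist, end_tce = distributable_equity(90e6, 12e6, 1_000e6, 0.05, 0.085)
    print(round(dist / 1e6, 2), round(end_tce / 1e6, 2))      # 12.75 89.25 ($ millions)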

2. Discount Rate

For a financial institution, the discount rate represents the entity’s cost of equity. Outside the financial services industry, analysts most commonly employ a weighted average cost of capital (WACC) as the discount rate, which blends the cost of the company’s debt and equity funding. However, banks are unique in that most of their funding comes from deposits, and the cost of deposits does not rise along with the entity’s risk of financial distress (because of FDIC insurance). Therefore, a significant theoretical underpinning for using a WACC – that the cost of debt increases along with the entity’s risk of default – is undermined for a bank. Analytical consistency is created in a DCF analysis by matching a cash flow to equity investors (i.e., dividends) with a cost of equity.

A bank’s cost of equity can be estimated based on the historical excess returns generated by equity investments over Treasury rates, as adjusted by a “beta” metric that captures the volatility of bank stocks relative to the broader market. Analysts may also consider entity-specific risk factors – such as a concentration in a limited geographic market, elevated credit quality concerns, and the like – that serve to distinguish the risk faced by investors in the subject institution relative to the norm for publicly traded banks from which cost of equity data is derived.
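
A simplified build-up along these lines might look like the following Python sketch; the risk-free rate, equity risk premium, beta, and the size and specific-risk premiums shown are all assumptions for illustration rather than figures from any study.

    # Illustrative build-up of a bank's cost of equity (all inputs are assumptions).
    risk_free_rate = 0.02          # long-term Treasury yield
    equity_risk_premium = 0.055    # historical excess return of equities over Treasuries
    beta = 0.90                    # volatility of bank stocks relative to the broader market
    size_premium = 0.015           # increment sometimes added for smaller institutions
    specific_risk_premium = 0.01   # e.g., geographic or credit concentration

    cost_of_equity = (risk_free_rate
                      + beta * equity_risk_premium
                      + size_premium
                      + specific_risk_premium)
    print(f"{cost_of_equity:.2%}")  # 9.45%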

3. Terminal Value

The terminal value is a function of a financial metric at the end of the forecast period, such as net income or tangible book value, and an appropriate valuation multiple. Two techniques exist to determine a terminal value multiple. First, the Gordon Growth Model develops an earnings multiple using (a) the discount rate and (b) a long-term, sustainable growth rate. Second, as illustrated in Chart 2, bank pricing multiples tend to vary within a relatively tight range, and a historical average trading multiple can inform the terminal value multiple selection.
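
A minimal sketch of the first technique, the Gordon Growth Model, is shown below; the final-year cash flow, discount rate, and long-term growth rate are hypothetical.

    # Gordon Growth Model terminal value and implied earnings multiple (inputs hypothetical).
    def terminal_value(final_year_cash_flow, discount_rate, long_term_growth):
        """Capitalize the year following the forecast period into a terminal value."""
        next_year_cash_flow = final_year_cash_flow * (1 + long_term_growth)
        return next_year_cash_flow / (discount_rate - long_term_growth)

    tv = terminal_value(10e6, 0.11, 0.02)   # $10M final-year cash flow, 11% discount, 2% growth
    print(round(tv / 1e6, 1), round(tv / 10e6, 1))  # ~113.3 ($M); ~11.3x implied multiple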

Correlating the Analysis

In most analyses, the values derived using the market and income approaches will differ. Given a range, an analyst must consider the strengths and weaknesses of each indicated value to arrive at a final concluded value. For example, earnings-based indications of value derived using the market approach may be more relevant in “normal” times, as the values are consistent with investors’ orientation towards earnings as the ultimate source of returns (either dividends or capital appreciation). However, in more distressed times when earnings are depressed, indications of value using book value assume more relevance. If a bank has completed a recent acquisition or is in the midst of a strategic overhaul, then the discounted cash flow method may deserve greater emphasis. We prefer to assign quantitative weights to each indication of value, which provide transparency into the process by which value is determined.
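
Assigning explicit weights can be as simple as the sketch below; the value indications and weights are hypothetical, and the analytical judgment lies in selecting them, not in the arithmetic.

    # Illustrative correlation of value with explicit weights (all figures hypothetical).
    indications = {
        "comparable company (earnings)":      (52_000_000, 0.40),
        "comparable company (tangible book)": (48_000_000, 0.20),
        "discounted cash flow":               (55_000_000, 0.40),
    }
    concluded_value = sum(value * weight for value, weight in indications.values())
    print(round(concluded_value))   # 52400000, i.e., about $52.4 million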

Relative Value Analysis

The analysis is not complete, however, when a correlated value is obtained. It is crucial to compare the valuation multiples implied by the concluded value, such as the effective price/earnings and price/tangible book value multiples, against those reported by publicly traded banks. Any divergences should be explainable. For example, if the bank operates in a market with constrained growth prospects, then a lower than average price/earnings multiple may be appropriate. A higher return on equity for a subject bank, relative to the comparable companies, often results in a higher price/tangible book value multiple. As another reference point, the effective pricing multiples may be benchmarked against bank merger and acquisition pricing to ensure that an appropriate relationship exists between the subject minority interest value and a possible merger value.

Conclusion

There are many valuation issues that remain untouched by this article in the interest of brevity, such as the valuation treatment of S corporations and the discount for lack of marketability applicable to minority interests in banks with no active trading market. Instead, this article addresses issues commonly faced in valuing minority interests in any community bank. A well-reasoned valuation of a community bank requires understanding the valuation conventions applicable to banks, such as pricing multiples commonly employed or the appropriate source of cash flow in a DCF analysis, but within a risk and growth framework that underlies the valuation of all equity instruments. Relating these valuation parameters to a comprehensive analysis of a bank’s financial performance, risk factors, and strategic outlook results in a rigorous and convincing determination of value. In the next edition, we will move beyond the valuation of minority interests in banks, focusing on specific valuation nuances that arise when engaging in a valuation for merger purposes.


Originally published in Bank Watch, November 2019.

Kress v. U.S.

Scott A. Womack, ASA, MAFF, Senior Vice President, originally presented the session “Will Kress v. U.S. Change Your Life? Or Will It Change Your Valuation Practice?” at the Forensic and Valuation Services Conference hosted by the Tennessee Society of CPAs on October 23, 2019.

Should Kress be immediately considered as support for tax-affecting earnings of a pass-through entity? In this session, Scott Womack tackles that question by taking a deep dive into the case including viewpoints from the participating experts, presenting the views of other commentators, and discussing the appeal by the IRS and its withdrawal.

Key learning objectives include:

  • Understand the key issues of Kress and their importance to your practice
  • Review the arguments regarding tax-affecting vs. not tax-affecting
  • Gain knowledge of the range of opinions regarding Kress
  • Understand how Kress impacts the valuation of pass-through entities

 

Critical Issues in Valuation & Family Transitions | Auto Dealers

Scott A. Womack, ASA, MAFF, Senior Vice President, originally presented the session “Critical Issues in Business Valuations and Family Transitions for Auto Dealers” at the 2019 Lane Gorman Trubitt Controllers’ Roundtable on October 17, 2019.

Auto dealers, like most business owners, can actually influence the value of their store.  What are some of the value drivers of a store valuation and areas that appraisers adjust for in business valuations?  Mr. Womack presents on these topics and discusses key elements of buy-sell agreements and other family transition issues that he observes from his auto dealer practice.

 

2019 Core Deposit Intangibles Update

In our annual update of core deposit trends published a year ago, we described an increasing trend in core deposit intangible asset values in light of rising interest rates. At the time several more short-term rate hikes by the Fed were expected during 2018 and 2019. However, the equity and high yield credit markets disagreed as both fell sharply during the fourth quarter in anticipation of the December rate hike that the Fed later implemented.

A year later the Fed has cut three times in 2019 and thereby erased three of the four hikes it implemented in 2018. As 2019 unfolded, intermediate- and long-term U.S. Treasury rates declined through August 2019 from what appear to be cycle highs reached in November 2018. As a result, the U.S. Treasury curve inverted, with short rates that are closely tied to the Fed’s policy rates exceeding intermediate- and long-term rates. By late August, 3-month bills yielded about 50bps more than the 10-year bond. Also, the spread between 10-year and 2-year Treasuries, commonly cited as an indicator of impending recessions when negative, was nominally negative. During October, intermediate- and long-term rates rose modestly in anticipation of the third Fed rate cut supporting economic growth and thereby flattened the curve.

Alongside these fluctuations in the interest rate environment, the banking industry has seen increasing competition for deposits in recent years. Improved loan demand in the post-recession period has led to greater funding needs, while competition from traditional banking channels has been compounded by the increased prevalence of online deposit products, often offering higher rates. All of these trends have combined to make strong core deposit bases increasingly valuable in bank acquisitions in the post-recession years. One question to ponder, however, is how much the value attributable to core deposits may ease given the reduction in rates that has occurred recently.

Using data compiled by S&P Global Market Intelligence, we analyzed trends in core deposit intangible (CDI) assets recorded in whole bank acquisitions completed from 2000 through September 2019. CDI values represent the value of the depository customer relationships obtained in a bank acquisition. CDI values are driven by many factors, including the “stickiness” of a customer base, the types of deposit accounts assumed, and the cost of the acquired deposit base compared to alternative sources of funding.

For our analysis of industry trends in CDI values, we relied on S&P Global Market Intelligence’s definition of core deposits.1 In analyzing core deposit intangible assets for individual acquisitions, however, a more detailed analysis of the deposit base would consider the relative stability of various account types. In general, CDI assets derive most of their value from lower-cost demand deposit accounts, while often significantly less (if not zero) value is ascribed to more rate-sensitive time deposits and public funds, or to non-retail funding sources such as listing service or brokered deposits which are excluded from core deposits when determining the value of a CDI.

Trends in CDI Values

Figure 2 summarizes the trend in CDI values since the start of the 2008 recession, compared with rates on 5-year FHLB advances. Over the post-recession period, CDI values have largely followed the general trend in interest rates. As alternative funding became more costly during 2017 and 2018, CDI values generally ticked up as well, relative to post-recession average levels. During 2019, the trend reversed, and CDI values have declined in light of the yield curve inversion and the Fed’s rate cuts at its last three meetings.

This decline in CDI values has been somewhat slower than the drop in benchmark interest rates, however, in part because deposit costs typically lag broader movements in market interest rates. In general, banks were slow to raise deposit rates in the period of contractionary monetary policy through 2018 and, as a result, rates remain below benchmark levels, leaving banks less room to reduce rates further. For CDs, the lagging trend is even more pronounced given their nature as time deposits. Many banks attempted to “lock in” rates by increasing reliance on CDs when expectations were for continued rate increases as late as year-end 2018. Now that rates are on the decline, banks have been stuck with CDs that cannot be repriced until their maturities even as benchmark rates fall. While time deposits typically are not considered “core deposits” in an acquisition and thus would not directly influence CDI values, they do significantly influence a bank’s overall cost of funds, and while funding costs remain high, a strong core deposit base remains a valuable asset to acquirers.

Even as CDI assets remain above post-recession average levels at approximately 2.0-2.5%, they are still below long-term historical levels, which averaged closer to 2.5-3.0% in the early 2000s.

Accounting for CDI Assets

Based on the data for acquisitions for which core deposit intangible detail was reported, a majority of banks selected a ten-year amortization term for the CDI values booked. Less than 10% of transactions for which data was available selected amortization terms longer than ten years. Amortization methods were somewhat more varied, but an accelerated amortization method was selected in more than half of these transactions.
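
As an illustration of how an accelerated method front-loads expense relative to straight-line amortization, the sketch below applies a sum-of-the-years'-digits convention to a hypothetical $2.5 million CDI over ten years; the method and figures are assumptions for illustration, not a statement of what any particular acquirer recorded.

    # Sum-of-the-years'-digits amortization of a hypothetical $2.5M CDI over 10 years.
    def syd_amortization(cdi_value, years=10):
        """Return annual amortization amounts under an accelerated (front-loaded) schedule."""
        digits_sum = years * (years + 1) / 2                  # 55 for a ten-year life
        return [cdi_value * (years - t) / digits_sum for t in range(years)]

    schedule = syd_amortization(2_500_000)
    print([round(x) for x in schedule[:3]], round(sum(schedule)))
    # first three years ~[454545, 409091, 363636]; total amortization 2500000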

Trends in Deposit Premiums Relative to CDI Asset Values

Core deposit intangible assets are related to, but not identical to, deposit premiums paid in acquisitions. While CDI assets are an intangible asset recorded in acquisitions to capture the value of the customer relationships the deposits represent, deposit premiums paid are a function of the purchase price of an acquisition. Deposit premiums in whole bank acquisitions are computed based on the excess of the purchase price over the target’s tangible book value, as a percentage of the core deposit base. While deposit premiums often capture the value to the acquirer of assuming the established funding source of the core deposit base (that is, the value of the deposit franchise), the purchase price also reflects factors unrelated to the deposit base, such as asset quality in the acquired loan base, unique synergy opportunities anticipated by the acquirer, and so on. These additional factors may influence the purchase price to such an extent that the calculated deposit premium does not necessarily bear a strong relationship to the value of the core deposit base to the acquirer. This influence is often less relevant in branch transactions, where the deposit base is the primary driver of the transaction and the relationship between the purchase price and the deposit base is more direct.
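
The deposit premium arithmetic described above can be expressed directly; the deal figures in the sketch below are hypothetical.

    # Deposit premium in a hypothetical whole bank acquisition.
    purchase_price = 130_000_000
    tangible_book_value = 100_000_000
    core_deposits = 400_000_000

    deposit_premium = (purchase_price - tangible_book_value) / core_deposits
    print(f"{deposit_premium:.1%}")   # 7.5%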

Deposit premiums paid in whole bank acquisitions have shown more volatility than CDI values. Despite improved deal values in recent years, current deposit premiums in the high single digits remain well below the pre-financial crisis levels when premiums for whole bank acquisitions averaged closer to 20%.

Deposit premiums paid in branch transactions have generally been less volatile than tangible book value premiums paid in whole bank acquisitions. Branch transaction deposit premiums have averaged in the 5.5%-7.5% range during 2019, up from the 2.0-4.0% range observed in the financial crisis, and have continued to rise in recent quarters in light of increasing deposit competition.

For more information about Mercer Capital’s core deposit valuation services, please contact us.


Originally published in Bank Watch, October 2019.

Valuation Issues in Auto Dealer Litigation

In our family law and commercial litigation practice, we often serve as expert witnesses in auto dealership valuation disputes. We hope you never find yourself a party to a legal dispute; however, we offer the following words of wisdom based upon our experience working in these valuation-related disputes. The following topics, posed as questions, have been points of contention or common issues that have arisen in recent disputes. We present them here so that if you are ever party to a dispute, you will be a more informed user of valuation and expert witness services.           

Should Your Expert Witness be a Valuation or Industry Expert?

Oftentimes, the financial and business valuation portion of a litigation is referred to as a “battle of the experts” because there are at least two valuation experts, one for the plaintiff and one for the defendant. In the auto dealer world, valuation expertise should ideally be combined with knowledge of a highly specialized industry. It is critical to engage an expert who is both a valuation expert and an industry expert – one who holds valuation credentials and has deep valuation knowledge and who also understands and employs accepted industry-specific valuation techniques. Look with caution upon valuation experts with minimal industry experience who utilize general valuation methodologies often reserved for other industries (for example, Discounted Cash Flow (DCF) or multiples of Earnings Before Interest, Taxes, Depreciation, and Amortization (EBITDA)) with no discussion of Blue Sky multiples.

Does the Appraisal Discuss Local Economic Conditions and Competition Adequately?

The auto industry, like most industries, is dependent on the climate of the national economy. Additionally, auto dealers can be dependent on or affected by conditions that are unique to their local economy. The type of franchise relative to the local demographics can also have a direct impact on the success/profitability of a particular auto dealer. For example, a luxury or high-line franchise in a smaller or poorer market would not be expected to fare as well as one in a market that has a larger and wealthier demographic.

In those areas that are dependent on a local economy/industry, an understanding of that economy/industry becomes just as important as an understanding of the overall auto dealer industry and national economy.  Common examples are local markets that are home to a military base, oil & gas markets in Western Texas or natural gas in Pennsylvania, or fishing industries in coastal areas. There’s also a fine balance between understanding and acknowledging the impact of that local economy without overstating it.  Often some of the risks of the local economy are already reflected in the historical operating results of the dealership.

If There Are Governing Corporate Documents, What Do They Say About Value, and Should They Be Relied Upon?

Many of the corporate entities involved in litigation have sophisticated governance documents that include Operating Agreements, Buy-Sell Agreements, and the like. These documents often contain provisions to value the stock or entity through the use of a formula or process.  Whether or not these agreements are to be relied upon in whole or in part in a litigated matter is not always clear. In litigated matters, focus will be placed on whether the value concluded from a governance document represents fair market value, fair value, or some other standard of value.  However, the formulas contained in these agreements are not always specific to the industry and may not include accepted valuation methodology for auto dealers.

Two common questions that arise concerning these agreements are:

  1. Has an indication of value ever been concluded using the governance document in the dealership’s history (in other words, has the dealership been valued using the methodology set out in the document)?
  2. Have there been any transactions, buy-ins, or redemptions utilizing the values concluded in a governance document?

These are important questions to consider when determining the appropriate weight to place on a value indication from a governance document.

In some litigation matters (such as divorce), the non-business party to the litigation is not bound by the value indicated by the governance document since they were not a signatory to that particular agreement. It is always important to discuss this issue with your attorney.

Have There Been Prior Internal Transactions of Company Stock and at What Price?

Similar to governance documents, internal transactions are another possible data point in valuing an auto dealership. A good appraiser will always ask if there have been prior transactions of company stock and, if so, how many have occurred, when did they occur, and at what terms did they occur? There is no magic number, but as with most statistics, more transactions closer to the date of valuation can often be considered better indicators of value than fewer transactions further from the date of valuation.

An important consideration in internal transactions is the motivation of the buyer and seller. If there have been multiple internal transactions, appraisers have to determine the appropriateness of which transactions to possibly include and which to possibly exclude in their determination of value. Without an understanding of the motivation of the parties and of the specific facts of the transactions, it becomes trickier to include some, but exclude others.  The more logical conclusion would be to include all of the transactions or exclude all of the transactions with a stated explanation.   

What Do the Owner’s Personal Financial Statements Say and Are They Important?

Most owners of an auto dealership have to submit personal financial statements as part of the guarantee on the floor plan and other financing.  The personal financial statement includes a listing of all of the dealer’s assets and liabilities, typically including some value assigned to the value of the dealership. In litigated matters, these documents are important as another data point to valuation.

One view of the value placed on a dealership in an owner’s personal financial statement is that no formal valuation process was used to determine that number; so, at best, it’s a thumb in the air, blind estimate of value. The opposing view is that the individual submitting the personal financial statement is attesting to the accuracy and reliability of the financial figures contained in the document under penalty of perjury. Further, some would say that the value assigned to the dealership has merit because the business owner is the most informed person regarding the business, its future growth opportunities, competition, and the impact of economic and industry factors on the business.

For an appraiser, it’s not a good situation to be surprised by the existence of these documents. A good business appraiser will always ask for them.  The dealership value indicated in a personal financial statement should be viewed in light of value indications under other methodologies and sources of information.  At a minimum, personal financial statements may require the expert to ask more questions or use other factors, such as the national and local economy, to explain any difference in values over time.      

Does the Appraiser Understand the Industry and How to Use Comparable Industry Profitability Data?

The auto dealer industry is highly specialized and unique and should not be compared to general retail or manufacturing industries.  As such, any sole comparison to general industry profitability data should be avoided. 

If your appraiser solely uses the Annual Statement Studies provided by the Risk Management Association (RMA) as a source of comparison for the balance sheet and income statement of your dealership to the industry, this is problematic.  RMA’s studies are organized by the North American Industry Classification System (NAICS).  Typical new and used retail auto dealers would fall under NAICS #441110 or #441120. This general data does not distinguish between different franchises.

Is there better or more specialized data available? Yes. The National Automobile Dealers Association (NADA) publishes monthly Dealership Financial Profiles broken down by Average Dealerships, which would be comparable to RMA data. However, NADA drills down further, segmenting the industry into the following four categories: Domestic Dealerships, Import Dealerships, Luxury Dealerships, and Mass Market Dealerships. While no single comparison is perfect, an appraiser should know to consult more specific industry profitability data when available.

Do You Understand Actual Profitability vs. Expected Profitability and Why It’s Important?

Either through an income or Blue Sky approach, auto dealers are typically valued based upon expected profitability rather than the actual profitability of the business.

The difference between actual and expected profitability generally consists of normalization adjustments. Normalization adjustments are adjustments made for any unusual or non-recurring items that do not reflect normal business operations. During the due diligence interview with management, an appraiser should ask whether the dealership has non-recurring or discretionary expenses and whether personal expenses of the owner are being paid by the business. Comparing the dealership to industry profitability data, as discussed earlier, can help the appraiser understand the degree to which the dealership may be underperforming.

An example of how normalizing adjustments work is helpful. If a dealership has historically reported 2% earnings before taxes (EBT) and the NADA data suggests 5%, the financial expert must analyze why there is a difference between these two data points and determine if there are normalizing adjustments to be applied. Let’s use some numbers to illustrate this point. For a dealership with revenue of $25 million, historical profitability at 2% would suggest EBT of $500,000. At 5%, expected EBT would be $1,250,000, an increase of $750,000. In this case, the financial expert should analyze the financial statements and the dealership to determine if normalization adjustments are appropriate which, when made, will reflect a more realistic figure for the expected profitability of the dealership without non-recurring or personal owner expenses. This is important because, hypothetically, a new owner could optimize the business and eliminate some of these expenses; therefore, even dealerships with a history of negative or lower earnings can receive higher Blue Sky multiples because a buyer believes they can improve the performance of the dealership. However, as noted earlier, the dealership may be affected by the local economy and other issues that cannot be fixed, so the lower historical EBT may be justified.
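To make the arithmetic concrete, the minimal sketch below (in Python, using only the hypothetical figures from the example above) computes the gap between reported and benchmark EBT that the appraiser must explain.

```python
# Hypothetical illustration of the normalization gap described above.
revenue = 25_000_000          # dealership revenue
reported_ebt_margin = 0.02    # historically reported EBT margin (2%)
benchmark_ebt_margin = 0.05   # NADA-style industry benchmark (5%)

reported_ebt = revenue * reported_ebt_margin     # $500,000
expected_ebt = revenue * benchmark_ebt_margin    # $1,250,000
gap = expected_ebt - reported_ebt                # $750,000 to be explained

# The gap is not automatically a normalization adjustment; it is the amount the
# appraiser must explain via non-recurring items, owner discretionary expenses,
# or legitimate local-market underperformance.
print(f"Reported EBT:   ${reported_ebt:,.0f}")
print(f"Expected EBT:   ${expected_ebt:,.0f}")
print(f"Gap to explain: ${gap:,.0f}")
```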

For more information on normalizing adjustments, see our article Automobile Dealership Valuation 101.

Conclusion

The valuation of automobile dealerships can be complex. A deep understanding of the industry along with valuation expertise is the optimal combination for general valuation needs and certainly for valuation-related disputes. If you have a valuation issue, feel free to contact us to discuss it in confidence.

 

Originally published in the Value Focus: Auto Dealer Industry Newsletter, Mid-Year 2019.

Five Trends to Watch in the Medical Device Industry

The medical device manufacturing industry produces equipment designed to diagnose and treat patients within global healthcare systems.  Medical devices range from simple tongue depressors and bandages, to complex programmable pacemakers and sophisticated imaging systems.  Major product categories include surgical implants and instruments, medical supplies, electro-medical equipment, in-vitro diagnostic equipment and reagents, irradiation apparatuses, and dental goods.

The following outlines five structural factors and trends that influence demand and supply of medical devices and related procedures.

1.    Demographics

The aging population, driven by declining fertility rates and increasing life expectancy, represents a major demand driver for medical devices.  The U.S. elderly population (persons aged 65 and above) totaled 49 million in 2016 (15% of the population).   The U.S. Census Bureau estimates that the elderly will roughly double by 2060 to 95 million, representing 23% of the total population.

The elderly account for nearly one third of total healthcare consumption.  Personal healthcare spending for the population segment was $19,000 per person in 2014, five times the spending per child ($3,700) and almost triple the spending per working-age person ($7,200).

According to United Nations projections, the global elderly population will rise from approximately 607 million (8.2% of world population) in 2015 to 1.8 billion (17.8% of world population) in 2060.  Europe’s elderly are projected to reach approximately 29% of the population by 2060, making it the world’s oldest region.  While Latin America and Asia are currently relatively young, these regions are expected to undergo drastic transformations over the next several decades, with the elderly population expected to expand from less than 8% in 2015 to more than 21% of the total population by 2060.

2.    Healthcare Spending and the Legislative Landscape in the U.S.

Demographic shifts underlie the expected growth in total U.S. healthcare expenditure from $3.5 trillion in 2017 to $6.0 trillion in 2027, an average annual growth rate of 5.5%. While this projected average annual growth rate is more modest than that of 7.0% observed from 1990 through 2007, it is more rapid than the observed rate of 4.3% between 2008 and 2017.  Projected growth in annual spending for Medicare (7.9%) is expected to contribute substantially to the increase in national health expenditure over the coming decade.  Healthcare spending as a percentage of GDP is expected to expand from 17.9% in 2017 to 19.4% by 2027.
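As a quick arithmetic check, and assuming the 2017 and 2027 figures cited above span ten years, the implied compound annual growth rate can be verified as follows.

```python
# Back-of-the-envelope check of the projected growth rate cited above.
spend_2017 = 3.5e12   # total U.S. healthcare expenditure, 2017
spend_2027 = 6.0e12   # projected expenditure, 2027
years = 10

cagr = (spend_2027 / spend_2017) ** (1 / years) - 1
print(f"Implied average annual growth: {cagr:.1%}")  # roughly 5.5%
```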

Since inception, Medicare has accounted for an increasing proportion of total U.S. healthcare expenditures. Medicare currently provides healthcare benefits for an estimated 60 million elderly and disabled people, constituting approximately 15% of the federal budget in 2018. Medicare represents the largest portion of total healthcare costs, constituting 20% of total health spending in 2017. Medicare also accounts for 25% of hospital spending, 30% of retail prescription drug sales, and 23% of physician services.

Owing to the growing influence of Medicare in aggregate healthcare consumption, legislative developments can have a potentially outsized effect on the demand and pricing for medical products and services.  Net mandatory benefit outlays (gross outlays less offsetting receipts) to Medicare totaled $591 billion in 2017, and are expected to reach $1.3 trillion by 2028.

The Patient Protection and Affordable Care Act (“ACA”) of 2010 incorporated changes that are expected to constrain annual growth in Medicare spending over the next several decades, including reductions in Medicare payments to plans and providers, increased revenues, and new delivery system reforms that aim to improve efficiency and quality of patient care and reduce costs.  On a per person basis, Medicare spending is projected to grow at 4.6% annually between 2017 and 2027, compared to 1.5% average annualized growth realized between 2010 and 2017, and 7.3% during the 2000s.

As part of ACA legislation, a 2.3% excise tax was imposed on certain medical devices for sales by manufacturers, producers, or importers. The tax became effective on December 31, 2012, but met resistance from industry participants and policy makers. In late 2015, Congress passed legislation promulgating a two-year moratorium on the tax beginning January 2016. In January 2018, the moratorium suspending the medical device excise tax was extended through 2019.

3.    Third-Party Coverage and Reimbursement

The primary customers of medical device companies are physicians (and/or product approval committees at their hospitals), who select the appropriate equipment for consumers (patients). In most developed economies, the consumers themselves are one (or more) steps removed from interactions with manufacturers, and therefore from the pricing of medical devices. Device manufacturers ultimately receive payments from insurers, who usually reimburse healthcare providers for routine procedures (rather than for specific components like the devices used). Accordingly, medical device purchasing decisions tend to be largely disconnected from price.

Third-party payors (both private and government programs) are keen to reevaluate their payment policies to constrain rising healthcare costs.  Several elements of the ACA are expected to limit reimbursement growth for hospitals, which form the largest market for medical devices. Lower reimbursement growth will likely persuade hospitals to scrutinize medical purchases by adopting i) higher standards to evaluate the benefits of new procedures and devices, and ii) a more disciplined price bargaining stance.

The transition of the healthcare delivery paradigm from fee-for-service (FFS) to value models is expected to lead to fewer hospital admissions and procedures, given the focus on cost-cutting and efficiency.  In 2015, the Department of Health and Human Services (HHS) announced goals to have 85% and 90% of all Medicare payments tied to quality or value by 2016 and 2018, respectively, and 30% and 50% of total Medicare payments tied to alternative payment models (APM) by the end of 2016 and 2018, respectively.  A report issued by the Health Care Payment Learning & Action Network (LAN), a public-private partnership launched in March 2015 by HHS, found that 34% of payments were tied to APMs, a 5% increase from 2016 to 2017.

Some expressed concern that the shift toward value-based care would encounter difficulties with the current administration.  In November 2017, the CMS partially canceled bundled payment programs for certain joint replacement and cardiac rehabilitation procedures.  However, indications are that the CMS supports value-based care and wants pilot programs to accelerate.  Ultimately, lower reimbursement rates and reduced procedure volume will likely limit pricing gains for medical devices and equipment.

The medical device industry faces similar reimbursement issues globally, as the EU and other jurisdictions face increasing healthcare costs as well. A number of countries have instituted price ceilings on certain medical procedures, which could deflate the reimbursement rates of third-party payors, forcing down product prices. In certain major markets, such as Germany, France, Japan, Taiwan, Korea, China, and Brazil, industry participants are required to report manufacturing costs, and reimbursement rates are potentially set below those figures. Whether third-party payors consider certain devices medically reasonable or necessary for operations presents a hurdle that device makers and manufacturers must overcome in bringing their devices to market.

4.    Competitive Factors and Regulatory Regime

Historically, much of the growth for medical technology companies has been predicated on continual product innovations that make devices easier for doctors to use and improve health outcomes for the patients.  Successful product development usually requires significant R&D outlays and a measure of luck.  However, viable new devices can elevate average selling prices, market penetration, and market share.

Government regulations curb competition in two ways to foster an environment where firms may realize an acceptable level of returns on their R&D investments.  First, firms that are first to the market with a new product can benefit from patents and intellectual property protection giving them a competitive advantage for a finite period.  Second, regulations govern medical device design and development, preclinical and clinical testing, premarket clearance or approval, registration and listing, manufacturing, labeling, storage, advertising and promotions, sales and distribution, export and import, and post market surveillance.

Regulatory Overview in the U.S.

In the U.S., the FDA generally oversees the implementation of the second set of regulations. Some relatively simple devices deemed to pose low risk are exempt from the FDA’s clearance requirement and can be marketed in the U.S. without prior authorization. For the remaining devices, commercial distribution requires marketing authorization from the FDA, which comes primarily in two forms.

  • The premarket notification (“510(k) clearance”) process requires the manufacturer to demonstrate that a device is “substantially equivalent” to an existing device (“predicate device”) that is legally marketed in the U.S. The 510(k) clearance process may occasionally require clinical data and generally takes between 90 days and one year to complete. In November 2018, the FDA announced plans to change elements of the 510(k) clearance process. Specifically, the FDA plan includes measures to encourage device manufacturers to use predicate devices that have been on the market for no more than 10 years. The FDA also announced plans to finalize guidance establishing an alternative 510(k) pathway in early 2019. This alternative pathway would allow manufacturers of certain “well-understood device types” to demonstrate substantial equivalence through objective safety and performance criteria.
  • The premarket approval (“PMA”) process is more stringent, time-consuming and expensive. A PMA application must be supported by valid scientific evidence, which typically entails collection of extensive technical, preclinical, clinical and manufacturing data.  Once the PMA is submitted and found to be complete, the FDA begins an in-depth review, which is required by statute to take no longer than 180 days.  However, the process typically takes significantly longer, and may require several years to complete.

Pursuant to the Medical Device User Fee Modernization Act (MDUFA), the FDA collects user fees for the review of devices for marketing clearance or approval.  The current iteration of the Medical Device User Fee Act (MDUFA IV) came into effect in October 2017.  Under MDUFA IV, the FDA is authorized to collect almost $1 billion in user fees, an increase of more than $320 million over MDUFA III, between 2017 and 2022.

Regulatory Overview Outside the U.S.

The European Union (EU), along with countries such as Japan, Canada, and Australia, operates strict regulatory regimes similar to that of the FDA, and international consensus is moving toward more stringent regulations. Stricter regulations for new devices may slow release dates and may negatively affect companies within the industry.

Medical device manufacturers face a single regulatory body across the EU. In order for a medical device to be allowed on the market, it must meet the requirements set by the EU Medical Devices Directive. Devices must receive a Conformité Européenne (CE) Mark certificate before they are allowed to be sold in that market. This CE marking verifies that a device meets all regulatory requirements, including EU safety standards. Different directives apply to different types of devices, potentially increasing the complexity and cost of compliance.

5.    Emerging Global Markets

Emerging economies are claiming a growing share of global healthcare consumption, including medical devices and related procedures, owing to relative economic prosperity, growing medical awareness, and increasing (and increasingly aging) populations.  As global health expenditure continues to increase, sales to countries outside the U.S. represent a potential avenue for growth for domestic medical device companies.  According to the World Bank, all regions (except Sub-Saharan Africa and South Asia) have seen an increase in healthcare spending as a percentage of total output over the last two decades.

Global medical devices sales are estimated to increase 6.4% annually from 2016 to 2020, reaching nearly $440 billion according to the International Trade Administration. While the Americas are projected to remain the world’s largest medical device market, the Asia/Pacific and Western Europe markets are expected to expand at a quicker pace over the next several years.

Summary

Demographic shifts underlie the long-term market opportunity for medical device manufacturers.  While efforts to control costs on the part of the government insurer in the U.S. may limit future pricing growth for incumbent products, a growing global market provides domestic device manufacturers with an opportunity to broaden and diversify their geographic revenue base.  Developing new products and procedures is risky and usually more resource intensive compared to some other growth sectors of the economy.  However, barriers to entry in the form of existing regulations provide a measure of relief from competition, especially for newly developed products.

Community Bank Valuation (Part 3): Important Relationships Between a Bank and Its Holding Company

The August 2019 BankWatch described key considerations in analyzing the financial statements of banks. However, we did not address one crucial set of relationships – those between a bank holding company (“BHC”) and its subsidiary depository institution.

Most banks are owned by bank holding companies. While investors often state that they own an interest in a bank, this may not be legally precise. Usually, they own a share of stock in a bank holding company, which in turn owns a controlling interest in a subsidiary bank’s common stock. Where a bank holding company exists, this entity’s common stock generally is the subject of valuation analyses.

Part 3 of the Community Bank Valuation series explores important relationships between banks and their holding companies, focusing particularly on cash flow and leverage.

The Holding Company’s Balance Sheet

Compared to a bank’s balance sheet, a holding company’s balance sheet has fewer moving parts. The “left side” of its balance sheet, or its assets, usually is rather boring. The more intriguing analytical question, though, is how the bank holding company finances its investment in the bank. The following table presents a balance sheet for a BHC controlling 100% of the common stock of a bank with $500 million of total assets.

Usually, the holding company’s assets consist virtually entirely of its investment in its subsidiary bank or banks, which equals the bank’s total equity. The investment in the bank is carried at equity, meaning that it increases by the bank’s net income and decreases by dividends paid from the bank to the holding company, among other transactions. Other material assets may include:

  • Cash. BHCs with cash obligations paid at the holding company, such as interest payments or compensation, often will maintain a cash buffer to cover several months of operating expenses. In some cases, BHCs will maintain a larger cash position to react opportunistically if the bank subsidiary needs a capital injection for its growth or to repurchase BHC shares.
  • Other Assets. Non-bank assets typically are relatively modest and consist of investments in other entities (such as an insurance agency), intangible assets related to acquisitions that were not “pushed down” to the subsidiary, or facilities. In periods marked by higher levels of nonperforming assets, BHCs may hold problem assets, which is one strategy to reduce the bank’s classified asset/capital ratio.

Interestingly, BHCs can borrow from banks – just not their bank subsidiary – and other capital providers. If the funds are downstreamed into the bank, the borrowings can be transformed from an instrument not includible in the BHC’s regulatory capital into Tier 1 capital at the bank. In order of seniority, these funding sources include:

  • Bank Stock Loans. These loans are collateralized by the subsidiary bank’s stock and typically are obtained from another bank. As a secured borrowing, these loans generally have a lower cost than other alternatives. However, in the event of a default, the lender can foreclose on their collateral (i.e., the bank stock).
  • Subordinated Debt. After passage of the Dodd-Frank Act and the Basel III capital regulations, subordinated debt became a more prominent funding source, usually for organic growth or acquisitions. Various regulatory requirements govern subordinated debt offerings, but most community bank placements provide for a ten year term with the interest rate fixed for five years. The securities may be considered Tier 2 capital for the holding company.
  • Trust Preferred Securities (“TruPS”). TruPS were created in the 1990s to combine the Tier 1 capital treatment of preferred stock with the tax deductibility of debt. Rightly or wrongly, this instrument was viewed negatively by some regulators after the financial crisis, and the Basel III regulations effectively nullified new issuances. Many BHCs still hold grandfathered TruPS, though, which often do not mature until the 2030s. TruPS generally have interest rates that float with LIBOR, are subordinated to all other BHC obligations, and provide the issuer the right to defer payments for up to five years without triggering a default. TruPS count as Tier 1 capital for BHCs with under $15 billion in assets that are considered to be “large” BHCs, i.e., those that file Y-9LP and Y-9C reports with the Federal Reserve.

A BHC’s equity usually consists almost entirely of common stock, which generally must be the principal form of capitalization under BHC regulations. However, BHCs can issue preferred stock, and regulations view non-cumulative, perpetual preferred stock most favorably.

Analytical Considerations

Why do holding companies exist? First, they provide an efficient way to raise funds that can be injected as capital into the bank, thereby accommodating its organic growth. Second, they can facilitate acquisitions. Third, BHCs can more efficiently conduct shareholder transactions, such as repurchases.

By using leverage, a BHC can enhance the bank’s stand-alone return on equity (or exacerbate the ROE pressure arising from adverse financial scenarios). As indicated in Table 2, BHC leverage magnifies the subsidiary bank’s 12.0% ROE to 12.9% after considering the cost of the BHC’s debt.

As for a non-financial company, too much leverage can mean that the beneficial effect to shareholders of a higher ROE is swamped by the additional risk of financial distress. Various metrics exist to measure the holding company’s leverage, but one is the “double leverage” ratio, which is calculated as the investment in the bank subsidiary divided by the BHC’s equity. As indicated in Table 1, the BHC’s ratio is 113%, which is consistent with the median reported by all smaller BHCs at June 30, 2019 (112%, excluding some BHCs for which the BHC’s equity exceeds the bank investment).
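The tables referenced above are not reproduced here, but the mechanics can be sketched with hypothetical figures chosen to be consistent with the 113% double leverage ratio and the 12.0% to 12.9% ROE uplift cited; the BHC equity amount and after-tax borrowing cost below are assumptions for illustration only.

```python
# Hypothetical BHC balance sheet, chosen to be consistent with the figures cited
# above (113% double leverage; bank ROE of 12.0% magnified to roughly 12.9%).
bhc_equity = 50.0                                    # BHC common equity ($ millions), assumed
double_leverage = 1.13                               # investment in bank / BHC equity
investment_in_bank = bhc_equity * double_leverage    # bank-level equity funded by the BHC
bhc_debt = investment_in_bank - bhc_equity           # portion funded with holding company debt
after_tax_debt_cost = 0.05                           # assumed after-tax cost of BHC borrowings

bank_roe = 0.12
bank_net_income = investment_in_bank * bank_roe
bhc_net_income = bank_net_income - bhc_debt * after_tax_debt_cost
bhc_roe = bhc_net_income / bhc_equity

print(f"Double leverage ratio: {double_leverage:.0%}")
print(f"Bank ROE: {bank_roe:.1%}  ->  BHC ROE: {bhc_roe:.1%}")
```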

Cash Flow

Unfortunately, BHC regulatory filings and audited financial statements do not provide a sources and uses of funds schedule, although some cash flow data is provided. Nevertheless, understanding the BHC’s obligations, and the cash required to service those obligations, is essential.

Sources of funds consist principally of the following:

  • Dividends from the bank subsidiary. The depth of this source of cash flow should be evaluated in light of the bank’s profitability, capital levels, and growth opportunities.
  • Debt issuances
  • Common stock sales
  • Intercompany payments. For example, the bank may reimburse the holding company for certain expenses paid by the BHC. Additionally, banks and BHCs often have tax-sharing arrangements. If the holding company incurs expenses, then it may realize an offsetting tax benefit.

Uses of funds include the following:

  • Debt service
  • Shareholder dividends
  • Share repurchases
  • Operating expenses. Expenses such as compensation, directors’ fees, and certain insurance premiums may be recorded by the holding company.

Analysts should compare a bank’s ability to pay dividends, given its profitability level and need to retain earnings to fund its growth, against the BHC’s various claims on cash. Mismatches can sometimes arise due to changes in the bank’s performance or operating strategy. For example, consider a BHC that historically has paid high dividends to shareholders. If its subsidiary bank adopts a new strategic plan focused on organic growth, then the bank will need to retain earnings rather than pay dividends to the BHC and, ultimately, BHC shareholders. Additional borrowings could fund a short-term gap, but this is not a long-term solution to a BHC cash flow mismatch.

Two other special circumstances arise when analyzing BHC cash flow:

  • Acquisitions. Prior to entering into a transaction, the BHC’s plan for funding any cash consideration should evaluate the availability and desirability of dividends from the bank, debt offerings, and stock sales. Further, the cash acquired from the target BHC may provide another source of transaction funding.
  • S Corporations. Shareholders in an S corporation rely on the BHC for distributions to offset their pass-through tax liability, while the BHC in turn relies on the bank for dividends to fund those tax payments. There are no special capital rules at the bank level that provide flexibility regarding the payment of dividends to offset BHC shareholders’ tax liability when other restrictions on dividends may exist. That is, C corporation and S corporation banks face the same capital regulations. Boards of S corporations may desire to operate, at the margin, with a greater capital buffer to avoid a situation where the shareholders have taxable income but the BHC is unable to make distributions.

Capital

Capital requirements for BHCs vary based upon their asset size. Under current regulations, BHCs with assets below $3.0 billion are subject to the Federal Reserve’s Small Bank Holding Company Policy Statement. This regulation does not establish any specific minimum capital ratios for small BHCs; however, a debt/equity ratio limitation exists for debt arising from acquisitions. Therefore, small BHCs have significant flexibility in managing their capital structure, although the Federal Reserve theoretically remains a check on their creativity.

Large BHCs are subject to the Basel III regulations, which involve capital ratios calculated based on Tier 1 and total capital. Tier 1 capital generally is limited to common equity, non-cumulative perpetual preferred stock, and grandfathered TruPS. In addition to the allowance for loan losses, Tier 2 capital may include subordinated debt. Large BHC management can balance these capital sources to minimize the BHC’s weighted average cost of capital, maintain flexibility for unexpected events or opportunities, and ensure compliance with regulatory expectations.

Conclusion

While the subsidiary bank receives most of the analytical attention, the holding company on a standalone (or parent company) basis should not be overlooked. This is particularly true if the holding company has significant obligations to service debt or pay other expenses. By understanding the linkages between the bank and holding company, analysts can better assess a BHC’s potential future returns to shareholders and risk factors posed by the BHC that could jeopardize those returns.


Originally published in Bank Watch, September 2019.

Context is Important When Considering Transaction Data Relevance

A Look at WeWork’s Failed IPO

In last quarter’s issue of Portfolio Valuation, we raised the question of whether public market investors are more critical (or discerning) in establishing value than private equity investors. The evidence this year largely says yes, at least for companies where there is skepticism as to whether meaningful profitability can be achieved.

Lyft, SmileDirectClub and Uber are examples of unicorns that saw share prices marked sharply lower after the IPO (Lyft, SDC) or during the roadshow (Uber); and The We Company’s planned IPO never occurred due to pushback by investors. At the other extreme is Beyond Meat, which as of early October had risen about six-fold from its May IPO.

The valuation journey of The We Company (formerly “WeWork,” and referred to in this article as WeWork) is interesting (maybe even fascinating).

WeWork, which was founded in 2010, is a real estate company that signs long-term leases for pricey real estate, refurbishes it, and then re-leases the space on a short-term basis. The company describes itself somewhat differently as a “community company committed to maximum global impact.”

The S-1 disclosed not only massive losses, but also significant corporate governance issues.  Year-to-date revenues through June 30, 2019 doubled to $1.5 billion from the comparable period in 2018, but the operating loss also doubled to $1.4 billion.  EBITDA for the six months was negative $511 million, while capex totaled $1.3 billion. 

That is a big hole to fill every six months before factoring in rapid growth to be financed.  Cash as of June 30 totaled $2.5 billion, while the capital structure entails a lot of debt and negative equity. 

From a valuation perspective, WeWork is problematic because operating cash flows are deep in the red with little prospect of turning positive anytime soon. Nonetheless, the increase in value private equity investors placed on the company was astounding. 

The company pierced the unicorn threshold in early 2014 when affiliates of JPMorgan invested $150 million in the fourth funding round at a post-raise valuation of $1.5 billion. T. Rowe Price and Goldman Sachs invested $434 million in late 2014, which resulted in a post-raise valuation of $10 billion.

The 7th and 8th funding rounds are where the valuation really gets interesting.  In August 2017 SoftBank Vision Fund invested $3.1 billion, which implied a valuation of $21 billion.  SoftBank Group Corp., which sponsors the Vision Fund, invested $4.0 billion in January 2019 at an implied valuation of $47 billion. 

When the underwriters were forced to pull the plug on the IPO, the targeted post-raise valuation reportedly was $10 billion to $15 billion—a value the company apparently was willing to accept because it needed the cash.

We do not know exactly how private equity investors valued the company.  Presumably discounted cash flow (DCF), guideline public company and guideline transaction methods were used, perhaps overlaid with a Monte Carlo simulation.   

The valuation history raises an important question: how was a stupendous valuation achieved in the private markets by a cash incinerator such as WeWork? A similar question could be asked about many high-profile PE-backed investments.

The short answer is that SoftBank believed the valuation had increased significantly even though the company’s fundamentals argued otherwise.

Prospective investors such as the public ones who were offered WeWork shares in an IPO could prepare their own DCF forecast to value the company.  They also could examine past transactions in the company for relevant valuation information. 

Likewise, they could examine capital transactions in similar companies.  Both sets of data fall under the guideline transaction method. 

A transaction in a privately held company provides a meaningful data point about value to investors, but there are a couple of caveats. One is the assumption that both parties are fully informed and neither is forced to transact. Great values were realized by those willing to buy during the 2008 meltdown because there were so many forced sellers, running the gamut from levered credit investors forced to dump bonds to the likes of Wachovia Corporation and National City Corporation. The price data was legitimate, but many sellers faced margin calls and had to dump assets into an illiquid market. Is that valuation data relevant once “normal” market conditions prevail?

The second issue relates to private equity valuation generally, but especially those where start-up losses and ongoing capital requirements can be huge.  The valuation issue relates to using transaction data from investments in other money losing enterprises.  Is it always valid to apply multiples paid by investors in a funding round of a money-losing business to value another money-losing business? The valuation data may be factual, but it may be nonsense when weighed against the business’ operating and financial performance.

One can question SoftBank’s motives. Did SoftBank need a higher valuation to offset losses in other parts of the portfolio in order to maintain investor and lender confidence? Was a higher valuation necessary to support upcoming capital raises? We do not know, but prospective public investors were dismissive of SoftBank’s valuations, and they appear to be dismissive of the prior two raises given how low the price talk had fallen by the time the IPO was pulled.

We at Mercer Capital respect markets and the pricing information that is conveyed.  The prices at which assets transact in private and public markets are critical observations; however, so too are a subject company’s underlying fundamentals, especially the ability to produce positive operating cash flow and a return on capital that at least approximates the cost of capital provided.

Mercer Capital can assist with the valuation of your portfolio companies.  We value hundreds of debt and equity securities of privately held companies every year and have been doing so for nearly four decades.  Please call if we can assist in the valuation of your portfolio companies.


Originally published in Portfolio Valuation Newsletter, October 2019.

Valuation Assumptions Influence Valuation Conclusions: How to Understand the Reasonableness of Individual Assumptions and Conclusions

In contested divorces where one or both spouses own a business or a business interest with significant value, it is common for one or both parties to retain a business appraiser to value the marital business interest(s). It is not unusual for the two appraisers’ valuation conclusions to differ significantly, with one well above the other.

What is a client, attorney, or judge to think when significantly different valuation conclusions are present? The answer to the reasonableness of one or both conclusions lies in the reasonableness of the appraisers’ assumptions. However, valuation is more than “proving” that each and every assumption is reasonable. Valuation also involves proving the overall reasonableness of an appraiser’s conclusion.

A short example will illustrate this point and then we can address the issue of individual assumptions. In the following example, we see three potential discount rates and resulting price/earnings (“P/E”) multiples. Let’s assume that for the subject company in this example, there is significant market evidence suggesting that similar companies trade at a P/E in the neighborhood of 10x earnings.

In the figure below, we look at the assumptions used by appraisers to “build” discount rates. We show differing assumptions regarding four of the components, and none of the differing assumptions seems to be too far from the others. So, we vary what are called the equity risk premium (“ERP”), the beta statistic, which is a measure of riskiness, the small stock premium (“SSP”), and company-specific risk.

The left column (showing the low discount rate of 9.6% and a high P/E multiple of 15.2x) would yield the highest valuation conclusion. The right column (showing the high discount rate of 16.6% and the low P/E of 7.4x) would yield a substantially lower conclusion. That range is substantial and results in widely differing conclusions.

However, as stated earlier, market evidence suggests that companies like our example are worth in the range of 10x earnings. In our example, the assumptions leading to a P/E in the range of 10x are found in the middle column.
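The underlying figure is not reproduced here, but the relationship between a built-up discount rate and the implied P/E multiple can be sketched as follows; the 3% long-term growth rate and the 13% middle-column discount rate are assumptions chosen only to roughly reproduce the multiples cited above.

```python
# Minimal sketch of how a built-up discount rate translates into a P/E multiple.
long_term_growth = 0.03  # assumed long-term growth rate

def implied_pe(discount_rate: float, growth: float = long_term_growth) -> float:
    """P/E implied by capitalizing earnings: 1 / (discount rate - growth)."""
    return 1.0 / (discount_rate - growth)

for label, rate in [("Low build-up", 0.096), ("Middle build-up", 0.13), ("High build-up", 0.166)]:
    print(f"{label}: discount rate {rate:.1%} -> implied P/E {implied_pe(rate):.1f}x")
# Low build-up ~15.2x, middle ~10.0x, high ~7.4x
```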

In both cases, appraisers might have made a seemingly convincing argument that each of their assumptions was reasonable and, therefore, that their conclusions were reasonable. However, the proof is in the pudding. Neither the low nor the high example yields a reasonable conclusion when viewed in light of available market evidence.

So, as we discuss how to understand the reasonableness of individual valuation assumptions in divorce-related business appraisals, know also that the valuation conclusions must themselves be proven to be reasonable. That’s why we place a “test of reasonableness” in every Mercer Capital valuation report that reaches a valuation conclusion.

Now, we turn to individual assumptions.

Growth Rates

Growth rates can impact a valuation in several ways. First, growth rates can explain historical or future changes in revenues, earnings, profitability, etc. A long-term growth rate is also a key assumption in determining a discount rate and resulting capitalization rate.

Growth rates, as a measure of historical or future change in performance, should be explained by the events that have occurred or are expected to occur. In other words, an appraiser should be able to explain the specific events that led to a certain growth rate, both in historical financial statements and also in forecasts. Companies experiencing large growth rates from one year to the next should be able to explain the trends that led to the large changes, whether it is new customers, new products being offered, loss of a competitor, an early-stage company ramping up, or other pertinent factors. Large growth rates for an extended period of time should always be questioned by the appraiser as to their sustainability at those heightened levels.

A long-term growth rate is an assumption utilized by all appraisers in a capitalization rate. The long-term growth rate should estimate the annual, sustainable growth that the company expects to achieve. Typically, this assumption is based on a long-term inflation factor plus/minus a few percentage points. Be mindful of any very small, negative, or large long-term growth rate assumptions. If confronted with one, what are the specific reasons for those extreme assumptions?

Annualization

In the course of a business valuation, appraisers normally examine the financial performance of a company for a historical period of around five years, if available. Since business valuations are point-in-time estimates, the date of valuation may not always coincide with a company’s annual reporting period.

Most companies have financial software with the capability to produce a trailing twelve month (“TTM”) financial statement. A TTM financial statement allows an appraiser to examine a full-year business cycle and is not as influenced by seasonality or cyclicality of operations and performance during partial fiscal years. The balance sheet may still reflect some seasonality or cyclicality. Note if the appraiser annualizes a short portion of a fiscal year to estimate an annual result. This practice could inflate or deflate expected results if there is significant seasonality or cyclicality present. At the very least, the annualized results should be compared with historical and expected future results in terms of implied margins and growth.
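A minimal sketch, using hypothetical figures, illustrates how a TTM figure is assembled from annual and interim statements and why it can differ from simply annualizing a stub period.

```python
# Hypothetical illustration: trailing twelve months (TTM) vs. annualizing a stub period.
prior_full_year = 1_200_000            # revenue for the last full fiscal year
prior_year_first_9_months = 850_000    # revenue for the same nine months a year earlier
current_first_9_months = 1_000_000     # revenue for the nine months before the valuation date

# TTM replaces the overlapping nine months of the prior year with the current nine months.
ttm = prior_full_year - prior_year_first_9_months + current_first_9_months

# Simple annualization of the stub period ignores seasonality in the final quarter.
annualized_stub = current_first_9_months * 12 / 9

print(f"TTM revenue:                ${ttm:,.0f}")
print(f"Annualized 9-month revenue: ${annualized_stub:,.0f}")
```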

Forecasts

Depending on the industry or where the company is in its business life cycle, a forecast may be used in the valuation, often in conjunction with the discounted cash flow (“DCF”) method.

Most forecasts are provided to appraisers by company management. While appraisers do not audit financial information provided by companies, including forecasts, the results should not be blindly accepted without verification against the company’s and its industry’s performance.

During the due diligence process, appraisers should ask management if they prepare multiple versions of forecasts. They should also ask for prior years’ forecasts in order to assess how successful management has been in estimations as compared to actual financial results. Be mindful of appraisers that compile the forecasts themselves and make sure there is some discussion of the underlying assumptions.

Divorce Recession

“Divorce recession” is a term to describe a phenomenon that sometimes occurs when a business owner portrays doom and gloom in their industry and for current and future financial performance of the company. As with other assumptions, an appraiser should not blindly accept this outlook.

An appraiser should compare the performance of the company against its historical trends, future outlook, and the condition of the industry and economy, among other factors. Be cautious of an appraisal where the current year or ongoing expectations are substantially lower, or higher for that matter, than historical performance without a tangible explanation as to why.

Industry Conditions

Most formal business valuations should include a narrative describing the current and expected future conditions of the subject company’s industry. An important discussion is how those factors specifically affect the company. There could be reasons why the company’s market is experiencing things differently than the national industry. Industry conditions can provide qualitative reasons why and how the quantitative numbers for the company are changing. Look carefully at business valuations that do not discuss industry conditions or those where the industry conditions are contrary to the company’s trends.

Valuation Techniques Specific to the Subject Company’s Industry

Certain industries have specific valuation methodologies and techniques that are used in addition to general valuation methodologies. Several of these industries include auto dealers, banks, healthcare and medical practices, hotels, and holding companies. It may be difficult for a layperson reviewing a business valuation to know whether the methods employed are general or industry-specific techniques. An attorney or business owner should ask the appraiser how much experience they have performing valuations in a particular industry. Also inquire if there are industry-specific valuation techniques used and how those affect the valuation conclusion.

Risk Factors

Risk factors are all of the qualitative and quantitative factors that affect the expected future performance of a company. Simply put, a business valuation combines the expected financial performance of the subject company (earnings and growth) and its risk factors. Risk factors show up as part of the discount rate utilized in the business valuation.

As with growth rates, there is no textbook that lists the appropriate risk factors for a particular industry or company. However, there is a reasonable range for this assumption.

Be careful of appraisals that have an extreme figure for risk factors. Make sure there is a clear explanation for the heightened risk.

Multiples

Another typical component of a business valuation is the comparison and use of market multiples while utilizing the market approach. Multiples can explain value through revenues, profits, or a variety of performance measures. One critique of market multiples is the applicability of the comparable companies used to determine the multiples. Are those companies truly comparable to the subject company?

Also, how reliable is the underlying comparable company data? Is it dated? How much information on the comparable companies or transactions can be extracted from the source? This critique can be fairly subjective to the layperson.

Another critique could be the range of multiples examined and how they are applied to the subject company. As we have discussed, take note of an appraisal that applies the extreme bottom or top end of the range of multiples, or perhaps even a multiple not in the range. Be prepared to discuss the multiple selected and how the subject company compares to the comparable companies selected.

Time Periods Considered

Earlier we stated that a typical appraisal provides the prior five years of the company’s financial performance, if available. Be cautious of appraisals that use a small sample size, e.g. the latest year’s results, as an estimate of the subject company’s ongoing earnings potential without explanation. The number of years examined should be discussed and an explanation as to why certain years were considered or not considered should be offered.

Some industries have multi-year cycles (further evidence of the importance of a discussion of industry conditions and consideration of recognized industry-specific techniques in the appraisal).

The examination of one year or a few years (instead of five years) can result in a much higher or lower valuation conclusion. If this is the case, it should be explained.

Conclusion

Business valuation is a technical analysis of methodologies used to arrive at a conclusion of value for a subject company. It can be difficult for a client, attorney, or judge to understand the impact of certain individual assumptions and whether or not those assumptions are reasonable. In addition to a review of individual assumptions, the valuation conclusion should be reasonable.

If the divorce case warrants, hire an appraiser to perform a business valuation. If the case or budget does not allow for a formal valuation, it may be helpful to hire an appraiser to review another appraiser’s business valuation at a minimum to help determine if the assumptions and conclusions are reasonable.


Originally published in Mercer Capital’s Tennessee Family Law Newsletter, Second Quarter 2019.

Community Bank Valuation (Part 2): Key Considerations in Analyzing the Financial Statements of a Bank

The June BankWatch featured the first part of a series describing key considerations in the valuation of banks and bank holding companies. While that installment provided a general overview of key concepts, this month we pivot to the analysis of bank financial statements and performance. Unlike many privately held, less regulated companies, banks produce reams of financial reports covering every minutia of their operations. For analytical personality types, it’s a dream.

The approach taken to analyze a bank’s performance, though, must recognize depositories’ unique nature, relative to non-financial companies. Differences between banks and non-financial companies include:

  1. Close interactions between the balance sheet and income statement. Banking revenues are connected tightly to the balance sheet, unlike for nonfinancial companies. In fact, you often can estimate a bank’s net income or the growth therein solely by reviewing several years of balance sheets. Banks have an “inventory” of assets that earn interest, referred to as “earning assets,” which drive most of their revenues. Earning assets include loans, securities (usually highly-rated bonds like Treasuries or municipal securities), and short-term liquid assets. Changes in the volume of assets and the mix of these assets, such as the relative proportions of lower yielding securities and higher yielding loans, significantly influence revenues.
  2. The value of liabilities. For non-financial companies, acquisition motivations seldom revolve around obtaining the target entity’s liabilities. The effective management of working capital and debt certainly influences shareholder value for non-financial companies, but few attempt to stockpile low-cost liabilities absent other business objectives. Banks, though, periodically buy and sell branches and their related deposits. The prices (or “premiums”) paid in these transactions reveal that bank deposits, the predominant funding source for banks, have discrete value. That is, banks actually pay for the right to assume another bank’s liabilities.

Why do banks seek to acquire deposits? First, all earning assets must be funded; otherwise, the balance sheet would fail to balance. Ergo, more deposits allow for more earning assets. Second, retail deposits tend to cost less than other alternative sources of funds. Banks have access to wholesale funding sources, such as brokered deposits and Federal Home Loan Bank advances, but these generally have higher interest rates than retail deposits. Third, retail deposits are stable, due to the relationship existing between the bank and customer. This provides assurance to bank managers, investors, and regulators that a disruption to a wholesale funding source will not trigger a liquidity shortfall. Fourth, deposits provide a vehicle to generate noninterest income, such as service charges or interchange. The strength of a bank’s deposit portfolio, such as the proportion of noninterest-bearing deposits, therefore influences its overall profitability and franchise value.

  3. Capital Adequacy. In addition to board and shareholder preferences, nonfinancial companies often have debt covenants that constrain leverage. Banks, though, have an entire multi-pronged regulatory structure governing their allowable leverage. Shareholders’ equity and regulatory capital are not the same; however, the computation of regulatory capital begins with shareholders’ equity. Two types of capital metrics exist – leverage metrics and risk-based metrics. The leverage metric simply divides a measure of regulatory capital by the bank’s total assets, while risk-based metrics adjust the bank’s assets for their relative risk. For example, some government agency securities have a risk weight equal to 20% of their balance, while many loans receive a risk weight equal to 100% of their balance.

Capital adequacy requirements have several influences on banks. Most importantly, failing to meet minimum capital ratios leads to severe repercussions, such as limitations on dividends and stricter regulatory oversight, and is (as you may imagine) deleterious to shareholder value. More subtly, capital requirements influence asset pricing decisions and balance sheet structure. That is, if two assets have the same interest rate but different risk weights, the value maximizing bank would seek to hold the asset with the lower risk weight. Stated differently, if a bank targets a specific return on equity, then the bank can accept a lower interest rate on an asset with a smaller risk weight and still achieve its overall return on equity objectives.
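A simplified sketch, using hypothetical balances and only the two risk weights mentioned above, illustrates how the leverage and risk-based measures can diverge for the same capital base; actual regulatory calculations involve many more categories and adjustments.

```python
# Hypothetical, simplified illustration of a leverage ratio vs. a risk-based capital ratio.
# Only two risk-weight buckets are used (20% for agency securities, 100% for loans and
# other assets); real calculations involve many more categories and capital deductions.
tier1_capital = 50.0          # $ millions, assumed
agency_securities = 100.0     # 20% risk weight
loans = 350.0                 # 100% risk weight
other_assets = 50.0           # treated as 100% risk weight for simplicity
total_assets = agency_securities + loans + other_assets

risk_weighted_assets = 0.20 * agency_securities + 1.00 * (loans + other_assets)

leverage_ratio = tier1_capital / total_assets
tier1_risk_based_ratio = tier1_capital / risk_weighted_assets

print(f"Leverage ratio:          {leverage_ratio:.1%}")          # 10.0%
print(f"Tier 1 risk-based ratio: {tier1_risk_based_ratio:.1%}")  # ~11.9%
```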

  4. Regulatory structure. In exchange for receiving a bank charter and deposit insurance, all facets of a bank’s operations are tightly regulated to protect the integrity of the banking system and, ultimately, the FDIC’s Deposit Insurance Fund that covers depositors of failed banks. Banks are rated under the CAMELS system, which contains categories for Capital, Asset Quality, Management, Earnings, Liquidity, and Sensitivity to Market Risk. Separately, banks receive ratings on information technology and trust activities. While a bank’s CAMELS score is confidential, these six categories provide a useful analytical framework for both regulators and investors.

Understanding the Balance Sheet

We now cover several components of a bank’s balance sheet.

Short-Term Liquid Assets and Securities

Banks are, by their nature, engaged in liquidity transformation, whereby funds that can be withdrawn on demand (deposits) are converted into illiquid assets (loans). Several alternatives exist to mitigate the risk associated with this liquidity transformation, but one universal approach is maintaining a portfolio of on-balance sheet liquid assets. Additionally, banks maintain securities as a source of earning assets, particularly when loan demand is relatively limited.

Liquid assets generally consist of highly-rated securities issued by the U.S. Treasury, various governmental agencies, and state and local governments, as well as various types of mortgage-backed securities. Relative to loans, banks trade off some yield for the liquidity and credit quality of securities. Key analytical considerations include:

  • Portfolio Size. While there certainly are exceptions, most high performing banks seek to limit the size of the securities portfolio; that is, they emphasize the liquidity features of the securities portfolio, while generating earnings primarily from the loan portfolio.
  • Portfolio Composition. The portfolio mix affects yield and risk. For example, mortgage-backed securities may provide higher yields than Treasuries, but more uncertainty exists as to the timing of cash flows. Also, the credit risk associated with any non-governmental securities, such as corporate bonds, should be identified.
  • Portfolio “Duration.” Duration measures the impact of different interest rate environments on the value of securities; it may also be viewed as a measure of the life of the securities. One way to enhance yield often is to purchase securities with longer durations; however, this increases exposure to adverse price movements if interest rates increase.

Loans

A typical bank generates most of its revenue from interest income generated by the loan portfolio; further, the lending function presents significant risk in the event borrowers fail to perform under the contractual loan terms. While loans are more lucrative than securities from a yield standpoint, the cost of originating and servicing a loan portfolio – such as lender compensation – can be significant. Key analytical considerations include:

  • Portfolio Composition. Bank financial statements include several loan portfolio categories, based on the collateral or purpose of each loan. Investors should consider changes in the portfolio over time and compare the portfolio mix to peer averages. Significant growth in a portfolio segment raises risk management questions, and regulatory guidance provides thresholds for certain types of real estate lending. Departures from peer averages may provide a sense of the subject bank’s credit risk, as well as the portfolio’s yield. Analysts may also wish to evaluate whether any concentrations exist, such as to certain industry niches or customer segments.
  • Portfolio Duration. Banks compete with other banks (and non-banks in some cases) on interest rate, loan structure, and underwriting requirements. Most banks will say they do not compete on underwriting requirements, such as offering higher loan/value ratios, which leaves rate and structure. To attract borrowers, banks may offer more favorable loan structures, such as longer-term fixed rate loans. Viewed in isolation, this exposes banks to greater interest rate risk; however, this loan structure may be entirely justified in light of the interest rate risk of the entire balance sheet.

Allowance for Loan & Lease Losses (“ALLL”)

Banks maintain reserves against loans that have defaulted or may default in the future. While a new regime for determining the ALLL will be implemented beginning for some banks in 2020, the size of the ALLL under current and future accounting standards generally varies between banks based on (a) portfolio size, (b) portfolio composition, as certain loan types inherently possess greater risk of credit loss, (c) the level of problem or impaired loans, and (d) management’s judgment as to an appropriate ALLL level. Calculating the ALLL necessarily includes some qualitative inputs, such as regarding the outlook for the economy and business conditions, and reasonable bankers can disagree about an appropriate ALLL level. Key analytical considerations regarding the ALLL and overall asset quality include:

  • ALLL Metrics. The ALLL – as a percentage of total loans, nonperforming loans, or loan charge-offs – can be benchmarked against the bank’s historical levels and peer averages. One shortcoming of the traditional ALLL methodology, which may or may not be remediated by the new ALLL methodology, is that reserves tend to be procyclical, meaning that reserves tend to decline leading into a recession (thereby enhancing earnings) but must be augmented during periods of economic stress when banks have less financial capacity to bolster reserves.
  • Charge-Off Metrics. The ALLL decreases by charge-offs on defaulted loans, while recoveries on previously defaulted loans serve to increase the ALLL. One of the most important financial ratios compares loan charge-offs, net of recoveries, to total loans. Deviations from the bank’s historical performance should be investigated. For example, are the losses concentrated in one type of lending or widespread across the portfolio? Is the change due to general economic conditions or idiosyncratic factors unique to the bank’s portfolio? Is a new lending product performing as expected? Charge-off ratios also provide insight into the amount of credit risk accepted by a bank, relative to its peer group (a brief numerical sketch follows this list). However, credit losses should not be viewed in isolation – yields matter as well. It is safe to assume, though, that higher than peer charge-offs, coupled with lower than peer loan yields, is a poor combination. While banks strive to avoid credit losses, a lengthy period marked by virtually nil credit losses could suggest that the bank’s underwriting is too restrictive, sacrificing earnings for pristine credit quality.
  • Loan Loss Provision. The loan loss provision increases the ALLL. A provision generally is necessary to offset periodic loan charge-offs, cover loan portfolio growth, and address risk migration as loans enter and exit impaired or nonperforming status.
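The sketch below, using entirely hypothetical figures, illustrates the basic ALLL and charge-off metrics described in this list.

```python
# Hypothetical illustration of the ALLL and charge-off metrics described above.
total_loans = 400_000_000
alll_balance = 4_800_000
gross_charge_offs = 1_500_000
recoveries = 300_000
nonperforming_loans = 3_200_000

net_charge_offs = gross_charge_offs - recoveries
net_charge_off_ratio = net_charge_offs / total_loans        # benchmark vs. history and peers
alll_to_loans = alll_balance / total_loans
alll_to_nonperforming = alll_balance / nonperforming_loans  # "coverage" of problem loans

print(f"Net charge-offs / loans: {net_charge_off_ratio:.2%}")   # 0.30%
print(f"ALLL / loans:            {alll_to_loans:.2%}")          # 1.20%
print(f"ALLL / nonperforming:    {alll_to_nonperforming:.0%}")  # 150%
```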

Deposits

As with loans, bank financial statements distinguish among several deposit types, such as demand deposits and CDs. It is useful to decompose deposits further into retail (local customers) and wholesale (institutional) deposits. Key analytical considerations include:

  • Portfolio Size. Deposit market share tends to shift relatively slowly; therefore, quickly raising substantial retail deposits is a difficult proposition. Banks with more rapid loan growth face this challenge acutely. Often these banks rely more significantly on rate sensitive deposits, such as CDs, or more costly wholesale funds. Therefore, analysts should consider the interaction between loan growth objectives and the availability and pricing of incremental deposits.
  • Composition. Investors generally prefer a high ratio of demand deposits, because these accounts usually possess the lowest interest rates, the lowest attrition rates and interest rate sensitivity, and the highest noninterest income. Of course, these accounts also are the most expensive to gather and service, requiring significant investments in branch facilities and personnel. With that said, other successful models exist. Some banks minimize operating costs, but offer higher interest rates to depositors.
  • Rate. Banks generally obtain rate surveys of their local market area, which provide insight into competitive conditions and the bank’s relative position. Also, it is useful to benchmark the bank’s cost of deposits against its peer group. Deposit portfolio composition plays a part in disparities between the subject bank and the peer group, as do regional differences in deposit competition.

Shareholders’ Equity and Regulatory Capital

Historical changes in equity cannot be understood without an equity roll-forward showing changes due to retained earnings, share sales and redemptions, dividends, and other factors. In our opinion, it is crucial to analyze the bank’s current equity position by reference to management’s business plan, as this will reveal amounts available for use proactively to generate shareholder returns (such as dividends, share repurchases, or acquisitions). Alternatively, the analysis may reveal the necessity of either augmenting equity through a stock offering or curtailing growth objectives.

The computation of regulatory capital metrics can be obtained from a bank’s regulatory filings. Relative to shareholders’ equity, regulatory capital calculations: (a) exclude most intangible assets and certain deferred tax assets, and (b) include certain types of preferred stock and debt, as well as the ALLL, up to certain limits.
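
As a rough illustration of those adjustments, the sketch below starts from shareholders’ equity and applies simplified deductions and add-backs. The figures are hypothetical, and the simplifications (ignoring risk weighting, tiering rules, and the inclusion limits noted above) mean the result is only a directional proxy, not an actual regulatory capital calculation.

  # Simplified illustration: moving from GAAP equity toward a regulatory capital figure.
  # Risk weighting, tiering rules, and inclusion limits are ignored; inputs are hypothetical.
  shareholders_equity = 120_000_000
  goodwill_and_intangibles = 15_000_000        # generally excluded from regulatory capital
  disallowed_deferred_tax_assets = 2_000_000   # certain DTAs are also excluded
  includable_alll = 9_600_000                  # ALLL counts toward total capital, up to a limit

  tier1_proxy = shareholders_equity - goodwill_and_intangibles - disallowed_deferred_tax_assets
  total_capital_proxy = tier1_proxy + includable_alll

  print(f"Tier 1 capital (proxy): {tier1_proxy:,.0f}")          # 103,000,000
  print(f"Total capital (proxy):  {total_capital_proxy:,.0f}")  # 112,600,000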

Understanding the Income Statement

There are six primary components of the bank’s income statement:

  1. Net interest income, or the difference between the income generated by earning assets and the cost of funding.
  2. Noninterest income, which includes revenue from other services provided by the bank such as debit cards, trust accounts, or loans intended for sale in the secondary market. The sum of net interest income and noninterest income represents the bank’s total revenues.
  3. Noninterest expenses, which principally include employee compensation, occupancy costs, data processing fees, and the like. Income after noninterest expenses commonly is referred to by investors, but not by accountants, as “pre-tax, pre-provision operating income” (or “PPOI”).
  4. Loan loss provision
  5. Security gains and losses
  6. Taxes

Net Interest Income

The previous analysis of the balance sheet foreshadowed this net interest income discussion with one important omission – the external interest rate environment. While banks attempt to mitigate the effect on performance of uncontrollable factors like market interest rates, some influence is unavoidable. For example, steeper yield curves generally are more accommodative to net interest income, while banks struggle with flat or inverted yield curves.

Another critical financial metric is the net interest margin (“NIM”), measured as the yield on all earning assets minus the cost of funding those assets (or net interest income divided by earning assets). The NIM and net interest income are influenced by the following (a brief worked example follows the list):

  • The earning asset mix (higher yielding loans, versus lower yielding securities)
  • Asset duration (longer duration earning assets usually receive higher yields)
  • Credit risk (accepting more credit risk should enhance asset yields and NIM)
  • Liability composition (retail versus wholesale deposits, or demand deposits versus CDs)
  • Liability duration (longer duration liabilities usually have higher interest rates)
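
Pulling these pieces together, the sketch below computes net interest income and the NIM from hypothetical average balances, yields, and funding rates; every figure is an assumption chosen only for illustration.

  # Minimal NIM sketch (all balances, yields, and rates are hypothetical).
  avg_loans = 800_000_000
  avg_securities = 200_000_000
  loan_yield = 0.0525
  securities_yield = 0.0260

  avg_interest_bearing_deposits = 750_000_000
  avg_borrowings = 50_000_000
  deposit_rate = 0.0110
  borrowing_rate = 0.0250

  interest_income = avg_loans * loan_yield + avg_securities * securities_yield
  interest_expense = (avg_interest_bearing_deposits * deposit_rate
                      + avg_borrowings * borrowing_rate)

  net_interest_income = interest_income - interest_expense
  earning_assets = avg_loans + avg_securities
  nim = net_interest_income / earning_assets   # net interest income / average earning assets

  print(f"Net interest income: {net_interest_income:,.0f}")   # 37,700,000
  print(f"NIM: {nim:.2%}")                                     # 3.77%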

Noninterest Income

The sensitivity of net interest income to uncontrollable forces – i.e., market interest rates – makes noninterest income attractive to bankers and investors. Banks generate noninterest income from a panoply of sources, including:

  • Fees on deposit accounts, such as service charges, overdraft income, and debit card interchange
  • Gains on the sale of loans, such as residential mortgage loans or government guaranteed small business loans
  • Trust and wealth management income
  • Insurance commissions on policies sold
  • Bank owned life insurance where the bank holds policies on employees

Some sources of revenue can be even more sensitive to the interest rate environment than net interest income, such as income from residential mortgage originations. Yet other sources have their own linkages to uncontrollable market factors, such as revenues from wealth management activities tied to the market value of account assets.

Expanding noninterest income is a holy grail in the banking industry, given limited capital requirements, revenue diversification benefits, and its ability to mitigate interest rate risk while avoiding credit risk. However, many banks’ fee income dreams have foundered on the rocks of reality for several reasons. First, achieving scale is difficult. Second, cross-sales of fee income products to banking customers are challenging. Third, significant cultural differences exist between, say, wealth management and banking operations. A thorough financial analysis considers the opportunities, challenges, and risks presented by noninterest income.

Noninterest Expenses

In a mature business like banking, expense control always remains a priority.

  • Personnel expenses. Personnel expenses typically account for 50% to 60% of total noninterest expenses. Significant changes in personnel expenses generally are tied to expansion initiatives, such as adding branches or hiring a lending team from a competitor. Regulatory filings include each bank’s full-time equivalent employee count, permitting productivity comparisons between banks.
  • Occupancy expenses. With the shift to digital delivery of banking services, occupancy expenses have remained relatively stable for many community banks, while larger banks have closed branches. Nevertheless, banks often conclude that entering a new market requires a beachhead in the form of a physical branch location.
  • Other expenses. Regulatory filings lump remaining expenses into an “other” category, although audited financial statements usually provide greater detail. More significant contributors to the “other” category include data processing and information technology spending, marketing costs, and regulatory assessments.

Loan Loss Provision

We covered this income statement component previously with respect to the ALLL.

Income Taxes

Banks generally report effective tax rates (or actual income tax expense divided by pre-tax income) below their marginal tax rates. This primarily reflects banks’ tax-exempt investments, such as municipal bonds; bank-owned life insurance income; and vehicles that provide for tax credits, like New Market Tax Credits. It is important to note that state tax regimes may differ for banks and non-banks. For example, some states assess taxes on deposits or equity, rather than income, and such taxes are not reported as income tax expense.
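
A brief numerical illustration of that effect, using purely hypothetical figures: tax-exempt income lowers the reported effective rate below the marginal rate.

  # Hypothetical illustration: tax-exempt income pushes the effective tax rate below the marginal rate.
  pretax_income = 10_000_000
  tax_exempt_income = 1_500_000     # e.g., municipal bond interest and BOLI income
  marginal_tax_rate = 0.21

  taxable_income = pretax_income - tax_exempt_income
  income_tax_expense = taxable_income * marginal_tax_rate
  effective_tax_rate = income_tax_expense / pretax_income

  print(f"Income tax expense: {income_tax_expense:,.0f}")   # 1,785,000
  print(f"Effective tax rate: {effective_tax_rate:.1%}")    # about 17.9%, below the 21% marginal rate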

Return Decomposition

As the preceding discussion suggests, many levers exist to achieve shareholder returns. One bank can operate with lean expenses, but pay higher deposit interest rates (diminishing its NIM) and deemphasize noninterest income. Another bank may pursue a true retail banking model with low cost deposits and higher fee income, offset by the attendant operating costs. There is not necessarily a single correct strategy. Different market niches have divergent needs, and management teams have varying areas of expertise. However, we still can compare the returns on equity (or net income divided by shareholders’ equity) generated by different banks to assess their relative performance.

The figure below presents one way to decompose a bank’s return on equity relative to its peer group. This bank generates a higher return on equity than its peer group due to (a) a higher net interest margin, (b) a slightly lower loan loss provision, and (c) higher leverage (shown as the “equity multiplier” in the table).
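
Although the underlying figure is not reproduced here, the basic decomposition logic can be sketched as follows: return on assets multiplied by the equity multiplier equals return on equity. All inputs below are hypothetical.

  # Hypothetical DuPont-style decomposition: ROE = ROA x equity multiplier.
  net_income = 12_000_000
  avg_assets = 1_000_000_000
  avg_equity = 100_000_000

  roa = net_income / avg_assets                 # return on assets
  equity_multiplier = avg_assets / avg_equity   # leverage: assets supported per dollar of equity
  roe = roa * equity_multiplier                 # equivalent to net_income / avg_equity

  print(f"ROA:               {roa:.2%}")                  # 1.20%
  print(f"Equity multiplier: {equity_multiplier:.1f}x")   # 10.0x
  print(f"ROE:               {roe:.1%}")                  # 12.0%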

Income Statement Metrics

The figure below cites several common income statement metrics used by investors, as well as their strengths and shortcomings.

Sources of Information

Banks file quarterly Call Reports, which are the launching pad for our templated financial analyses. Depending on asset size, bank holding companies file consolidated financial statements with the Federal Reserve. All bank holding companies, small and large, file parent company only financial statements, although the frequency differs. Other potentially relevant sources of information include:

  1. Audited financial statements and internal financial data
  2. Board packets, which often are sufficiently extensive to cover our information requirements
  3. Budgets, projections, and capital plans
  4. Asset quality reports, such as criticized loan listings, delinquency reports, concentration analyses, documentation regarding ALLL adequacy, and special asset reports for problem loans
  5. Interest rate risk scenario analyses and inventories of the securities portfolio
  6. Federal Reserve Form FR Y-6, which provides the composition of the holding company’s board of directors and the ownership of significant shareholders

Conclusion

A rigorous examination of the bank’s financial performance, both relative to its history and a relevant peer group and with due consideration of appropriate risk factors, provides a solid foundation for a valuation analysis. As we observed in June’s BankWatch, value is dependent upon a given bank’s growth opportunities and risk factors, both of which can be revealed using the techniques described in this article.

Given the variety of business models employed by banks, this article is inherently general. Some factors described herein will be more or less relevant (or even irrelevant) to a specific bank, and it is quite possible that, for the sake of brevity, we omitted other factors that matter for a particular institution. Readers should therefore conduct their own analysis of a specific bank, taking into account its particular characteristics.


Originally published in Bank Watch, August 2019.

Key Valuation Considerations for FinTech Purchase Price Allocations

FinTech M&A continues to be top of mind for the sector as larger players seek to grow and expand while founders and early investors look to monetize their investments.  This theme was evident in several larger deals already announced in 2019 including Global Payments/Total System Services (TSYS), Fidelity National Information Services, Inc./Worldpay, Inc., and Fiserv, Inc./First Data Corporation.

One important aspect of FinTech M&A is the purchase price allocation and the valuation estimates for goodwill and intangible assets, as many FinTech companies have minimal physical assets and a high proportion of the purchase price is accounted for via goodwill and intangible assets.  The majority of value creation for the acquirer and its shareholders will come from the investment in and future utilization of the intangibles of the FinTech target.  To illustrate this point, consider that the median amount of goodwill and intangible assets was ~98% of the transaction price for FinTech transactions announced in 2018.  Since such a large proportion of the transaction price paid for FinTech companies typically gets carried in the form of goodwill or intangibles on the acquirer’s balance sheet, the acquirer’s future earnings, tax expenses, and capitalization will often be impacted significantly by the resulting amortization expense (a simplified illustration follows).
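
To give a rough sense of that earnings drag, the sketch below amortizes a hypothetical identified-intangible balance straight-line over an assumed useful life.  The purchase price, intangible balance, useful life, and straight-line convention are all assumptions for illustration, not figures drawn from any actual transaction.

  # Hypothetical illustration of the pre-tax earnings drag from intangible amortization.
  purchase_price = 500_000_000
  identified_intangibles = 200_000_000   # e.g., customer relationships, technology, tradename
  useful_life_years = 10                 # assumed straight-line useful life

  annual_amortization = identified_intangibles / useful_life_years

  print(f"Annual amortization expense: {annual_amortization:,.0f}")                  # 20,000,000
  print(f"As a % of purchase price:    {annual_amortization / purchase_price:.1%}")  # 4.0%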

When preparing valuation estimates for a purchase price allocation for a FinTech company, one key step for acquirers is identifying the intangible assets that will need to be valued.  In our experience, the identifiable intangible assets for FinTech acquisitions often include the tradename, technology (both developed and in-development), noncompete agreements, and customer relationships.  Additionally, there may be a need to consider the value of an earn-out arrangement if a portion of transaction consideration is contingent on future performance as this may need to be recorded as a contingent liability.

Since the customer relationship intangible is often one of the more significant intangible assets to be recorded in FinTech acquisitions (both in $ amounts and as a % of the purchase price), we discuss how to value FinTech customer relationships in greater detail in the remainder of the article.

Valuing Customer-Related Assets

Firms devote significant human and financial resources to developing, maintaining, and upgrading customer relationships. In some instances, customer contracts give rise to identifiable intangible assets. More broadly, however, customer-related intangible assets consist of the information gleaned from repeat transactions, with or without underlying contracts. Firms can and do lease, sell, buy, or otherwise trade such information, which is generally organized as customer lists.

Since FinTech encompasses relatively varied niches, including payments, digital lending, WealthTech, and InsurTech, the valuation of FinTech customer relationships can vary depending on the type of company and the niche in which it operates.  While we do not delve into the key attributes to consider for each FinTech niche, we provide one illustration from the Payments niche.

In the Payments industry, one key aspect to understand when evaluating customer relationships is where the company sits in the payment loop and whether it operates in a B2B (business-to-business) or B2C (business-to-consumer) model.  This determines who the customer is and the economics of the cash flows generated by the customer relationships.  For example, merchant acquirers typically have contracts with the merchants themselves, so the valuable customer relationship lies with the merchant and the dollar volume of transactions it processes over time.  For other payments companies, such as prepaid or gift card issuers, the valuable relationship may lie with the end user or consumer and their spending and card usage habits over time.

Valuation Approaches

Valuation involves three approaches: 1) the cost approach, 2) the market approach, and 3) the income approach. Customer relationships are typically valued based upon an income approach (i.e., a discounted cash flow method) where the cash flows that the customer relationships are expected to generate in the future are forecast and then discounted to the present at a market rate of return.

Cost Approach

Valuation under the cost approach requires estimation of the cost to replace the subject asset, as well as opportunity costs in the form of cash flows foregone as the replacement is sought or recreated. The cost approach may not be feasible when replacement or recreation periods are long. Therefore, the cost approach is used infrequently in valuing customer-related assets.

Market Approach

Use of the market approach in valuing customer-related assets is generally untenable for FinTech companies because transactional data on sufficiently comparable assets are not likely to be available.

Income Approach

Customer-related assets are most commonly valued using the income approach. One method within the income approach that is often used to value FinTech customer relationships is the Multi-Period Excess Earnings Method (MPEEM).  The MPEEM involves estimating the cash flow stream attributable to a particular asset; that stream is then discounted to the present to obtain an indication of fair value. The most common starting point in estimating future cash flows is the prospective financial information prepared by (or in close consultation with) the management of the subject business.  The key valuation inputs are often estimates of the economic benefit of the customer relationships (i.e., the cash flow stream attributable to them), the customer attrition rate, and the discount rate.  Three key attributes are important when applying these inputs to value customer relationships (a simplified calculation sketch follows the list below):

  1. Repeat Patronage. The expectation of repeat patronage creates value for customer-related intangible assets. Contractual customer relationships formally codify the expectation of future transactions. Even in the absence of contracts, firms look to build on past interactions with customers to sell products and services in the future.
    Two aspects of repeat patronage are important in evaluating customer relationships. First, not all customer contact leads to an expectation of repeat patronage. The quality of interaction with walk-up retail customers, for instance, is generally considered inadequate to reliably lead to expectations of recurring business. Second, even in the presence of adequate information, not all expected repeat business may be attributable to customer-related intangible assets. Some firms operate in monopolistic or near-monopolistic industries where repeat patronage is directly attributable to a dearth of acceptable alternatives available to customers. In other cases, it may be more appropriate to attribute recurring business to the strength of the trade names, software platform, or brands.
  2. Attrition. Customer-related intangible assets create value over a finite period. Without efforts geared towards continual reinforcement, customer lists dwindle over time due to customer mortality, the ravages of competition, or the emergence of alternate products and services. The mechanics of present value mathematics further erode the economic benefits of sales to current customers in the distant future. Customer relationships are wasting assets whose economic value attrites with the passage of time.
  3. Other Assets.  Customer-related intangible assets depend on the existence of other assets to provide value to the firm. Most assets, including fixed assets and intellectual property, are essential in creating products or providing services. The act of selling these products and services enables firms to develop relationships and collect information from customers. In turn, the value of these relationships depends on the firm’s ability to sell additional products and services in the future. Consequently, for firms to extract value from customer-related assets, a number of other assets need to be in place.
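
To make the MPEEM mechanics concrete, the sketch below forecasts the cash flows attributable to an existing customer base that attrites each year, deducts a simplified contributory asset charge, and discounts the result to present value.  Every input (revenue, margin, attrition rate, contributory charge, discount rate, and horizon) is an assumption for illustration, not a prescribed model.

  # Simplified MPEEM sketch for an existing customer base (all inputs are hypothetical).
  base_revenue = 50_000_000        # year-1 revenue from existing customers
  attrition_rate = 0.15            # annual revenue attrition from the existing base
  ebit_margin = 0.30               # operating margin attributed to this revenue stream
  tax_rate = 0.25
  contributory_charge = 0.05       # simplified charge for the use of other assets, as a % of revenue
  discount_rate = 0.14
  horizon_years = 15               # forecast until the surviving base is immaterial

  present_value = 0.0
  for year in range(1, horizon_years + 1):
      surviving_revenue = base_revenue * (1 - attrition_rate) ** (year - 1)
      after_tax_earnings = surviving_revenue * ebit_margin * (1 - tax_rate)
      excess_earnings = after_tax_earnings - surviving_revenue * contributory_charge
      present_value += excess_earnings / (1 + discount_rate) ** year

  print(f"Indicated value of customer relationships: {present_value:,.0f}")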

Conclusion

Mercer Capital has experience providing valuation and advisory services to FinTech companies and their acquirers.  We have valued customer-related and other intangible assets to the satisfaction of clients and their auditors within the FinTech industry across a multitude of niches (payments, wealth management, insurance, lending, and software).  Most recently, we completed a purchase price allocation for a private equity firm that acquired a FinTech company in the Payments niche.  Please contact us to explore how we can help you.


Originally published in the Value Focus: FinTech Industry Newsletter, Mid Year 2019.

Business Valuations and Quality of Earnings in M&A Transactions

Karolina Calhoun, CPA/ABV/CFF presented “Business Valuations and Quality of Earnings in M&A Transactions” at the Association for Corporate Growth (ACG) Tennessee Chapter’s monthly meeting on August 22, 2019. In this presentation, Karolina provides an overview of valuation and quality of earnings, as well as when to use each, how the two differ, and how they can overlap.

Shareholder Value Drivers

In this session, originally presented at the Consumer Bankers Association Executive Banking School at Furman University in Greenville, South Carolina, Jeff K. Davis, CFA addresses the following objectives:

  • Valuation Framework
  • Concept of Earning Power
  • Reconciling P/TBV and P/E
  • Intrinsic Value vs Franchise Value
  • How Institutional Investors View Value
  • Great Stock vs Great Company
  • Overview of the Market for Bank Stocks and Bank M&A
