Section 409A is a provision of the Internal Revenue Code that applies to all companies offering nonqualified deferred compensation plans to employees. Generally speaking, a deferred compensation plan is an arrangement whereby an employee (“service provider” in 409A parlance) receives compensation in a later tax year than that in which the compensation was earned. “Nonqualified” plans exclude 401(k) and other “qualified” plans.
What is interesting from a valuation perspective is that stock options and stock appreciation rights (SARs), two common forms of incentive compensation for private companies, are potentially within the scope of Section 409A. The IRS is concerned that stock options and SARs issued “in the money” are really just a form of deferred compensation, representing a shifting of current compensation to a future taxable year. So, in order to avoid being subject to 409A, employers (“service recipients”) need to demonstrate that all stock options and SARs are issued “at the money” (i.e., with the strike price equal to the fair market value of the underlying shares at the grant date). Stock options and SARs issued “out of the money” do not raise any particular problems with regard to Section 409A.
Stock options and SARs that fall under Section 409A create problems for both service recipients and service providers. Service recipients are responsible for normal withholding and reporting obligations with respect to amounts includible in the service provider’s gross income under Section 409A. Amounts includible in the service provider’s gross income are also subject to interest on prior underpayments and an additional income tax equal to 20% of the compensation required to be included in gross income. For the holder of a stock option, this can be particularly onerous as, absent exercise of the option and sale of the underlying stock, there has been no cash received with which to pay the taxes and interest.
These consequences make it critical that stock options and SARs qualify for the exemption under 409A available when the fair market value of the underlying stock does not exceed the strike price of the stock option or SAR at the grant date.
For public companies, it is easy to determine the fair market value of the underlying stock on the grant date. For private companies, fair market value cannot be simply looked up on Bloomberg. Accordingly, for such companies, the IRS regulations provide that “fair market value may be determined through the reasonable application of a reasonable valuation method.” In an attempt to clarify this clarification, the regulations proceed to state that if a method is applied reasonably and consistently, such valuations will be presumed to represent fair market value, unless shown to be grossly unreasonable. Consistency in application is assessed by reference to the valuation methods used to determine fair market value for other forms of equity-based compensation. An independent appraisal will be presumed reasonable if “the appraisal satisfies the requirements of the Code with respect to the valuation of stock held in an employee stock ownership plan.”
A reasonable valuation method considers the following factors:
The value of tangible and intangible assets
The present value of future cash flows
The market value of comparable businesses (both public and private)
Other relevant factors such as control premiums or discounts for lack of marketability
Whether the valuation method is used consistently for other corporate purposes
In other words, a reasonable valuation considers the cost, income, and market approaches, and considers the specific control and liquidity characteristics of the subject interest. For start-up companies, the valuation would also consider the company’s most recent financing round and the rights and preferences of any securities issued. The IRS is also concerned that the valuation of common stock for purposes of Section 409A be consistent with valuations performed for other purposes.
Fair market value is not specifically defined in Section 409A of the Code or the associated regulations. Accordingly, we look to IRS Revenue Ruling 59-60, which defines fair market value as “the price at which the property would change hands between a willing buyer and a willing seller when the former is not under any compulsion to buy and the latter is not under any compulsion to sell, both parties having reasonable knowledge of relevant facts.”
Among the general valuation factors to be considered under a reasonable valuation method are “control premiums or discounts for lack of marketability.” In other words, if the underlying stock is illiquid, the stock should presumably be valued on a non-marketable minority interest basis.
This is not without potential confusion, however. In an Employee Stock Ownership Plan (ESOP), stock issued to participants is generally covered by a put right with respect to either the Company or the ESOP. Accordingly, valuation specialists often apply marketability discounts on the order of 0% to 10% to ESOP shares. Shares issued pursuant to a stock option plan may not have similar put rights attached, and therefore may warrant a larger marketability discount. In such cases, a company that has an annual ESOP appraisal may not have an appropriate indication of fair market value for purposes of Section 409A.
In addition to independent appraisals, formula prices may, under certain circumstances, be presumed to represent fair market value. Specifically, the formula cannot be unique to the subject stock option or SAR, but must be used for all transactions in which the issuing company buys or sells stock.
For purposes of Section 409A compliance, start-ups are defined as companies that have been in business for less than ten years, do not have publicly traded equity securities, and for which no change of control event or public offering is reasonably anticipated to occur in the next twelve months. For start-up companies, a valuation will be presumed reasonable if “made reasonably and in good faith and evidenced by a written report that takes into account the relevant factors prescribed for valuations generally under these regulations.” Further, such a valuation must be performed by someone with “significant knowledge and experience or training in performing similar valuations.”
This presumption, while presented as a separate alternative, strikes us as substantively and practically similar to the independent appraisal presumption described previously. Some commentators have suggested that the valuation of a start-up company may be performed by an employee or board member of the issuing company. We suspect that it is the rare employee or board member who is actually qualified to render the described valuation.
The bottom line is that Section 409A applies to both start-ups and mature companies.
The safe harbor presumptions of Section 409A apply only when the valuation is based upon an independent appraisal, and it is likely that a valuation prepared by an employee or board member would raise questions of independence and objectivity.
The regulations also clarify that the experience of the individual performing the valuation generally means at least five years of relevant experience in business valuation or appraisal, financial accounting, investment banking, private equity, secured lending, or other comparable experience in the line of business or industry in which the service recipient operates.
In our reading of the rules, this means that the appraisal should be prepared by an individual or firm that has a thorough educational background in finance and valuation, has accrued significant professional experience preparing independent appraisals, and has received formal recognition of his or her expertise in the form of one or more professional credentials (ASA, ABV, CBA, or CFA). The valuation professionals at Mercer Capital have the depth of knowledge and breadth of experience necessary to help you navigate the potentially perilous path of Section 409A.
Originally published in the Financial Reporting Update: Equity Compensation, June 2019.
Clients frequently want to know, “How long is an equity compensation valuation good for?” We get it. You want to provide employees, contractors, and other service providers who are compensated through company stock with current information about their interests, but the time and cost required to get a valuation must also be considered.
Due to the natural business changes every company goes through, accounting and legal professionals often recommend updates at least annually if no significant change or financing has occurred. However, unique company or market characteristics often necessitate more frequent updates. Here are some of the factors to consider when determining the need for a valuation update:
Even for companies that have fairly steady operations, the effects of small business changes accumulate over time. Companies that deal with major changes relatively infrequently may be suited to regular summary updates to supplement full comprehensive reports as a way to maximize the cost-benefit analysis of equity compensation valuation.
Executives expend a great deal of effort to determine the optimal way to finance the operations of their businesses. This may involve bringing on outside investors, employing bank debt, or financing through cash flow. Once the money has hit the bank, they may wonder, what effect does the capitalization of my company have on the value of its equity?
A company with a simple capital structure typically has been financed through the issuance of one class of stock (usually common stock). Companies with complex capital structures, on the other hand, may include other instruments: multiple classes of stock, forms of convertible debt, options, and warrants. Complex structures are frequent in startup or venture-backed companies that raise financing through multiple channels, such as successive fundraising rounds and private equity sources.
With various types of stock on the cap table, it is important to note that all stock classes are not the same. Each class holds certain rights, preferences, and priorities of return that can confer on the shares a portion of enterprise value beyond their pro rata allocation. These rights generally fall into two categories: economic rights and control rights. Economic rights bestow financial benefits, while control rights grant benefits related to operations and decision making.
The value of a certain class of stock is affected both by the rights and preferences it holds as well as those held by the other share classes on the cap table. The presence of multiple preferred classes also brings up the issue of seniority as certain class privileges may be overruled by those of a more senior share class.
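The seniority mechanics described above can be sketched as a simple liquidation waterfall: senior preferred classes are paid their preferences first, and common receives the residual. The class names and dollar amounts below are purely hypothetical, and real waterfalls add wrinkles (participation, caps, conversion) that this sketch omits.

```python
# Simplified liquidation waterfall: preferred classes are paid their
# liquidation preferences in order of seniority; common takes the residual.
# Class names and amounts are hypothetical, for illustration only.

def waterfall(proceeds, preferences):
    """preferences: list of (class_name, preference_amount) in seniority order."""
    payouts = {}
    remaining = proceeds
    for name, pref in preferences:
        paid = min(remaining, pref)  # a class gets at most its preference
        payouts[name] = paid
        remaining -= paid
    payouts["common"] = remaining    # residual flows to common
    return payouts

prefs = [("Series B", 5_000_000), ("Series A", 3_000_000)]
print(waterfall(12_000_000, prefs))
# {'Series B': 5000000, 'Series A': 3000000, 'common': 4000000}
```

Note how a more senior class (Series B here) can absorb all proceeds in a downside scenario, leaving junior preferred and common with nothing — which is precisely why each class must be valued with reference to the rights of every other class.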
Complex capital structures require complex valuation models that can integrate and prioritize the special treatments of individual share classes in multi-class cap tables. As such, models such as the PWERM or OPM are better suited to these circumstances.
When an exit event is not imminent, the appropriate models to measure the fair value of a company with a complex capital stack are the Probability Weighted Expected Return Method (PWERM), the Option Pricing Method (OPM), or some combination of the two. While the choice of the model(s) is often dictated by facts and circumstances – for example, the company’s stage of development, visibility into exit avenues, etc. – using either the PWERM or the OPM requires a number of key assumptions that may be difficult to source or support for pre-public, often pre-profitable, companies. In this context, primary or secondary transactions involving the company’s equity instruments, which may or may not be identical to common shares, can be useful in measuring fair value or evaluating overall reasonableness of valuation conclusions.
For companies granting equity-based compensation, transactions are likely to take the form of either issuances of preferred shares as part of fundraising rounds or secondary transactions of equity instruments (preferred or common shares, as part of a fundraising round or on a standalone basis). Fundraising rounds usually do not provide pricing indications for common shares (or options on common) directly. However, a backsolve exercise that calibrates the PWERM and/or the OPM to the price of the new-issue preferred shares can provide value indications for the entire enterprise and common shares. While standalone secondary transactions may involve common shares, facts and circumstances around those transactions may determine the usefulness of related pricing information for any calibration or reconciliation exercise. Calibration, when viable, provides not only comfort around the overall soundness of valuation models and assumptions, but also a platform on which future value measurements can be based.
This article presents a brief discussion on evaluating observed or prospective transactions. Not all transactions are created equal – a fair value analysis should consider the facts and circumstances around the transactions to assess whether (and the degree to which) they are useful and relevant, or not.1
ASC 718 Compensation-Stock Compensation defines fair value as “the amount at which an asset (or liability) could be bought (or incurred) or sold (or settled) in a current transaction between willing parties, that is, other than in a forced or liquidation sale.” ASC 820 Fair Value Measurement defines fair value as “the price that would be received to sell an asset or paid to transfer a liability in an orderly transaction between market participants at the measurement date.” While some of the finer nuances may differ slightly, both definitions make reference to the concepts of i) willing and informed buyers and sellers, and ii) orderly transactions.
Notably, ASC 820 includes the directive that "valuation techniques used to measure fair value shall maximize the use of relevant observable inputs … and minimize the use of unobservable inputs." We take this to mean that pricing information from transactions should be used in the measurement (valuation) process as long as it is relevant from a fair value perspective.
A fundraising round involving new investors, assuming the company is not in financial distress, tends to involve negotiations between sophisticated buyers (investors) and informed sellers (issuing companies). As such, these transactions are relevant in measuring the fair value of equity instruments, including those granted as compensation.
When a fundraising round does not involve new investors, the parties to the transaction are not necessarily independent of each other. However, such a round may still be relevant from a fair value perspective if pricing resulted from robust negotiations or was otherwise reflective of market pricing.
As they give rise to observable inputs, secondary transactions can be relevant in the measurement process if the pricing information is reflective of fair value. Pricing from transactions in an active market for an identical equity instrument would generally reflect fair value. In other cases, orderly transactions – those that have received adequate exposure to the appropriate market, allowed sufficient marketing activities, and were not forced or distressed – can give rise to transaction prices that are reconcilable with fair value. Orderly secondary transactions that are relatively larger and those that involve equity instruments similar to the subject interests are more relevant.
Some fundraising rounds involve strategic investors who may receive economic benefits beyond just the ownership interest in the company. The strategic benefits could be codified in explicit contracts like a licensing arrangement. Consideration paid for equity interests acquired in such transactions may exceed the price a market participant (with no strategic interests) would consider reasonable. However, even as the pricing indication from such a transaction may not be directly relevant, it can be a useful reference or benchmark in measuring fair value. For example, it may be possible to estimate the excess economic benefits accruing to the strategic investors. Any fair value indication obtained separately could then be compared and reconciled to the price from the strategic fundraising rounds.
In other instances, strategic rounds may result in the company and investors sharing equally in the excess economic benefits. The transaction price could then be reflective of fair value, and a backsolve analysis to calibrate to the transaction price would be viable.
A tranched preferred investment may segment the purchase of equity interests into multiple installments. Pricing for such a round is usually set before the transaction and is identical across the installments, but future cash infusions may be contingent on specified milestones. The value of a company usually increases upon achieving technical, regulatory, or financial milestones. Even when future installments are not contingent on specified milestones, value may increase over time as the company makes progress on its business plan. Pricing set before the first installment tends to reflect a premium to the value of the company at the initial transaction date as it likely includes some expectation of potential economic upside from future installments. On the other hand, the same price may reflect a discount from the value of the company at future installment dates as the investments are (only) made once the economic upside is realized. Accordingly, a reconciliation to pricing information from these fundraising rounds may require separate estimates of the expectation of future upside (for the initial transaction date) and future values implied by the initial terms of the transaction (for later installment dates).
Some fundraising rounds involve purchases of a mix of equity instruments across the capital stack (i.e. different vintages of preferred and/or common) for the same or similar stated price per share. Usually, common shares involved in mixed purchases represent secondary transactions. From a fair value perspective, the transaction could be relevant in the aggregate and provide a basis to discern prices for each class of equity involved (considering the differences in rights and preferences among the classes). In other instances, either the company or the investor may have entered into a transaction for additional strategic benefits beyond just the economics reflected in the share prices. Depending on whether the buyer or the seller expects the additional strategic benefits, reported pricing may exceed the fair value of common shares or understate the value of the preferred shares. In yet other instances, mixed purchases at the same or similar prices may indicate a high likelihood of an initial public offering (IPO) in the near future. Typically, preferred shares convert into common at IPO and only one class of share exists subsequently.
Perhaps obviously, for both secondary and primary transactions, more proximate pricing indications are generally more directly useful for fair value measurement. Older, orderly transactions involving willing and informed parties would have been reflective of fair value at the time they occurred. If a more recent pricing observation is not available, current value indications could still be reconciled with the older transactions by considering changes at the company (and general market conditions) since the transaction date.
Planned future fundraising rounds could also provide useful information. In addition to the factors already addressed, a fair value analysis at the measurement date would need to consider the risk around the closing of the transaction.
Besides the usual transactions, other events that occur subsequent to the measurement date could still have a bearing on fair value. Future events that were known or knowable to market participants at the valuation date should be considered in measuring fair value. Events that were not known or knowable, but were still quite significant, may require separate disclosures.
An example of a special event on the horizon is an impending IPO. An IPO is usually a complex process that is executed over a relatively long period. At various points during the process, the company’s board or management, or the underwriter (investment banker) may project or estimate the IPO price. These estimates may change frequently or significantly until the actual IPO price is finalized. Even the actual IPO price may be subject to specific supply and demand conditions in the market at or near the date of final pricing. Subsequent trading often occurs at prices that vary (sometimes drastically) from the IPO price. For these reasons, estimates or actual IPO prices are unlikely to be reflective of fair value for pre-IPO companies.
Setting aside the uncertainties and idiosyncrasies around the process, an IPO provides ready liquidity for investors and access to public capital markets for the company. The act of going public ameliorates the risks associated with the lack of marketability of investments in a company. Easier access to public markets generally lowers the cost of capital, which would engender higher enterprise values. Accordingly, fair value of a minority equity interest prior to an IPO is generally perceived to be meaningfully different from (estimates of) the IPO price.
Incorporating information from observed or prospective transactions can help calibrate the PWERM or the OPM (or other valuation methods), along with the underlying assumptions. However, a valuation analysis should evaluate the transactions to assess whether they are relevant. Even when they are not directly relevant, transactions can help gauge the reasonableness of valuation conclusions.
Valuation specialists are fond of thinking their craft involves a blend of technique and judgment. The specific mechanics of models and methods, and related computations, represent the technical aspect. There is certainly some judgment involved in developing or selecting the assumptions that feed into the models. Judgment plays a bigger role, perhaps, in weaving together the models, assumptions, valuation conclusions, and other facts and circumstances, including transactions, into a coherent and compelling narrative.
Contact Mercer Capital with your valuation needs. We combine technical knowledge and judgment developed over decades of practice to serve our clients.
1 The discussion presented in this article is a summary of our reading of the relevant sections in the following:
Valuation of Privately-Held-Company Equity Securities Issued as Compensation, AICPA Accounting & Valuation Guide, 2013
Valuation of Portfolio Company Investments of Venture Capital and Private Equity Funds and Other Investment Companies, Working Draft of AICPA Accounting & Valuation Guide, 2018
Equity-based compensation has been a key part of compensation plans for years. When the equity compensation involves a publicly traded company, the current value of the stock is known and so the valuation of share-based payments is relatively straightforward. However, for private companies, the valuation of the enterprise and associated share-based compensation can be quite complex.
The AICPA Accounting & Valuation Guide, Valuation of Privately-Held-Company Equity Securities Issued as Compensation, describes four criteria that should be considered when selecting a method for valuing equity securities:
With these considerations in mind, let’s take a closer look at the four most common methods used to value private company equity securities.
The Current Value Method (CVM) estimates the total equity value of the company on a controlling basis (assuming an immediate sale) and subtracts the value of the preferred classes based on their liquidation preferences or conversion values. The residual is then allocated to common shareholders. Because the CVM is concerned only with the value of the company on the valuation date, assumptions about future exit events and their timing are not needed. The advantage of this method is that it is easy to implement and does not require a significant number of assumptions or complex modeling.
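As a rough sketch, the CVM allocation reduces to a subtraction and a division. The figures below are hypothetical, and the sketch assumes preferred takes its liquidation preference (ignoring the conversion-value alternative a full analysis would test):

```python
# Current Value Method (CVM) sketch with hypothetical inputs.
# Assumes preferred is worth its liquidation preference; a full analysis
# would use the greater of the preference or the as-converted value.

def cvm_common_value(total_equity, preferred_preference, common_shares):
    """Allocate residual equity value to common shares under the CVM."""
    residual = max(total_equity - preferred_preference, 0.0)
    return residual / common_shares

# $10M total equity, $6M preferred preference, 1M common shares outstanding
price = cvm_common_value(10_000_000, 6_000_000, 1_000_000)
print(round(price, 2))  # 4.0 per common share
```

The `max(..., 0.0)` floor reflects the method's key limitation: when total equity value sits below the preference, the CVM assigns common a value of zero, ignoring the option-like upside that common would still hold in a forward-looking model.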
However, because the CVM is not forward looking and does not consider the option-like payoffs of the share classes, its use is generally limited to two circumstances. First, the CVM could be employed when a liquidity event is imminent (such as a dissolution or an acquisition). The second situation might be when an early-stage company has made no material progress on its business plan, has had no significant common equity value created above the liquidation preference of the preferred shares, and for which no reasonable basis exists to estimate the amount or timing of when such value might be created in the future.
Generally speaking, once a company has raised an arm’s-length financing round (such as venture capital financing), the CVM is no longer an appropriate method.
The Probability-Weighted Expected Return Method (PWERM) is a multi-step process in which value is estimated based on the probability-weighted present value of various future outcomes. First, the valuation specialist works with management to determine the range of potential future outcomes for the company, such as IPO, sale, dissolution, or continued operation until a later exit date. Next, future equity value under each scenario is estimated and allocated to each share class. Each outcome and its related share values are then weighted based on the probability of the outcome occurring. The value for each share class is discounted back to the valuation date using an appropriate discount rate and divided by the number of shares outstanding in the respective class.
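The mechanics can be illustrated with a minimal sketch. The scenarios, probabilities, per-share outcomes, and discount rate below are entirely hypothetical, and a real PWERM would first allocate each scenario's total equity value across the cap table before discounting:

```python
# PWERM sketch: probability-weighted present value per common share.
# All scenarios, probabilities, values, and the discount rate are hypothetical.

def pwerm_share_value(scenarios, discount_rate):
    """scenarios: list of (probability, future_value_per_share, years_to_exit)."""
    value = 0.0
    for prob, future_per_share, years in scenarios:
        # discount each scenario's per-share outcome back to the valuation date
        value += prob * future_per_share / (1 + discount_rate) ** years
    return value

scenarios = [
    (0.30, 25.00, 2.0),  # IPO
    (0.45, 15.00, 2.0),  # strategic sale
    (0.20,  6.00, 3.0),  # continued operation, later exit
    (0.05,  0.00, 1.0),  # dissolution
]
print(round(pwerm_share_value(scenarios, 0.25), 2))  # 9.73
```

Note how sensitive the result is to the scenario probabilities and the discount rate — the assumptions the text identifies as hardest to support objectively.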
The primary benefit of the PWERM is its ability to directly consider the various terms of shareholder agreements, rights of each class, and the timing when those rights will be exercised. The method allows the valuation specialist to make specific assumptions about the range, timing, and outcomes from specific future events, such as higher or lower values for a strategic sale versus an IPO. The PWERM is most appropriate to use when the period of time between the valuation date and a potential liquidity event is expected to be short.
Of course, the PWERM also has limitations. PWERM models can be difficult to implement because they require detailed assumptions about future exit events and cash flows. Such assumptions may be difficult to support objectively. Further, because it considers only a specific set of outcomes (rather than a full distribution of possible outcomes), the PWERM may not be appropriate for valuing option-like payoffs like profit interests or warrants. In certain cases, analysts may also need to consider interim cash flows or the impact of future rounds of financing.
The Option Pricing Method (OPM) treats each class of shares as call options on the total equity value of the company, with exercise prices based on the liquidation preferences of the preferred stock. Under this method, common shares would have material value only to the extent that residual equity value remains after satisfaction of the preferred stock's liquidation preference at the time of a liquidity event. The OPM typically uses the Black-Scholes Option Pricing Model to price the various call options.
In contrast to the PWERM, the OPM begins with the current total equity value of the company and estimates the future distribution of outcomes using a lognormal distribution around that current value. This means that two of the critical inputs to the OPM are the current value of the firm and a volatility assumption. Current value of the firm might be estimated with a discounted cash flow method or market methods (for later-stage firms) or inferred from a recent financing transaction using the backsolve method (for early-stage firms). The volatility assumption is usually based upon the observed volatilities of comparable public companies, with potential adjustment for the subject entity’s financial leverage.
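A minimal sketch of the core computation follows, assuming a simplified capital structure with a single breakpoint at the preferred liquidation preference. All inputs are hypothetical, and a real OPM would model multiple breakpoints to capture conversion and participation features:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes value of a European call on total equity value S."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# Hypothetical inputs: $10M current equity value, $6M preferred liquidation
# preference (the breakpoint above which common participates), 3-year horizon,
# 2% risk-free rate, 60% volatility from comparable public companies.
common_total = bs_call(10e6, 6e6, 3.0, 0.02, 0.60)  # roughly $5.7 million
preferred_total = 10e6 - common_total
```

In effect, common's aggregate value is the call option struck at the preference, and preferred holds the remainder. The lognormal distribution is baked into Black-Scholes, which is exactly the assumption (and limitation) the text describes.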
The OPM is most appropriate for situations in which specific future liquidity events are difficult to forecast. It can accommodate various terms of stockholder agreements that affect the distributions to each class of equity upon a liquidity event, such as conversion ratios, cash allocations, and dividend policy. Further, the OPM considers these factors as of the future liquidity date, rather than as of the valuation date.
The primary limitations of the OPM are its assumption that future outcomes can be modeled using a lognormal distribution and its reliance on (and sensitivity to) key assumptions like assumed volatility. The OPM also does not explicitly allow for dilution caused by additional financings or the issuance of options or warrants. The OPM can only consider a single liquidity event. As such, the method does not readily accommodate the right or ability of preferred shareholders to early-exercise (which would limit the upside for common shareholders). The potential for early-exercise might be better captured with a lattice or simulation model. For an in-depth discussion on the OPM, see our whitepaper A Layperson’s Guide to the Option Pricing Model at mer.cr/2azLnB.
The Hybrid Method is a combination of the PWERM and the OPM. It uses probability-weighted scenarios, but with an OPM to allocate value in one or more of the scenarios.
The Hybrid Method might be employed when a company has visibility regarding a particular exit path (such as a strategic sale) but uncertainties remain if that scenario falls through. In this case, a PWERM might be used to estimate the value of the shares under the strategic sale scenario, along with a probability assumption that the sale goes through. For the scenario in which the transaction does not happen, an OPM would be used to estimate the value of the shares assuming a more uncertain liquidity event at some point in the future.
The primary advantage of the Hybrid Method is that it allows for consideration of discrete future liquidity scenarios while also capturing the option-like payoffs of the various share classes. However, this method typically requires a large number of assumptions and can be difficult to implement in practice.
The methods for valuing private company equity-based compensation range from simplistic (like the CVM) to complex (like the Hybrid Method). In addition to the factors discussed above, the facts and circumstances of a particular company’s stage of development and capital structure can influence the complexity of the valuation method selected. In certain instances, a recent financing round or secondary sale of stock becomes a datapoint that needs to be reconciled to the current valuation analysis and may even prove to be indicative of the value for a particular security in the capital stack (see “Calibrating or Reconciling Valuation Models to Transactions in a Company’s Equity” on page 6). At Mercer Capital, we recommend a conversation early in the process between company management, the company’s auditors, and the valuation specialist to discuss these issues and select an appropriate methodology.
To the lay person, transportation may seem like the farthest end of the spectrum from the technology industry – telephone orders and paper shipment tracking. But those in the know understand just how tech-enabled the industry has become. Advancements in machine learning, artificial intelligence, and predictive technology could have the power to disrupt the way goods are transported, stored, and tracked. And investors are clearly willing to place bets on that.
Over the past few years, FreightTech has emerged as its own category of technology. The level of excitement in the space grew in 2018 as global venture capital investment increased to $2.9 billion from $1.3 billion the prior year. FreightTech is on track for another year of exponential growth in 2019, with $1.6 billion of funding raised in the first quarter alone.
The willingness of industry participants to adopt logistics technology is evident as well. Corporate players and major OEMs have spun up innovation departments, startup accelerators, and investment arms in order to find and fund new technology. However, it’s not only the companies that directly benefit from this technology that are investing capital in the space. Technology players recognize the potential for returns on transportation investments, too. Alphabet’s venture capital arm, CapitalG, led a $185 million investment in Convoy, a tech-enabled freight matching startup, at the end of 2018. The Series C round valued Convoy at $1.0 billion and brought the company’s total capital raised to $265 million. SoftBank Vision Fund, known for making big bets on disruptive technology, got in on the game too. The fund invested $1.0 billion in Flexport, a digital platform for freight forwarding and logistics, at the start of the year. The investment valued the company at $3.2 billion.
The table below shows the five largest North American FreightTech investments in the first quarter of 2019 by round size.
Investment in FreightTech has not only grown in terms of aggregate investment, but the average size of deal rounds has increased as well, mirroring the trends in the overall venture capital landscape. According to Morningstar, the average round size for a Series B round in the FreightTech industry increased 78% from $24.5 million in 2014 to $43.6 million in 2017.
The classification of transportation and logistics startups differs, but it is clear that there is growing innovation in many different facets of the industry. It is evident that technological change in the freight transportation industry is about far more than just digitizing processes that once involved paper or fax machines. The application of advanced data and analytics to the transportation and logistics industry has the potential to change the global movement of freight.
Originally published in the Value Focus: Transportation & Logistics, First Quarter 2019.
I recently attended the 2019 Spring Conference of the National Association of Dealer Counsel (NADC) in Dana Point, California. This article provides a couple of key takeaways from the day and a half of sessions on current conditions in the industry.
Car subscription services are becoming a popular alternative to leasing. Each service varies in structure and is operated by dealers, manufacturers, or third parties. Some resemble traditional leases with monthly payments but allow customers more flexibility and frequency in swapping vehicles as preferences and needs change.
Some manufacturers are only initially offering subscription services regionally, or in specific markets (BMW and Mercedes-Benz are offering vehicle subscription services in the Nashville market).
There has been a lot of talk in the news recently about impending tariffs affecting the auto industry. Many unknowns and questions remain—Will President Trump enact tariffs? How will they affect the auto industry?
A report by the Center for Automotive Research has compiled statistics to show the likely effects of tariffs on new/used vehicle pricing, estimated losses for dealers, and projected employment and GDP loss (as seen below). With so much at stake, the auto dealer industry will keep a close eye on any new developments.
Amid the many changes that have resulted from the recent tax reform (the Tax Cuts and Jobs Act (TCJA)), here are a few directly impacting the auto dealer industry:
Originally published in the Value Focus: Auto Dealer Industry Newsletter, Year-End 2018.
This article explains dealership metrics and performance statistics–what they mean, how to evaluate them, and where a particular store stacks up. As always, performance measures are relative. We are relying upon averages provided by NADA as well as our experience working with auto dealers.1
A few key terms help frame our discussion:
Specifically, we are relying upon information from the average dealership profile for 2017 and 2018 from NADA.2
For the average dealership profile, our experience has been that this department comprises between 50% – 60% (58% for 2017-2018 per NADA) of total gross sales. The front-end gross margin on new vehicles can vary over time and is somewhat controlled by the manufacturer. Typically, dealerships track and measure front-end gross margin on a per unit basis and can evaluate the overall performance of that figure by comparing it to prior years. Most domestic, import, or luxury dealerships experience a lower front-end gross margin on new vehicles than on used vehicles. Conversely, most high-line dealerships experience a higher front-end gross margin on new vehicles than on used vehicles.
New vehicles generally have a higher average retail selling price, lower front-end gross margins, and sell fewer units than used vehicles. These factors result in new vehicles comprising approximately 25% of total overall gross profits for an average dealership.
For the average dealership profile, our experience has been that this department comprises between 25% – 40% of total gross sales. These percentages can vary depending on franchise/dealership type and regional location. Like new vehicles, dealerships also track front-end gross profits on used vehicles on a per unit basis. Most domestic, import, or luxury dealerships experience a higher front-end gross margin on used vehicles than on new vehicles.
The sale of used vehicles should not be overlooked when assessing the value of a dealership. More often than not, front-end gross margins on used vehicles will be higher than on new vehicles. Additionally, the sale of both new and used vehicles puts more cars in service and helps drive profitability to fixed operations (discussed in the next section). Based on our experience valuing new car dealerships, the range of used retail vehicles sold to new retail vehicles sold is 1.00 to 1.25. This figure can vary by dealership and can also be quite cyclical throughout the year. Further, our experience shows this ratio can climb to 1.5 to 1.6 when considering dealerships with successful wholesale used vehicle sales.
Used vehicles generally have a lower average retail selling price, higher front-end gross margins, and sell more units than new vehicles. These factors result in used vehicles comprising approximately 25% of total overall gross profits for an average dealership, or about even with the total overall gross profit contribution from new vehicles.
The long-term success of a dealership’s fixed operations is often tied to its effectiveness in selling new and used vehicles over time. These activities help to build the brand in a market. Another critical factor in the success and level of profitability of fixed operations is the auto industry cycle. In our last issue, we discussed the cyclicality of the industry not only in terms of certain months during the year, but also year-over-year.
Two such indicators of the auto industry life cycle are the SAAR and the average age of cars in service. As shown on page 14 of the newsletter, the monthly SAAR began to level off in late 2018 and into the first few months of 2019 (despite a slight spike in March 2019), evidencing slower new light vehicle sales. Additionally, per our previous newsletter, the average age of cars in service was approximately ten years.
Both factors suggest that the fixed operations of successful dealerships should experience an uptick in the short term, mitigating moderate/sluggish new vehicle sales. When customers hold onto their cars longer, they are less likely to spend money on a new or used vehicle, but maintenance needs on their current vehicles will likely increase.
For the average dealership profile, our experience suggests that the service department comprises between 10% – 15% of total gross sales. However, this department is typically the most profitable in terms of a percentage of sales. The combination of much higher margins on lower sales results in the service department averaging 45% – 50% of total gross profits, or a much higher contribution level than new or used vehicles.
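To make the department economics concrete, the sales shares and margins discussed above can be combined into a rough gross profit mix. The sketch below uses illustrative figures (our assumptions, not NADA data) chosen to fall within the ranges cited in this article:

```python
# Illustrative dealership department mix (hypothetical figures, not NADA data).
# Sales shares and gross margins are assumptions chosen to roughly reproduce
# the profit contributions discussed in the text.
departments = {
    #            (share of total sales, gross margin on those sales)
    "new":     (0.58, 0.045),   # ~58% of sales, thin front-end margin
    "used":    (0.30, 0.085),   # higher margin per dollar of sales
    "service": (0.12, 0.40),    # 10-15% of sales, much higher margin
}

# Each department's gross profit per dollar of total sales is share * margin.
total_gross_profit = sum(share * margin for share, margin in departments.values())

for name, (share, margin) in departments.items():
    contribution = share * margin / total_gross_profit
    print(f"{name:8s} {share:5.0%} of sales -> {contribution:5.0%} of gross profit")
```

Run with these assumed inputs, the mix lands near the contributions cited above: new and used vehicles each contribute roughly a quarter of gross profit, while service contributes close to half despite being the smallest department by sales.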
Not all dealerships are created equal. This article is a general discussion of various dealership metrics and performance statistics. Each statistic is relative and should not be viewed in a vacuum. Hopefully, we have provided a better understanding of the various departments, including fixed vs. variable operations, and their contribution to overall profitability and the eventual value of a store. A graphic display of historical profitability and other metrics is presented later in the newsletter. For an understanding of how your dealership is performing, along with an indication of what your store is worth, contact us. We are happy to discuss your needs in confidence.
1 The data and discussion are based generally on average dealership profiles and do not pertain specifically to domestic dealerships, import dealerships, ultra high-line dealerships, etc. Specific types of dealerships and their regional location could have different performance metrics and criteria.
2 It’s important to note that other national sources of Blue Sky multiple data (Haig Partners and Kerrigan Advisors) classify the categories of dealerships slightly differently than NADA does, so all comparisons and discussion should be made in general terms.
Originally published in the Value Focus: Auto Dealer Industry Newsletter, Year-End 2018.
Originally presented at the 2019 AAML/BVR National Divorce Conference in Las Vegas, in this session, Z. Christopher Mercer, FASA, CFA, ABAR delves into more than 30 years of experience presenting complex valuation and damages issues to judges and juries. One of the key ideas of effective communication is the KISS principle, or “keep it simple, stupid.” The question is, how can we do that? Chris provides the techniques and templates necessary to communicate your position, and your opponent’s position, in such a way that judges can home in on and understand the most important information and why it’s important.
The trucking industry is wedged between a rock and a hard place when it comes to driver recruitment. Trucking companies are simultaneously exploring self-driving technology, while still convincing new entrants to the labor market that commercial driving is a career choice that will pay off. Beyond the less-than-glamorous work and lifestyle conditions of the occupation, those entering the labor force realize that the career path could be upended in the near term by the economic cycle and disrupted in the long term by the impending evolution of autonomous transportation. With several companies (like Tesla) beginning deployment of self-driving trucks, and numerous others deep in development of the technology, young workers may fear choosing a vocation that trucking companies are actively planning to automate.
Rob Sandlin, CEO of Patriot Transportation, emphasized these challenges in the company’s third quarter earnings call, “Management spends a good deal of time dealing with these issues surrounding driver shortage, including advertising, recruiting, compensation, dispatcher training and productivity among others.” With the tightening of the labor market, companies have found new ways to attract talent including investments in newer and more reliable assets, in-house training programs, incentive bonuses, and, of course, a simple increase in wages.
Executives at many of the largest trucking companies dedicated time in their third quarter investor calls and presentations to this issue. PAM identified several unique recruitment initiatives in its November corporate presentation. The company is taking advantage of temporary visitor qualifications through the B-1 visa program to increase labor capacity. This program allows commercial drivers with Mexican residence temporary entry to the United States for truck delivery. Additionally, the company’s new driver-friendly initiatives promote lifestyle and career improvements. Its “Driver Life-Cycle” program provides dedicated driver experience with a path toward ownership through a lease-to-own setup.
Patriot mentioned significant changes to its recruitment efforts, as well. “In the latter part of fiscal 2018, we implemented a significant change to our hiring process, we added [a] driver advocate position and introduced productivity-based driver pay, all in an effort to attract and retain drivers. We are encouraged by the increased number of drivers hired and in training since these implementations, and we’ll continue to monitor our progress for any needed adjustments to our plan.”
The driver shortage (estimated to reach 108,000 drivers by 2026) has sparked major shifts in the way hiring and training are conducted in the industry. While this shortage will hurt shippers until autonomous technology is fully developed, the long-term problem may actually lie in another labor pool: service technicians.
As new truck designs increase the level of technology on board, those who service them will have to develop more tech-focused expertise. Additional sensors, predictive technology, and, of course, autonomy will evolve the role of the truck mechanic as they start spending more time with computers than wrenches. While technical colleges and certificate programs continue to produce a skilled workforce, the supply of service technicians has not kept pace with the increasing demand.
Trucking companies have had to adapt to the shifting labor force trends and find new ways to fulfill maintenance needs. Like driver scarcity, mechanic shortages have caused companies to seek alternatives to traditional labor sourcing, from outsourcing labor needs to developing training programs.
Overall, employment in the transportation and warehousing industry grew 3.5% from October 2017 to October 2018, adding more than 183,700 jobs. Nearly 37,000 of these jobs were in the trucking industry, which experienced a 2.5% increase in employment over the prior year. The transportation industries are adding jobs faster than the overall non-farm economy, which experienced a more modest 1.7% increase in employment.
Despite labor pressures in the industry, economic activity and transportation demand remain strong. While executives will continue to monitor driver and mechanic shortages, the outlook for trucking in 2019 appears optimistic. John Roberts III, CEO of J.B. Hunt, summed up the industry sentiment well on the company’s third quarter earnings call.
Just final comment on the people side of things. Driver hiring has been a challenge. It’s been a challenge in the past. It presented us with the challenge like we have never seen before this year. In fact, our unseated need number got as high as [it’s] ever been. In about the last 60 days, we’ve seen that number come down a little bit through a number of internal efforts. And I think overall pay in the industry is starting to catch up a little bit. And so I think more people are becoming interested. But we’re making progress there and feel confident we’ll continue to get through that. Good year, some challenges, and frankly, we’re looking forward to heading into 2019.
Originally published in the Value Focus: Transportation & Logistics, Fourth Quarter 2018.
Since Bank Watch’s last review of net interest margin (“NIM”) trends in July 2016, the Federal Open Market Committee has raised the federal funds rate eight times after what was then the first rate hike (December 2015) since mid-2006. With the past two years of rate hikes and the current pause in Fed actions, it’s a good vantage point from which to look at the effect of interest rate movements on the NIM of small and large community banks (defined as banks with $100 million to $1 billion of assets and $1 billion to $10 billion of assets, respectively).
As shown in Figure 1, NIMs crashed in the immediate aftermath of the financial crisis, primarily because asset yields fell much more quickly than banks could reprice term deposits. NIMs subsequently rebounded as the asset refinancing wave subsided and banks were able to lower deposit rates. There followed a several-year period in which asset yields ground lower at a time when deposit rates could not be reduced further. This period was particularly tough for commercial banks with a high level of non-interest-bearing deposits.
Since rate hikes started, the NIM for both small and large community banks has increased about 20bps through year-end 2018 before experiencing some pressure in early 2019. The nine hikes by the Fed, to a target funds rate of 2.25% to 2.50%, amount to a 225bps increase.
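The basis-point arithmetic behind that comparison can be written out explicitly. This is a simple illustration of the figures cited above; the "pass-through" ratio is our framing, not a statistic from the source:

```python
# Nine 25bp hikes took the Fed funds target from 0.00-0.25% to 2.25-2.50%.
BPS = 0.0001                             # one basis point as a decimal
hikes = 9
funds_rate_increase = hikes * 25 * BPS   # 225bps = 2.25 percentage points

# Community bank NIMs expanded only about 20bps over the same period.
nim_expansion = 20 * BPS

# Fraction of the policy-rate increase that reached the margin
pass_through = nim_expansion / funds_rate_increase
print(f"Funds rate up {funds_rate_increase:.2%}; NIM up {nim_expansion:.2%} "
      f"({pass_through:.0%} pass-through)")
```

Only about a tenth of the policy-rate increase showed up in margins, which is the gap the balance sheet factors below help explain.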
At first pass, the expansion in the NIMs is less than might be expected; however, there are always a number of factors in bank balance sheets that will impact the NIM, including:
Recent incremental pressure on NIMs notwithstanding, community banks’ balance sheets were poised to take advantage of rising rates the past several years. The outperformance of bank stocks beginning in November 2016 reflected several factors, including an economic and regulatory backdrop that would allow the Fed to raise rates further and faster, and thereby support NIM expansion.
The underperformance of bank stocks since last fall reflects investor concern that this tailwind is ending, in addition to more general concerns about what a possible economic slowdown implies for credit costs. Telltale signs include the inversion of the Treasury yield curve and yields on the two-year and five-year Treasuries that, as of this writing, are below the low end of the Fed Funds target range.
Also, the spot and forward curves for 30-day Libor imply the Fed will cut the Funds target rate and other short-term policy rates one or two times by early 2020 (or stated differently, the December rate hike was a mistake).
The Federal Funds rate, the predominant influence on short-term interest rates, has remained unchanged since year-end 2018 at a target range of 2.25%–2.50% due to concerns about lower inflation figures and what they may forewarn about future economic growth as reflected in falling U.S. Treasury yields. The FOMC reiterated its wait-and-see approach on May 1. However, the sand appears to be shifting beneath the Fed’s feet.
The Wall Street Journal’s most recent Economic Forecasting Survey revealed an increasing belief that the Fed’s next move will be to cut rates. 51% of respondents said that a rate cut would be the next move, up from 44% in April, while 25.5% replied that the next rate increase would occur in 2020 or later. Fed officials have maintained their stance that a rate move in either direction will not occur soon.
As deposit costs initially lagged, but more recently moved with short-term interest rate hikes, the composition of a bank’s deposit base and funding structure has become increasingly important. As shown in Figure 4, the percentage of banks experiencing a rising cost of interest bearing deposits has steadily increased. Total funding costs have nearly doubled since year-end 2016 as depositors have reoriented funds toward accounts offering higher rates. Banks searching for funding either must engage in intense deposit competition or tap into higher-cost sources such as wholesale funding.
Going forward, community banks may face a modest reduction in NIMs because the yield curve is flat and the cost of incremental funding is expensive. Some community banks will choose to slow loan growth in order to protect margins; others will accept a lower margin. The predicament demonstrates yet again why deposit franchises are a key consideration for acquirers: banks with low-cost deposit franchises and excess liquidity are particularly attractive in the current market.
Originally published in Bank Watch, May 2019.
Lucas Parris, CFA, ASA-BV/IA, vice president, co-presented the session, “Employee Benefits Agency Consolidation and Valuation” with Mike Strakhov (Live Oak Bank) at the 2019 Workplace Benefits Renaissance Conference in Nashville, TN (February 20-22, 2019).
A short description of the session can be found below.
Insurance agency merger and acquisition activity has been at historic levels for the past few years. Employee benefit agency transactions represent a significant number of these annually. This session will address the current state of agency consolidation including trends, who’s buying and who’s selling and the overall impact to employee benefits distribution. We’ll also identify and discuss the important characteristics that drive value of an employee benefits agency.
It has been 34 years since the Delaware Supreme Court ruled in the landmark case Smith v. Van Gorkom (the Trans Union case), 488 A.2d 858 (Del. 1985), and thereby made the issuance of fairness opinions de rigueur in M&A and other significant corporate transactions. The backstory of Trans Union is that the board approved an LBO engineered by the CEO without hiring a financial advisor to vet a transaction that was presented to the board without any supporting materials.
Why would the board approve a transaction without extensive review? Perhaps there were multiple reasons, but bad advice and price probably were driving factors. An attorney told the board they could be sued if they did not approve a transaction that provided a hefty premium ($55 per share vs a trading range in the high $30s).
Although the Delaware Supreme Court found that the board acted in good faith, it held that the board had been grossly negligent in approving the offer. The Court expanded the concept of the Business Judgment Rule to include the duty of care in addition to the duties of good faith and loyalty. The Trans Union board did not make an informed decision even though the takeover price was attractive. The process by which a board goes about reaching a decision can be just as important as the decision itself.
Directors are generally shielded from challenges to corporate actions the board approves under the Business Judgment Rule provided there is not a breach of one of the three duties; however, once any of the three duties is breached, the burden of proof shifts from the plaintiffs to the directors. In Trans Union, the Court suggested that had the board obtained a fairness opinion, it would have been protected from liability for breach of the duty of care.
The suggestion was consequential. Fairness opinions are now issued in significant corporate transactions for virtually all public companies, as well as for many private companies and banks with minority shareholders, when a takeover, material acquisition, or other significant transaction is under consideration.
Although not as widely practiced, there has been a growing trend for fairness opinions to be issued by independent financial advisors hired solely to evaluate the transaction, as opposed to the banker who is paid a success fee in addition to a fee for issuing the fairness opinion.
While the following is not a complete list, consideration should be given to obtaining a fairness opinion if one or more of these situations are present:
A fairness opinion involves a review of a transaction from a financial point of view that considers value (as a range concept) and the process the board followed. The financial advisor must look at pricing, terms, and consideration received in the context of the market for similar banks. The advisor then opines that the consideration to be received (sell-side) or paid (buy-side) is fair from a financial point of view of shareholders (particularly minority shareholders) provided the analysis leads to such a conclusion.
The fairness opinion is a short document, typically a letter. The supporting work behind the fairness opinion letter is substantial, however, and is presented in a separate fairness memorandum or equivalent document.
A well-developed fairness opinion will be based upon the following considerations that are expounded upon in an analysis that accompanies the opinion:
It is important to note what a fairness opinion does not prescribe, including:
Due diligence work is crucial to the development of the fairness opinion because there is no bright line test that consideration to be received or paid is fair or not. Mercer Capital has nearly four decades of experience in assessing bank (and non-bank) transactions and the issuance of fairness opinions. Please call if we can assist your board.
Originally appeared in Mercer Capital’s Bank Watch, April 2019.
Learning objectives include:
Karolina Calhoun, CPA/ABV/CFF, Vice President, presented “How to Value a Business & Situations That Give Rise to a Valuation” at the Tennessee Society of CPAs West Tennessee Chapter monthly meeting in Jackson, TN.
The valuation of a business can be a complex process, requiring accredited business valuation and forensic accounting professionals. This session will take a deep dive into the process and methodologies used in a valuation. Also covered will be the situations that give rise to valuation services such as estate/tax planning, ESOP annual valuation, M&A transactions, GAAP/ financial reporting, family law marital dissolution, buy-sell disputes, and corporate litigation.
In traditional divorces, each spouse engages a lawyer who fights hard to “win.” Their weapons can include bringing in their own financial professional to value financial assets. Naturally the neutrality of those valuations may be suspect in the other party’s eyes, even if the valuator follows all proper procedures. In collaborative divorce, each spouse still hires a lawyer, but the goal is to reach a settlement that satisfies each party. Neutral consultants, such as financial and mental health professionals, are also frequently involved. The model is “troubleshoot and problem-solve” rather than “fight and win.”
The collaboration is carried out through a series of meetings in which the couple and their attorneys negotiate over issues such as property division, alimony, child support, and custody. The meetings are quarterbacked by the mental health professional, who prioritizes the goals for each session, monitors the emotional climate, and keeps things on track. Each attorney is responsible for looking out for the interests of his or her client, but rather than using the law to win, the attorneys focus on making sure their clients understand the legal issues involved and how a court might view them. The role of the financial professional, who is paid by both parties, is to provide an objective assessment of the financial issues involved. If one of the spouses has a business, the financial neutral provides an arm’s-length valuation and can also serve to educate the other spouse about the business, if needed. After several meetings, the financial neutral produces a marital balance sheet, laying out the couple’s financial landscape.
While collaborative divorce is not for everyone, in the right settings it can have these advantages:
Divorces litigated through the court system can often take a year or more to reach a conclusion. The collaborative process can move faster because there is no waiting for motions to be filed and hearings to be held.
Attorneys likely will have fewer billable hours since there is less engagement with the courts. There is only one financial consultant rather than two. In addition, because litigated cases tend to take more time, there may be a need for revised valuations as economic conditions change while the divorce makes its way through the process.
While there certainly can be tension between the two spouses during the collaborative process, the temperature tends to be lower when the working model is problem-solving rather than fighting. The addition of a mental health professional to the team also can serve to defuse tensions, and the neutrality of the financial professional can serve to reduce distrust.
When divorce cases reach the courtroom, subjective judgments by the judge can come into play. While Tennessee law spells out guidelines for judges in divorces, they still have latitude.
Divorce settlements litigated through the courts become public record. Settlements that result from the collaborative process do not. This can be of particular importance when one or both spouses are high-profile.
Collaborative divorce is not for everyone. Sometimes distrust between the parties has become so intense that litigation is the only way out. However, many divorcing spouses have found that a collaborative process can reduce tensions and cost and provide a result satisfactory to both parties. Attorneys can benefit from numerous services provided by financial professionals in litigated and collaborative divorce matters. At Mercer Capital, we have two professionals who are trained in the Collaborative Practice and provide assistance to attorneys in collaborative and litigated divorce matters. Please contact us if we can be of assistance to you and your clients.
Originally published in Mercer Capital’s Tennessee Family Law Newsletter, First Quarter 2019.
A lifestyle analysis is an analysis of each party’s sources of income and expenses. It is used in the divorce process to demonstrate the standard of living during the marriage and to determine the living expenses and spending habits of each spouse. It is typically a more in-depth analysis than the financial affidavits required in the divorce process and is prepared by a forensic accountant. The details in the analysis serve as verification of the net worth, income, and expense statements submitted by both spouses and can help a judge determine the equitable distribution of marital assets as well as alimony needs.
The lifestyle analysis pulls together all considerations and provides a visual of income and expenses over the remaining life expectancy. Through an illustration of the aggregate sources of income and expenses over time, one can discern what funds are actually required (and whether these funds are available) to maintain the standard of living, i.e., to fund expenses. The exercise then yields relative analyses (percentage comparisons and trend analyses), and ultimately, an illustration of net worth at a point in time, as well as net worth accumulation over time.
In Tennessee, the decree for support of a spouse is governed by Tenn. Code Ann. § 36-5-121(i). Careful consideration must be given to the factors listed in the statute when determining the historical lifestyle (standard of living) as well as reasonable need into the future. Twelve factors assist in determining whether the granting of an order for payment of support and maintenance to a party is appropriate, as well as determining the nature, amount, length of term, and manner of payment. Refer to § 36-5-121(i) for the full listing.
Although each of the factors must be considered when relevant to the parties’ circumstances, the first factor, “the relative earning capacity, obligations, needs, and financial resources of each party, including income from pension, profit sharing or retirement plans and all other sources,” presents the two most important components: the disadvantaged spouse’s need and the obligor spouse’s ability to pay.
Hence arises the “pay & need analysis,” also known as the “lifestyle analysis.”
The following documentation provides financial information used in the analysis and is typically requested during the discovery process.
There are many moving pieces in constructing the lifestyle analysis, and the components can be quite different from case to case. During the preliminary stages, the financial expert/forensic accountant will obtain pertinent documents from the aforementioned documentation in order to create the marital balance sheet (and identify potential separate property) and assess historical and current earnings and expenses/spending habits. Additionally, the expert may assist in building a budget based on historical expenses. The expert will review retirement plans and annual contributions, brokerage accounts, and cash and savings accounts, along with their respective average rates of return and varying tax obligations. The risk tolerance of the individuals can even be considered in relation to future rates of return. For example, a person with ample disposable cash may be willing to invest in riskier ventures with potentially higher returns, while a person with limited disposable cash may choose to invest more conservatively.
The investigative process may even lead the parties to establish the “true income” of a spouse suspected of perpetrating fraud and to identify any possible hidden assets or dissipation of marital assets.
Ultimately, the lifestyle analysis illustrates the sources of income, tax obligations, and disposable cash before and after expected expenses. This tool is valuable because it leads to further analyses such as relative analyses of gross earnings comparisons and after-tax disposable cash comparisons, among others. The analysis allows comparison on relative terms not just dollar amounts.
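To make the comparison concrete, the core arithmetic of an after-tax disposable cash comparison can be sketched as below. The figures and flat effective tax rates are hypothetical; an actual lifestyle analysis is far more granular, modeling each income source, account, and expense category separately.

```python
# Illustrative sketch of an after-tax disposable cash comparison.
# All figures and the flat effective tax rates are hypothetical.

def disposable_cash(gross_income, effective_tax_rate, annual_expenses):
    """After-tax cash remaining once expected annual expenses are paid."""
    after_tax_income = gross_income * (1 - effective_tax_rate)
    return after_tax_income - annual_expenses

# Obligor spouse: higher earnings, surplus after expenses
spouse_a = disposable_cash(gross_income=250_000, effective_tax_rate=0.30,
                           annual_expenses=110_000)
# Disadvantaged spouse: lower earnings, shortfall after expenses
spouse_b = disposable_cash(gross_income=60_000, effective_tax_rate=0.18,
                           annual_expenses=75_000)

# A positive figure indicates ability to pay; a negative figure indicates need.
print(f"Spouse A: ${spouse_a:,.0f}  Spouse B: ${spouse_b:,.0f}")
```

The sign of each result frames the two components from the statute's first factor: Spouse A's surplus speaks to ability to pay, while Spouse B's shortfall speaks to need.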
Another valuable result of the lifestyle analysis is the ability to assess the parties’ net worth at multiple points in time. The net worth accumulation analysis illustrates the difference between the division of net worth at the date of divorce and the division of net worth at the date of death, as well as the net worth accumulation between those two points in time. This process may highlight that what appears reasonable at one point in time may not be reasonable when extrapolated over time. When used as trial demonstratives, these illustrations can assist the trier of fact in determining the disadvantaged spouse’s need and the obligor spouse’s ability to pay.
For a fact pattern and step-by-step illustration, refer to my Lifestyle / Pay & Need Analysis presentation from the 2018 AICPA Forensic & Business Valuation Conference.
In financial situations that may be scrutinized by regulators, courts, tax collectors, and a myriad of other lurking adversaries, the financial, economic, and accounting experience and skills of a financial expert are invaluable. The details in the lifestyle analysis can help determine the equitable distribution of marital assets as well as alimony needs.
Because no two cases are alike, all components of the analysis must be carefully assessed. Complexities that may need further consideration include, but are not limited to:
A competent financial expert will be able to define and quantify the financial aspects of a case and effectively communicate the conclusion. For more information or to discuss your matter, please don’t hesitate to contact us.
Originally published in Mercer Capital’s Tennessee Family Law Newsletter, First Quarter 2019.
I ventured into the Arizona desert again this year to Bank Director’s Acquire or Be Acquired Conference (“AOBA”) in Phoenix in late January. This year I was struck by the dichotomous outlook for the banking sector that reminded me of Dickens’s famous line: “It was the best of times, it was the worst of times…”
The weather was lovely. Phoenix/Scottsdale is the place to be in late January, and this year did not disappoint with sunny weather and a high of around 70 each day. At the same time, much of the country was feeling the effects of a severe polar vortex that caused temperatures to plunge well below zero in the Upper Midwest and Great Plains. Many of the attendees from that area were forced to stay a day or two longer due to airline cancellations.
The operating environment for banks reflected a similar dichotomy. Take the market, for example. Most banks produced very good earnings in 2018, and many produced record earnings, due to a good economy, the reduction in corporate tax rates, and margin relief as the Fed raised short-term interest rates four times, further distancing itself from the zero interest rate policy (“ZIRP”) implemented in late 2008.
Nonetheless, bank stocks, along with most industry sectors, were crushed in the fourth quarter. The SNL Small Cap US and Large Cap US Bank Indices declined 16% and 17% respectively. Several AOBA sessions opined that valuations based on price-to-forward earnings multiples were at “financial crisis” levels as investors debated how much the economy could slow in 2019 and 2020 and thereby produce much lower earnings than Wall Street’s consensus estimates.
Within the industry, the best of times vs. worst of times (or not-as-good times) theme extended to size. Unlike past eras when small (to a point) was viewed as an advantage relative to large banks, the consensus has flipped. Large banks today are seen as having a net advantage in creating operating leverage, technology spending, better mobile products for the all-important millennials, and greater success in driving deposit growth.
Additionally, one presenter noted that larger publicly traded banks that are acquisitive have been able to acquire smaller targets at lower price/tangible book multiples than the multiple at which the shares issued for the target trade in the public market and thereby incur no or minimal dilution to tangible BVPS.
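The arithmetic behind that observation can be sketched with hypothetical numbers: in an all-stock deal, when the price/tangible book multiple paid for the target is below the multiple at which the buyer's own shares trade, the transaction is accretive (or at least non-dilutive) to tangible book value per share.

```python
# Hypothetical all-stock deal math, ignoring purchase accounting
# adjustments (goodwill, deal intangibles, costs) for simplicity.

def pro_forma_tbvps(acq_tbv, acq_shares, acq_price, target_tbv, deal_ptbv):
    """Pro forma tangible book value per share after an all-stock deal."""
    deal_value = target_tbv * deal_ptbv   # price paid for the target
    new_shares = deal_value / acq_price   # shares issued to the seller
    return (acq_tbv + target_tbv) / (acq_shares + new_shares)

# Acquirer: $500M tangible book, 50M shares, stock at $20 (2.0x P/TBV)
# Target: $100M tangible book, acquired at 1.5x P/TBV
before = 500e6 / 50e6
after = pro_forma_tbvps(500e6, 50e6, 20.0, 100e6, 1.5)
print(f"TBVPS before: ${before:.2f}, after: ${after:.2f}")
```

Because the buyer issues shares "worth" 2.0x tangible book to buy tangible book at 1.5x, pro forma TBVPS rises rather than falls in this illustration.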
The most thought-provoking sessions dealt with the intensifying impact of technology. Technology is not a new subject for AOBA, but the increasingly large crowds that attended technology-focused sessions demonstrated that this issue is on the minds of many bankers and directors. While technology is a tool to be used to deliver banking services, I think the unasked question most were thinking was: “What are the implications of technology for the value of my bank?”
Several sessions noted big banks that once hemorrhaged market share are proving to be adept at deposit gathering in larger metro markets while community banks still perform relatively well in second-tier and small markets. Technology is helping drive this trend, especially among millennials who do not care much about brick-and-mortar but demand top-notch digital access. The efficiency and technology gap between large and small banks is widening according to the data, while both small and large banks are battling new FinTech entrants as well as each other.
Not all technology-related discussions were negative, however. Digital payment network Zelle (owned indirectly by Bank of America, BB&T, Capital One, JPMorgan Chase, PNC, US Bank, and Wells Fargo) has grown rapidly since it launched in 2017. Payment volume in dollar terms now exceeds millennial-favorite Venmo, which is owned by PayPal. Also, JPMorgan Chase rolled out a new online brokerage offering that provides free trades for clients in an effort to add new brokerage and banking clients while also protecting its existing customer franchise.
In addition to the best of times/worst of times theme, I picked up several ideas about what actions banks large and small can take to create value.
There was a standing room only crowd for the day one FinXTech session: “The Next Wave of Innovation.” This stood in stark contrast to the first AOBA conference I attended, which took place during the financial crisis. Technology was hardly mentioned then, and most sessions focused on failed bank acquisitions. Clearly, this year’s crowd proved that technology is top of mind for many bankers even if the roadmap is hazy. A key takeaway is that a digital technology roadmap must be woven into the strategic plan so that an institution will be positioned to take advantage of the opportunity that technology creates to enhance customer service and lower costs. Further, emerging trends suggest that technology may help in assessing credit risk beyond credit scores. To assist banks in creating a FinTech roadmap, Bank Director recently unveiled a new project called FinXTech Connect that provides a tool bankers can use to consider and analyze potential FinTech partners.
During our (Mercer Capital) session, Andy Gibbs and I argued for becoming a “triple threat” bank, noting that banks with higher fee income, superior efficiency ratios, and greater technology spending were being rewarded in the public market with better valuations all else equal (see table below). While we do not advocate for heavy tech spending as a means to an ill-defined objective, the evidence points to a superior valuation when technology is used to drive higher levels of fee income and greater operating leverage. For more information, view our slide deck.
While there was a lot of discussion about an eventual slowdown in the economy and an inflection in the credit cycle, several sessions highlighted that a downturn will represent the best opportunity for those who are well prepared to grow. The key takeaway is to have a plan for both the good and the bad economic times to seize opportunity. Technology can play a role in a downturn by helping add customers at very low incremental costs.
On the M&A front, two M&A nuggets from attorneys stood out as well as a note about MOEs (mergers of equals):
We will likely be back at AOBA next year and hope to see you there. In the meantime, if you have questions or wish to discuss a valuation or transaction need in confidence, don’t hesitate to contact us.
By now, many are familiar with the changes from the Tax Cuts and Jobs Act (TCJA); however, specific changes related to family law and alimony deductibility went into effect in 2019.
We discussed many of these in a prior newsletter. The changes are as follows.
For more information, see this helpful reference.
Originally published in Mercer Capital’s Tennessee Family Law Newsletter, First Quarter 2019.
Developing a fintech strategy for your bank to enhance profitability, efficiency, shareholder value, and customer satisfaction can be challenging. This session helps attendees navigate fintech and develop a fintech strategy; provides case studies of successful partnerships between community/regional banks and fintechs; and gives an overview of fintech valuation and M&A trends.
FinTech companies are the emerging and hyped sector of the financial services industry. Looking at FinTech’s recent activity, people can see that many of these companies begin as start-ups and a few exciting years later, are able to raise millions of dollars in hopes of becoming the next “unicorn” – an industry term describing a tech company valued at a billion dollars or more. While this business trajectory may seem simple and attractive, FinTech companies usually have a highly complex structure made up of many investors of different origins, including venture, corporate, and/or private equity, all with different preferences and capital structures.
Valuing a FinTech company can be complicated and difficult, but carries important significance for the company’s employees, investors, and stakeholders. While FinTech companies differ widely, including in the niche (payments, solutions, technologies, etc.) in which they operate and in their stage of development, understanding the value of a FinTech company is critically important. More specifically, within the FinTech industry, an exciting niche termed InsurTech is emerging and threatening to change the traditional state of the insurance industry.
InsurTech is a fast-growing niche that operates in a massive global insurance industry with premium revenues of about $5 trillion annually. InsurTech is the term applied to the many companies that are using technology to disrupt the traditional insurance industry landscape. InsurTech has high growth prospects, and its potential to innovate and disrupt remains large. Funding for InsurTech companies in recent years has spiked, especially for early-stage companies. Incumbents in the insurance industry have been slow to adopt disruptive, high-growth InsurTech, partly because insurance is so massive and has been around for such a long time. Additionally, many traditional insurance companies can benefit from InsurTech solutions that serve to enhance customer satisfaction and improve efficiency of operations by leveraging technology and enhancing the delivery of certain insurance offerings and solutions through digital channels.
Technology and innovation have disrupted many other long-established industries; consider the impact of medical technology on the healthcare industry. Insurance players who maintain legacy systems believe that established customer connections will reduce the threat of InsurTech. However, this may not be the best strategy because insurance is often purchased begrudgingly. The historically strained relationship between customers and carriers is a rather vulnerable point along the insurance value chain. InsurTech companies can offer innovative technology that creates more touch points for customers and reduces many customer pain points.
Understanding how well a given InsurTech company is doing within this FinTech niche is one of the most important factors in determining its value. Market dynamics such as market size, potential market available, and growth prospects are important to understand. A valuation will consider absolute market value, existing competitors, and existing incumbents.
The regulatory environment is another important consideration when valuing an InsurTech company. Financial services, such as banks and insurance companies, are heavily regulated, so understanding the rules and regulations is necessary for developing an accurate valuation.
Like other FinTech niches, certain solutions within InsurTech are relatively new and have the potential to disrupt the entire insurance industry. Since many industry incumbents have been slow to adopt this new technology, the range of this innovation has yet to be fully felt and rules/regulations have yet to change. While regulatory stability may seem favorable now, concrete rules and regulations are complex and can be hard to predict as regulators react to rising InsurTech involvement. Understanding these complexities is important to valuing InsurTech companies, as these regulations could help or hinder an InsurTech’s growth potential.
When valuing a startup, quantitative information (financial and operating history) is limited; therefore, qualitative information can be extremely important in determining a company’s value. The quality and experience of the management team can be important. Knowledge of the insurance industry, including an understanding of customer preferences, technology integration, and the competitive and regulatory environments, can enhance an InsurTech company’s value.
An InsurTech company’s ownership of intellectual property and other intangible assets, such as strategic partnerships, should also be considered and, all else equal, could increase the company’s value when they are in place and well documented. Such intangibles are an important qualitative consideration.
The stage of development of a FinTech company can also impact its value. Companies typically set milestones and track their own progress, and meeting these milestones might affect their valuation. Milestones usually include initial round financing, proof of concept, regulatory approval, obtaining a significant partner, and more.
Milestones are important to set and track: the more milestones a startup meets, the less uncertainty exists and the more value is created. For example, an InsurTech company with established technology, increased customer touch points, and the potential to increase revenues will be more valuable to a potential acquirer than a newer startup. In addition, meeting later-stage milestones often provides greater value than meeting early-stage milestones. When the valuation considers future funding rounds and the potential dilution from additional capital raises, a staged financing model is often prepared, and the valuation will vary at different stages as shown below.
As InsurTech companies enhance business operations and reduce costs, valuations for these companies will become more important. There are three common approaches to determining business value: asset approach, income approach, and market approach. Each valuation approach is typically considered and then weighted accordingly to provide an indicated value or a range of value for the company, and ultimately, the specific interest or share class of the company.
The asset approach determines the value of a business by examining the cost that would be incurred by the relevant party to reassemble the company’s assets and liabilities. This approach is generally inappropriate for technology startups as they are generally not capital intensive businesses until the company has completed funding rounds. However, it can be instructive to consider the potential costs and time that the company has undertaken in order to develop proprietary technology and other intangibles.
The market approach determines the value of a company by utilizing valuation metrics from transactions in comparable companies or historical transactions in the company. Consideration of valuation metrics can provide meaningful indications for startups that have completed multiple funding rounds, but can be complicated by different preferences and rights with different share classes.
Regardless of complications, share prices can provide helpful valuation anchors to test the valuation range. Market data of publicly traded companies and acquisitions can be helpful in determining key valuation inputs for InsurTech companies. For early-stage companies, market metrics can provide valuable insight into potential valuations and financial performance once the InsurTech company matures. For already mature enterprises, recent financial performance can be compiled to serve as a valuable benchmarking tool.
Investors can discern how the market might value an InsurTech company based on pricing information from comparable InsurTech companies or recent acquisitions of comparable InsurTech companies.
The income approach can also provide a meaningful indication of value for a FinTech company. This relies on considerations for the business’ expected cash flows, risk, and growth prospects.
The most common income approach method is the discounted cash flow (DCF) method, which determines value based upon the present value of the enterprise’s expected cash flows. The DCF method projects the expected profitability of a company over a discrete period and prices that profitability using an expected rate of return, or discount rate. The sum of the present values of the forecasted cash flows provides the indication of value for a specific set of assumptions.
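The mechanics can be sketched as follows. The forecast, discount rate, and terminal growth rate are invented for illustration; a real engagement would develop each assumption with far more rigor.

```python
# Bare-bones DCF sketch with hypothetical figures: discount a discrete
# forecast of cash flows plus a terminal value back to the present.

def dcf_value(cash_flows, discount_rate, terminal_growth):
    """Enterprise value = PV of forecast cash flows + PV of terminal value."""
    pv_forecast = sum(cf / (1 + discount_rate) ** t
                      for t, cf in enumerate(cash_flows, start=1))
    # Gordon growth terminal value, discounted from the final forecast year
    terminal_cf = cash_flows[-1] * (1 + terminal_growth)
    terminal_value = terminal_cf / (discount_rate - terminal_growth)
    pv_terminal = terminal_value / (1 + discount_rate) ** len(cash_flows)
    return pv_forecast + pv_terminal

# Five-year forecast ($ millions) for an early-stage company:
# operating losses early on, improving profitability later
forecast = [-2.0, -0.5, 1.5, 3.0, 5.0]
value = dcf_value(forecast, discount_rate=0.25, terminal_growth=0.04)
print(f"Indicated enterprise value: ${value:.1f}M")
```

Note how the high discount rate, typical for venture-stage risk, heavily penalizes distant cash flows, while the terminal value still carries most of the indicated value.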
For startup InsurTech companies, cash flow forecasts are often characterized by a period of operating losses, capital needs, and expected payoffs as profitability improves or some exit event, like an acquisition, occurs. Additionally, investors and analysts often consider multiple scenarios for early-stage companies both in terms of cash flows and exit outcomes (IPO, sale to a strategic or financial buyer, etc.), which can lead to the use of a probability weighted expected return model (PWERM) for valuation.
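A simplified PWERM calculation, with invented scenarios, probabilities, and exit values, might look like the following: each exit outcome's proceeds are discounted to the present and weighted by its estimated probability.

```python
# Hedged sketch of a simple PWERM. All scenarios, probabilities,
# exit proceeds, and the discount rate are purely illustrative.

def pwerm_value(scenarios, discount_rate):
    """Probability-weighted present value of exit proceeds."""
    value = 0.0
    for prob, exit_proceeds, years in scenarios:
        value += prob * exit_proceeds / (1 + discount_rate) ** years
    return value

# (probability, exit proceeds in $M, years to exit)
scenarios = [
    (0.15, 300.0, 4),   # IPO
    (0.45, 120.0, 3),   # sale to a strategic buyer
    (0.30,  40.0, 3),   # sale to a financial buyer
    (0.10,   0.0, 2),   # dissolution
]
assert abs(sum(s[0] for s in scenarios) - 1.0) < 1e-9  # probabilities sum to 1
print(f"PWERM indication: ${pwerm_value(scenarios, 0.30):.1f}M")
```

In practice the scenario proceeds would also be allocated across share classes according to their liquidation preferences, which is where much of the complexity referenced above arises.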
Given their complexity, multiple valuation approaches and methods are often considered to provide lenses through which to assess the value of InsurTech and FinTech companies and to generate tests of reasonableness against which different indications of value can be evaluated. It is important to note that these different methods are not expected to align perfectly. Value indications from the market approach can be rather volatile, and investors often think longer term. More enduring indications of value often come from income approaches, such as DCF models.
Valuation of an InsurTech company can be vital to measuring realistic growth, planning progression, and securing employee and investor interest. Given the complexities in valuing private FinTech and InsurTech companies and the ability of the market/regulatory environment to shift quickly, it is important to have a valuation expert who can adequately assess the value of the company and understand the prevalent market trends.
Last week, the Mercer Capital Bank Group headed south for a scenic trip through the fields of the Mississippi Delta, including the town of Clarksdale located about 90 miles from Memphis. Clarksdale’s musical heritage runs deep with such performers as Sam Cooke, John Lee Hooker, Son House, and Ike Turner born there, while Tennessee Williams spent much of his childhood there. Explaining the Delta’s prolific artistic output, Eudora Welty, a Mississippi writer, noted the landscape stretching to the horizon and the juxtaposition of societal elements – all these forces churning like the Mississippi river nearby.
Despite its gritty roots, Clarksdale now is experiencing its own hipster renaissance. It may not be Brooklyn, but the Bank Group noticed signs for last weekend’s Clarksdale Film Festival. Visitors can stay at a refurbished cotton gin, enjoying their Sweet Magnolia Gelato made from locally sourced ingredients. Presumably, craft cocktails are available as well, this being the Delta.
Beyond these recent additions to the tourist landscape, though, one attraction put Clarksdale on the map – the Crossroads. At the intersection of Highways 49 and 61, as the story goes, the bluesman Robert Johnson (who lived from 1911 to 1938) met the Devil at midnight, who tuned his guitar and played a few songs. In exchange for his soul, Johnson realized his dream of blues mastery.
The point of this article is not that Lucifer lurked behind the revaluation of asset prices in the fourth quarter of 2018. Instead, the market gyrations laid bare the dichotomy between bank expectations regarding asset quality and the market’s view of mounting credit risk that was overlaid by a need to meet margin calls among some investors. Indeed, credit quality faces its own crossroads.
Along Hwy. 49 lies the town of Tutwiler, about 15 miles from Clarksdale. There, in 1903, the bandleader W.C. Handy heard a man playing slide guitar with a knife, singing “Goin’ where the Southern cross’ the Dog.” Handy adapted the song, which references the juncture of two railroads, thereby making it one of the first blues recordings.
From Call Report data, which includes 3,644 banks with total assets between $100 million and $5 billion, signs of credit quality deterioration remain virtually undetectable.
As they are wont to do, regulatory agencies noted some concerns regarding asset quality. However, consistent with our research into the community banking industry’s asset quality trends, the OCC also observed that “credit quality remains strong when measured by traditional performance metrics.”1 Despite its view of building credit risk, the OCC rated 95% of banks’ underwriting practices as satisfactory or strong in 2018, virtually unchanged from the 2017 level.2 Economic growth, corporate profits, and employment trends also support a sanguine view of credit quality.
While also observing weaker underwriting – for example, covenant concessions – rating agencies predict better credit performance among leveraged loans and commercial mortgage backed securities in 2019. For 2019, Fitch Ratings projects a 1.5% leveraged loan default rate, down from 1.75% in 2018. Further, commercial mortgage-backed security delinquencies, which declined by 103 basis points to 2.19% between year-end 2017 and 2018, are expected to range between 1.75% and 2.00% in 2019. The Amazonification of the retail sector, which led to retail bankruptcies and defaults on loans secured by regional malls, contributed to higher delinquency and default rates in 2018 but may subside in 2019.
The view from Hwy. 49, before reaching the Crossroads, looks favorable from the banking industry’s standpoint.
In the words of the writer David Cohn, the Mississippi Delta begins in the lobby of the Peabody Hotel (in Memphis) and ends on Catfish Row in Vicksburg, Mississippi.3 While his observation alludes to the economic as well as the geographic extremes of the Delta region, Highway 61 is the Delta’s spine connecting Cohn’s poles.
One of the more concerning statistics is the level of corporate debt. Though household debt trended down following the Great Recession (see Figure 4), nonfinancial business debt has reached near record levels as a percentage of GDP.4 According to Morgan Stanley, BBB-rated corporate debt surged by 227% since 2009 to $2.5 trillion. This leaves approximately one-half of the investment grade corporate bond universe on the cusp of a high-yield rating. Moody’s migration data suggests that BBB-rated bonds have an 18% chance of being downgraded to non-investment grade within five years, which may overwhelm the high-yield bond market.5
Regulatory agencies also observed looser underwriting. For new leveraged loans, the Federal Reserve noted that the share of highly leveraged large corporate loans – defined as more than 6x EBITDA – exceeds previous peak levels in 2007 and 2014, while issuers also are calculating EBITDA more liberally by making aggressive adjustments to reported EBITDA.6 From the OCC’s perspective, competitive pressures from banks and non-banks, along with plentiful investor liquidity, have led to weaker underwriting particularly among C&I and leveraged loans. According to the OCC, community banks are not immune. An example of weaker underwriting cited by the OCC is “general commercial loans, predominately in community banks” for which it compiles a list of shortcomings: “price concessions, inadequate credit analysis or loan-level stress testing, relaxed loan controls, noncompliance with internal credit policies, and weak risk assessments.”7
Despite unemployment rates below 4% and some evidence of rising wages, consumer loan delinquency rates have risen in 2018 (Figure 5). Some lenders, such as Discover, already have begun reducing exposure to heated sectors like unsecured personal loans.
Fears of a downturn crystallized in the fourth quarter of 2018 with the Federal Reserve’s December rate increase, trade friction with China, and signs of economic slowdowns in countries such as Germany. Option-adjusted spreads on corporate debt, after remaining quiescent through 2017 and most of 2018, widened suddenly, approaching levels last observed in 2016 when oil prices collapsed (Figure 6). According to Guggenheim, the fourth quarter spread widening implies a 3.2% high yield corporate debt default rate, up from 1.8% for 2018.8
The perspective gleaned from Hwy. 61 is not necessarily alarming, but it does suggest that, directionally, risk is rising.
Credit lies at a crossroads, consistent with a late cycle economic environment. Reported credit metrics are not improving significantly, nor are they worsening; conditions suggest continued low charge-offs and loan loss provisions in the near term. However, the market sniffs rising risks in various corners of the economy, most notably in corporate debt. Howlin’ Wolf sang, “Well I’m gonna get up in the morning // Hit the Highway 49.” Where are banks headed? Macroeconomic conditions ultimately will be determinative, but banks should avoid complacency in this environment marked by conflicting signals and aggressive competition. The poorest loans, in retrospect, often are originated in times such as these.
1 OCC Semiannual Risk Perspective, Fall 2018, p. 1.
2 OCC Semiannual Risk Perspective, Fall 2018, p. 22.
3 Cohn, David, Where I Was Born and Raised, 1948.
4 Federal Reserve, Financial Stability Report, November 2018, p. 18.
5 Guggenheim Investments, Fixed Income Outlook, Fourth Quarter 2018, pp. 1 and 8.
6 Federal Reserve, Financial Stability Report, November 2018, p. 20.
7 OCC Semiannual Risk Perspective, Fall 2018, pp. 11 and 24.
8 Guggenheim Investments, High Yield and Bank Loan Outlook, January 2019.
This presentation was delivered by Scott A. Womack, ASA, MAFF and Cheryl C. Panther, CPA/PFS, ADFA/CDFA (Panther Financial Planning) at the 19th Annual Networking and Educational Forum hosted by the International Association of Collaborative Professionals.
This session, “Creativity in Financial Elements of a Collaborative Divorce,” is described below.
Financial creativity is not just for financial professionals. We will highlight actual case examples of how to strategically and efficiently use outside professionals and unique ways to utilize financial information. We’ll provide ideas that professionals from all disciplines can take back to their local practice groups.
This presentation was delivered by Karolina Calhoun, CPA/ABV/CFF at the AICPA 2018 Forensic & Valuation Services Conference.
Learning objectives include: