6 October 2009 • 7:00 am

Turning Sense into Dollars – Part II

In the previous post, I introduced a case which offers a practical, real world example of how risk analysis can enrich the strategic planning process. We learned of PrimeCorp (a disguised name), a large company with a national presence in the U.S., and met Jim and Curtis, PrimeCorp’s head of Strategic Planning and CEO, respectively. If you haven’t read Part I of this series of posts, please do so now. It contains background needed to understand this and the following posts.

The Different Approach

We already knew that a key element of PrimeCorp’s existing strategic planning process was its financial forecasts. The annual planning book (hundreds of pages, highly confidential, and not shared beyond the executive team) contained page after page of spreadsheets describing the past and expected future performance of each of PrimeCorp’s several divisions, as well as an enterprise-wide roll-up of the numbers. The executive team, which consisted of the heads of each division (as well as of such corporate functions as HR, Finance, and IT), annually created their individual division forecasts as a function of past performance and their own expectations of results over the next five years. This process was time-consuming and filled with understandable tension – between division leaders’ desire to soft-pedal the numbers, and CEO and board pressure to raise revenue and manage costs to achieve year-over-year improvement in profitability.

What was evident in the forecast numbers was their inability to capture the uncertainty behind them. This was our opening: quantifying that uncertainty was how we would demonstrate the value of improving PrimeCorp’s strategic planning process. From Part I, you’ll recall that I challenged Curtis and Jim to pay us $25,000 for a couple of weeks of effort to make the case for our proposal. I said, “Instead of just looking at your forecasts as a set of fixed numbers, we’ll consider the likelihood of the forecasts playing out. Our firm won’t introduce any numbers into the calculation – we’ll just use your and your team’s own expectations.” Curtis accepted the approach, and on a handshake, we got started that day.

Choosing the Levers

Like any business, PrimeCorp’s profitability was the result of inputs. Looking at the forecasts, it became clear that for each division, profit was the result of revenue, cost of goods sold, operating expense, and the cost of invested capital. To improve profit, a division had to accomplish some combination of increasing revenue, managing the cost of transforming inputs into outputs (the cost of goods sold, or COGS), managing its overhead costs (such as corporate overhead and IT), and reducing the cost of invested capital (primarily each division’s necessary and sizable inventory). Of course, each of these four inputs was itself the result of a multitude of contributing factors. But to manage complexity, we chose only these four inputs for each division as the appropriate level of detail for the exercise.

Capturing Uncertainty

Having both forecast and actual results for each division in several prior years, it was easy to judge the quality of PrimeCorp’s forecasting process. Revenue and capital cost were easier to forecast than COGS and operating expense, and variation in all resulted in years with significant over- and under-performance to the forecast, as well as years near forecast. There was no apparent bias in the forecasts; performance had been significantly above or below plan about the same number of times over the past ten years.

To quantify the uncertainty in the forecast for the next fiscal year, we imagined repeatedly ‘rolling the dice’ on the outcomes of the upcoming year. Sometimes the outcomes would fall below the forecast, sometimes above. Ranking all of these hypothetical outcomes from poorest to best, we could then put them in three buckets: the bucket for the poorest 25% of them would be called ‘below plan,’ the middle 50% of them would be in a bucket called ‘as planned,’ and the bucket for the best 25% would be called ‘above plan.’ Note that this was not an exercise in predicting the likelihood of any of these outcomes – by fixing the probabilities for each bucket, we could instead ask a different pair of questions: “How bad would a ‘below plan’ year look, and how good would an ‘above plan’ year look?” We assigned the original forecast values of revenue, COGS, operating cost, and capital cost to the ‘as planned’ scenario. The first key inputs to the model were the answers to the question, “What would those four inputs look like in the ‘below plan’ and ‘above plan’ scenarios?”
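The bucketing idea can be sketched in a few lines of Python. All figures and the choice of distribution below are invented for illustration – nothing here is PrimeCorp’s data – but the mechanics of fixing the bucket probabilities at 25/50/25 are as described above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: 10,000 imagined 'dice rolls' of one division's
# profit outcome for the upcoming year, in $M (illustrative numbers only).
outcomes = rng.normal(loc=100.0, scale=15.0, size=10_000)

# Fixed-probability buckets: poorest 25% = 'below plan',
# middle 50% = 'as planned', best 25% = 'above plan'.
p25, p75 = np.percentile(outcomes, [25, 75])

below = outcomes[outcomes < p25]   # the 'below plan' bucket
above = outcomes[outcomes >= p75]  # the 'above plan' bucket

# The question is not "how likely is a bad year?" (that probability is
# fixed at 25%) but "how bad does a 'below plan' year look?"
print(f"a typical 'below plan' year: {below.mean():.1f}")
print(f"a typical 'above plan' year: {above.mean():.1f}")
```

Fixing the probabilities and asking about severity, rather than the reverse, is what keeps the exercise tractable for executives who distrust probability estimates.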

To capture these inputs, my hope was to interview each division head and capture their own expectations for next year. But Curtis was less comfortable with the subjectivity of that approach, and instead we turned to past history. By comparing the forecast vs. actual outcomes for the four inputs in each of the last ten years of history, we could come up with reasonable estimates of the input parameters for the ‘above’ and ‘below’ scenarios (for those with statistics backgrounds: we actually determined the standard deviation for each of the four inputs, and used the forecast plus or minus one standard deviation for the parameters). Historically, above plan years happened when revenue increased and costs, especially operating costs, were held in check. Below plan years were more likely the result of higher COGS than of significantly reduced revenue. Although the likelihoods of a below and an above plan year were equal in the model, we learned that above plan years were only modestly above plan, while below plan years could be significantly below plan.
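For readers who want to see the mechanics, here is a minimal sketch of that parameter estimation for a single input. The ten-year forecast-vs-actual history is made up for illustration; only the method – the standard deviation of the forecast misses, applied as forecast plus or minus one standard deviation – comes from the engagement described above:

```python
import numpy as np

# Illustrative ten-year history for one input (say, revenue, in $M):
# forecast vs. actual. These figures are invented, not PrimeCorp's.
forecast = np.array([100, 105, 110, 112, 118, 120, 125, 128, 132, 138], float)
actual   = np.array([ 98, 109, 104, 115, 117, 126, 119, 131, 130, 141], float)

errors = actual - forecast   # forecast misses, in both directions
sd = errors.std(ddof=1)      # sample standard deviation of the misses

next_year_forecast = 145.0               # the 'as planned' value
below_plan = next_year_forecast - sd     # 'below plan' scenario parameter
above_plan = next_year_forecast + sd     # 'above plan' scenario parameter

print(f"sd of forecast error: {sd:.1f}")
print(f"scenarios: below={below_plan:.1f}, "
      f"planned={next_year_forecast:.1f}, above={above_plan:.1f}")
```

Repeating this for all four inputs in each division yields the full set of ‘above’ and ‘below’ scenario parameters without asking anyone for a subjective estimate.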

Rolling Up to a Revelation

Pulling all the input parameters together made it possible to create a risk-adjusted forecast for the upcoming year (and years beyond). The first revelation was that the risk-adjusted forecast was lower than the original forecast.
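Why would the risk-adjusted number come in lower when the bucket probabilities are symmetric? Because the scenarios themselves were not: as noted above, below plan years could be significantly below plan while above plan years were only modestly above. A toy calculation with invented figures (not PrimeCorp’s) shows the effect:

```python
# Three-scenario risk adjustment for one division's profit ($M).
# Figures are illustrative; the asymmetry mirrors the finding that
# downside scenarios sat farther from plan than upside ones.
plan  = 30.0   # the 'as planned' forecast
below = 18.0   # a 'below plan' year: significantly below plan
above = 35.0   # an 'above plan' year: only modestly above plan

# Probability-weighted (risk-adjusted) forecast: 25% / 50% / 25%.
risk_adjusted = 0.25 * below + 0.50 * plan + 0.25 * above

print(f"risk-adjusted forecast: {risk_adjusted:.2f}")  # 28.25, below the 30.0 plan
```

The asymmetric downside drags the probability-weighted expectation below the point forecast, even though good and bad years are equally likely.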

Next: Putting the Model to Work

Comments are closed.