18 September 2009 • 7:00 am

Leading Questions

At the center of the balanced scorecard concept is the observation that measures of organizational performance have traditionally been lagging indicators: measurements of actual performance after the fact. Management accounting focuses on describing performance during a time period that has already ended – last quarter, last year, year-to-date, and so on. And while there is nothing inherently wrong with lagging measures, they are of limited use to an organization’s leaders: all they do is tell what has already happened.

The ‘balance’ in balanced scorecard refers to the ideal of providing leaders with a balanced portfolio of lagging and leading performance indicators. Leading indicators are valuable because they help managers form an expectation of what will happen, and enable testing of the cause-and-effect hypotheses that are at the core of the strategic planning process. But identifying candidate leading indicators and selecting from among them requires careful consideration and a healthy skepticism of apparently easy answers.

That careful consideration requires familiarity with the concept of causation. Having just pulled down my trusty copy of The Practice of Social Research – a textbook from an undergrad class I took too many years ago – I was daunted by the prospect of condensing its lengthy chapter on the nature of causation into a brief treatment here. Wikipedia’s definition is far more concise:

Causation: The belief that events occur in predictable ways and that one event leads to another. If the relationship between the variables is non-spurious (there is not a third variable causing the effect), the temporal order is in line (cause before effect), and the study is longitudinal, it may be deduced that it is a causal relationship.

When working with executives, I find real-world examples to be especially useful. One of my favorites uses housing starts (a statistic published monthly by the U.S. Census Bureau) as a leading indicator of the sales of so-called “white goods” (a wonderful old term for major appliances: refrigerators, washing machines, stoves, etc.). An uptick in housing starts is an extremely reliable leading indicator of a nearly identical uptick in sales of white goods several months later. As those new housing units near completion, they are furnished with new appliances. So if you’re in the business of making refrigerators, you’re going to be very interested in tracking housing starts.
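One simple way to vet a candidate leading indicator like this is to correlate it with the lagging measure at several time offsets and see which lag fits best. Here is a minimal sketch, using invented monthly figures (not real Census data) in which appliance sales roughly track housing starts from three months earlier:

```python
# Hypothetical illustration: screening a candidate leading indicator by
# correlating it with the lagging measure at several lags. All numbers
# below are invented for the example.

def pearson(xs, ys):
    """Plain Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Synthetic monthly data: white-goods sales roughly mirror housing starts
# from three months earlier (the assumed construction-to-furnishing lag).
housing_starts = [100, 120, 90, 130, 110, 150, 95, 140, 125, 160, 105, 135]
white_goods    = [80, 85, 82, 101, 118, 93, 128, 112, 147, 97, 139, 126]

def lagged_corr(lead, lag_series, lag):
    # Correlate the leading series with the lagging series shifted by `lag` months.
    return pearson(lead[:len(lead) - lag] if lag else lead, lag_series[lag:])

best = max(range(6), key=lambda k: lagged_corr(housing_starts, white_goods, k))
print(best)  # the lag (in months) with the strongest correlation
```

With these made-up numbers the correlation peaks at a three-month lag, matching the story we built into the data. In practice you would want far more history than twelve months, and a reason to believe the relationship is non-spurious, before trusting the indicator.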

Unfortunately, causation isn’t usually that obvious. One client of mine (some details have been changed) operated an inbound call center for taking resort hotel reservations. As part of a broad strategy, this firm undertook an initiative to increase revenue by cross-selling reservations for nearby dining and attractions, along with the core hotel reservations. Call-center agents whose performance had previously been measured as calls handled per hour were now expected to engage callers in more personal conversations about their vacation plans, in order to find opportunities to cross-sell the associated reservations. The new performance measures became average call duration (the hypothetical leading indicator that was expected to increase) and average revenue per call (the lagging indicator). The strategic hypothesis: keep customers on the phone longer, and they’ll buy (reserve) more.

It didn’t work out that way. Even though the call-center agents had been trained extensively in cross-selling, the majority of customers calling in were not easily engaged in the longer, more personal conversations. These customers called expecting a brief transaction (the hotel reservation) and were impatient with the questions. But a subset of customers calling in were quite happy to chat with the friendly reservations agents. Unfortunately, these chatty customers turned out to be especially unlikely to make dining and attraction reservations, and ultimately reduced the productivity of the agents.
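Data like this makes the failed hypothesis visible quickly. The sketch below uses invented per-call figures (not my client’s actual data) in which chatty callers run long but rarely buy the add-ons, so the hypothesized leading indicator actually moves opposite to revenue:

```python
# Hypothetical sketch of testing the strategic hypothesis "longer calls
# lead to higher revenue per call." All figures are invented.

def pearson(xs, ys):
    """Plain Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# (duration in minutes, revenue in dollars) per call. Transactional callers
# are brief but sometimes add dining/attraction reservations; chatty callers
# run long and rarely do.
calls = [(4, 210), (5, 195), (6, 260), (4, 180), (5, 230),       # transactional
         (14, 150), (16, 140), (18, 155), (15, 135), (20, 145)]  # chatty

durations = [d for d, _ in calls]
revenues = [r for _, r in calls]

r = pearson(durations, revenues)
print(round(r, 2))  # negative: longer calls associate with *less* revenue
```

A negative correlation here would be exactly the kind of early, decision-enabling signal the scorecard is supposed to provide: the hypothesis is falsified, and the strategy needs revising.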

A continuing theme in my posts has been that of strategy as a set of hypotheses to be tested. A tested hypothesis proven false is just as valuable as one proven true. Measures such as those in the example above enabled managers to quickly identify flaws in the hypothesis and revise the strategy. The value of a measure is in its ability to enable valuable decisions.

Do you have a favorite example of a pair of leading / lagging indicators that you can share? Please comment below.
