22 September 2009 • 7:00 am

Five Traps of Performance Measurement

Sir Andrew Likierman

An unusually practical article appears in the October issue of Harvard Business Review on the topic of performance measurement. I regret that I can’t share a link with you, because HBR content is not available online, except to subscribers of the magazine (perhaps the folks at Harvard haven’t yet read about the idea of Free). No matter. Though I can’t share the article itself with you, at least I can summarize it here.

Entitled The Five Traps of Performance Measurement, Andrew Likierman’s article is concise and valuable. Sir Andrew Likierman is no less than the Dean of the London Business School, a non-executive director of Barclays Bank, and Chairman of the UK’s National Audit Office. He knows of what he writes.

As strategists and change agents, we’re all part of the process of performance measurement and management in our organizations. The task of measurement is neither easy, nor especially satisfying, at least in the short run. And those charged with measurement easily fall prey to pitfalls and traps. Sir Andrew highlights five that are most common.

  1. Measuring Against Yourself. It is important to understand changes in performance measures from one period to the next, but it is vital to understand how the organization is doing relative to its competitors. Though competitive benchmarks may be difficult and costly to obtain, there are ways to understand how one is doing relative to the competition. Likierman offers the example of Enterprise Rent-A-Car’s Quality Index, which captures customers’ plans for future rentals. From a random telephone survey of recent customers, Enterprise is able to project future increases or decreases in market share.
  2. Looking Backward. Comparisons with last year’s numbers aren’t useful unless you’re also looking at leading indicators that help you make better decisions now and predict future performance. Likierman moves from this assertion to a discussion of the quality of management decisions as a leading indicator of success. He cites one European investment bank that tracks the eventual outcome of deals the bank chose not to do; if they turn out to be bad deals, the no-go decision is rated as a success. It is as important to focus on what the organization chooses not to do as it is to track what is done.
  3. Putting Your Faith in Numbers. Likierman asserts that numbers-driven managers tend to produce high volumes of low-quality data. There is a tendency to use popular measures (ones that may be fashionable in an industry), rather than choosing the right ones for the firm’s unique strategy. An example given is that of the Net Promoter Score, the likelihood that a customer will recommend a product or service to others. But the NPS is useful only if customers are likely to make purchase decisions on the basis of a recommendation. Another symptom of this trap is trying to fashion links to financial performance when no tangible links exist; the apparent ROI of such service functions as HR or especially IT is far more circumstantial than real. Most of the numbers that I’ve seen used in IT ROI calculations are somewhere between rough estimates and wishful thinking – certainly not reliable enough for making decisions.
  4. Gaming Your Metrics. It comes as no surprise that people whose performance is being measured will attempt to influence those numbers. This happens at an institutional level, not just with individuals. As Likierman reports, Royal Dutch Shell has paid $470 million since 2004 to settle lawsuits contending that Shell overstated its reserves. Morgan Stanley was reported to have deliberately lost a €20 million deal in order to improve its position in a global ranking. And who hasn’t seen sales behavior change at the end of the fiscal year, either trying to book business in the current year, or delaying booking until the following year? All of these are examples of deliberate decisions to manipulate measurements. Likierman accepts the reality that gaming will happen, but prescribes a diversity of indicators that would be harder to manipulate; law firm Clifford Chance changed from simply measuring billable hours to a portfolio of seven criteria on which to base bonuses, including measures of work quality and integrity.
  5. Sticking to Your Numbers Too Long. Once instituted, measurement systems become part of enterprise culture, and evolve more slowly than the organization itself. By deliberately stating the purpose of a measure (and connecting to a strategic intent, as in the balanced scorecard process), leaders are more likely to question an indicator that has outlived its usefulness. To me, the sign of a healthy strategic management process is one in which measures are easily and frequently challenged, and the portfolio of measures is frequently revised.
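As an aside on trap 3: the Net Promoter Score is simple arithmetic — the percentage of promoters (those rating 9–10 on a 0–10 "likelihood to recommend" scale) minus the percentage of detractors (those rating 0–6). Here is a minimal sketch of that calculation; the function name and the sample ratings are my own illustration, not from the article:

```python
def net_promoter_score(ratings):
    """Compute NPS from 0-10 'likelihood to recommend' ratings.

    Promoters rate 9-10, detractors rate 0-6; passives (7-8)
    count only in the denominator. Result ranges -100 to +100.
    """
    if not ratings:
        raise ValueError("need at least one rating")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# 5 promoters, 3 passives, 2 detractors out of 10 respondents
print(net_promoter_score([10, 9, 9, 10, 9, 8, 7, 8, 6, 3]))  # → 30.0
```

The simplicity is exactly Likierman’s point: the number is easy to compute, but it only predicts anything if your customers actually buy on the strength of recommendations.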

Likierman concludes by reminding us that most business managers are not experts in performance measurement, and that line managers suffer from hopeless conflicts of interest in the measures design process. Measures design must include checks and balances, and (I believe) is best enabled by an outside, objective facilitator.

Are these pitfalls present in your organization? What are the other traps you’ve seen? Please comment below.
