22 September 2009 • 7:00 am

Five Traps of Performance Measurement

Sir Andrew Likierman


An unusually practical article on performance measurement appears in the October issue of Harvard Business Review. I regret that I can't share a link with you, because HBR content is available online only to subscribers of the magazine (perhaps the folks at Harvard haven't yet read about the idea of Free). No matter. Though I can't share the article itself with you, at least I can summarize it here.

Entitled The Five Traps of Performance Measurement, Andrew Likierman's article is concise and valuable. Sir Andrew Likierman is no less than the Dean of the London Business School, a non-executive director of Barclays Bank, and Chairman of the UK's National Audit Office. He knows whereof he writes.


18 September 2009 • 7:00 am

Leading Questions

At the center of the balanced scorecard concept is the observation that measures of organizational performance have traditionally been lagging indicators: measurements of actual performance after the fact. Management accounting is focused on describing performance during a time period that has ended – last quarter, last year, year-to-date, etc. And while there is nothing inherently wrong with lagging measures, they are of limited use to an organization's leaders. All they do is tell us what has already happened.

The ‘balance’ in balanced scorecard refers to the ideal of providing leaders with a balanced portfolio of lagging and leading performance indicators. Leading indicators are valuable because they help managers form an expectation of what will happen, and enable testing of the cause-and-effect hypotheses that are at the core of the strategic planning process. But identifying candidate leading indicators and selecting from among them requires careful consideration and a healthy skepticism of apparently easy answers.


16 September 2009 • 7:00 am

Innumeracy and The Flaw of Averages


A classic example of the Flaw of Averages involves the Statistician who drowned crossing a river that was on average 3 ft. deep.

Desperately casting around for a topic to write about today, I was grateful to see a link to an interview in the San Jose Mercury News with Stanford professor Sam L. Savage about his book, The Flaw of Averages (great title!). I’ve not read the book yet, but the review has certainly piqued my interest:

How does General Motors, Sam L. Savage wonders, explain the pathetic performance of its crystal ball? When Americans started driving hybrids, GM was still pushing Hummers. Executives at the giant carmaker — fully aware of union contracts, presumably prepared for rising gasoline prices and economic uncertainty — drove straight into the ditch of bankruptcy.

“Probability management” is often mismanaged by business leaders, says Savage, a consulting professor of management science and engineering at Stanford University and a fellow at the Judge Business School at the University of Cambridge. Savage, who has performed probability studies for Royal Dutch Shell, set out to right statistical wrongs in his book “The Flaw of Averages.”

The Information Age has transformed statistics into a vital field of study, yet Savage says many habits and practices have been slow to change from the “steam era statistics” of the Industrial Age.

Written for a business audience, “The Flaw of Averages” leavens the math with levity, even the occasional cartoon.

Well alright then. Working with business executives and their measures for so many years, I continue to be amazed at how easily decisions are made on the basis of numbers with little consideration for the risks and consequences of those decisions. I’ve been meaning to write at some length about the need for the discipline of risk management in change programs, but before doing so, we all need to take a deep breath and consider the magnitude of our collective innumeracy.

The topic has been covered before. I just pulled Innumeracy: Mathematical Illiteracy and its Consequences by John Allen Paulos from my bookshelf, and thumbing through it, I remember how much I appreciated the book, but that it wasn't the easiest read. From the back cover description:

Why do even well-educated people understand so little about mathematics? And what are the costs of our innumeracy? John Allen Paulos, in his celebrated bestseller first published in 1988, argues that our inability to deal rationally with very large numbers and the probabilities associated with them results in misinformed governmental policies, confused personal decisions, and an increased susceptibility to pseudoscience of all kinds. Innumeracy lets us know what we’re missing, and how we can do something about it.

Sprinkling his discussion of numbers and probabilities with quirky stories and anecdotes, Paulos ranges freely over many aspects of modern life, from contested elections to sports stats, from stock scams and newspaper psychics to diet and medical claims, sex discrimination, insurance, lotteries, and drug testing. Readers of Innumeracy will be rewarded with scores of astonishing facts, a fistful of powerful ideas, and, most important, a clearer, more quantitative way of looking at their world.

Since I found that Innumeracy was not especially accessible, I haven't yet found occasion to use examples from it. Perhaps The Flaw of Averages will be better. It looks promising. From the interview with Savage:

Q. What are the most common ways people foolishly apply the law of averages? Is it the faith placed in “average returns” on retirement portfolios?

A. Plenty of people have been caught off base by the Flaw of Averages in investing, but here is an example that is closer to home. Imagine that both you and your wife are right on time for appointments, on average.

When you go somewhere together, however, you will be late, on average. Why? If we model being early or late for each of you by flipping a coin (heads is early, tails is late), then the only way you will not be late as a couple is if neither of you is late. This is like flipping two heads in a row, or one chance in four. Now expand this to a big industrial project with thousands of tasks, and you can imagine the implications.
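Savage's couple example is easy to check with a quick simulation. This is a minimal sketch of my own: the uniform ±10-minute arrival distribution and the trial count are stand-ins I chose for his coin flips, not anything from the interview.

```python
import random

random.seed(42)

def simulate(trials=100_000):
    """Each person's arrival offset (in minutes) is uniform on [-10, 10],
    so each person is exactly on time *on average*.  The couple arrives
    when the later of the two does: offset = max(you, spouse)."""
    couple_late = 0
    total_offset = 0.0
    for _ in range(trials):
        you = random.uniform(-10, 10)
        spouse = random.uniform(-10, 10)
        offset = max(you, spouse)   # the couple moves at the slower person's pace
        total_offset += offset
        couple_late += offset > 0
    return total_offset / trials, couple_late / trials

mean_offset, p_late = simulate()
print(f"average couple offset: {mean_offset:+.2f} min")  # positive: late on average
print(f"P(couple is late):     {p_late:.2f}")            # roughly 0.75, not 0.50
```

Under these assumptions the chance that neither person is late is 1/2 × 1/2 = 1/4, exactly Savage's two-heads-in-a-row argument, so the couple is late about 75% of the time and, on average, several minutes late, even though each individual averages out to on time.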

We don’t have to imagine the implications – we live with them every day. More to come (soon, I hope), on the topics of innumeracy and strategic risk management.

17 July 2009 • 7:00 am

Scorecard Blues (plus Three Other Colors)


To my chagrin, the term 'scorecard' is widely used in both the disciplines of performance management and strategy execution, and without further qualification it has an imprecise variety of meanings. To some, it may mean a large collection of indicators of operational performance. To others, it is an ambiguous shorthand for 'balanced scorecard,' which is a well-developed set of related ideas and practices around strategy management and execution. The ambiguity comes from the fact that to some, the term 'balanced scorecard' means simply a collection of measures that has been balanced according to some real or imagined scheme.

On far too many occasions, I've been approached by a conference attendee with a request to review and comment on his so-called 'balanced scorecard,' only to find that the proud offering is only a collection of operational measures with no connection to strategy. This is the basis of my Scorecard Blues. So let me be blunt: if a set of measures has been selected without the prior development of a strategy map, it cannot properly be called a balanced scorecard.

Even without the qualifier of 'balanced,' a 'scorecard' is seen as a group of measures, and/or the visual representation of those measures, and/or the tool for managing measurement data. Many software tools called 'scorecards' have been developed to facilitate the collection, analysis, and presentation of scorecards, both for operational and strategic use. Because the term 'scorecard' has so many diverse meanings and uses, it simply cannot be used alone without further explanation. But there is one trait that attaches to nearly every individual's own definition of the term 'scorecard.'

The lowest common denominator of nearly all 'scorecards' is the ubiquitous red – yellow (amber in Europe) – green summary indicator scheme (hence RYG).
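In practice, an RYG scheme usually boils down to nothing more than two thresholds per measure. A minimal sketch of that idea follows; the threshold values, the 'higher is better' direction, and the function name are all illustrative assumptions of mine, not part of any standard.

```python
def ryg_status(actual, green_at, yellow_at):
    """Classify a higher-is-better measure into the familiar RYG scheme.

    green_at  -- at or above this value, status is 'green'
    yellow_at -- at or above this value (but below green_at), 'yellow'
    Anything lower is 'red'.  Thresholds here are illustrative only.
    """
    if actual >= green_at:
        return "green"
    if actual >= yellow_at:
        return "yellow"
    return "red"

# e.g. a customer-satisfaction score, green at 90+, yellow at 75-89
print(ryg_status(92, green_at=90, yellow_at=75))  # green
print(ryg_status(80, green_at=90, yellow_at=75))  # yellow
print(ryg_status(60, green_at=90, yellow_at=75))  # red
```

The simplicity is the point: everything a measure might tell us gets compressed into one of three colors, which is precisely why the scheme is so popular, and so easily abused.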

30 June 2009 • 7:00 am

How Was Your Flight? A Journey From Concept to Indicator

In our pursuit of a shared vocabulary of measurement, we've already considered the ideas of accuracy, precision, and healthy skepticism. Here, we take a step back and look at the key terms of measurement: concept, dimension, and indicator.

A measurement concept is a mental image that describes an area of interest, such as speed, warmth, or comfort. The key to thinking about a concept is that it springs from an idea: an impression or perception that cannot be directly measured. To conceptualize an idea is to specify what we mean when we use that mental image.


29 June 2009 • 6:30 am

Healthy Skepticism, Precision, and Measurement Accuracy

Much has already been written here about the process of capturing the change agenda and developing strategy maps. These important tools are valuable for communicating strategy across the organization. But they also serve as the foundation for identifying the performance measures that will motivate the behavior changes needed for strategy execution. And without a healthy skepticism, measures can mislead as much as they inform. Many remember that Mark Twain wrote,

“Figures often beguile me, particularly when I have the arranging of them myself; in which case the remark attributed to Disraeli would often apply with justice and force: ‘There are three kinds of lies: lies, damned lies, and statistics.’”


22 June 2009 • 7:11 am

Cascading Conundrums – Part III

In Parts I and II of this topic, I asserted that cascading a balanced scorecard (BSC) across an organization is a process that requires careful planning, and thoughtful answers to the ‘When’, ‘Why’, and ‘Where’ questions. I cautioned that hastily planned cascading can derail the entire change program. Here, we conclude with the final three questions a leadership team should consider before cascading strategy across the organization.

19 June 2009 • 12:29 pm

Cascading Conundrums – Part II

In Part I of this topic, I asserted that cascading a balanced scorecard (BSC) across an organization is a process that requires careful planning, and thoughtful answers to the ‘When’ and ‘Why’ questions. I cautioned that hastily planned cascading can derail the entire change program. Here, we continue with the next question every leadership team should consider before cascading strategy across the organization.


18 June 2009 • 12:24 pm

Cascading Conundrums – Part I

Cascading is a term that has been used in the balanced scorecard (BSC) community to describe the process of propagating the BSC across an organization. Although the term implies a downward movement (through the organization's hierarchy), propagation in any direction has come to be referred to as 'cascading.' Some people mistakenly apply the term to the strategy communication process; after all, they reason, communication of strategy also cascades through the organization, and is certainly related to BSC propagation. But I believe that cascading and communication are two separate processes, especially since communication is absolutely essential to the change process, while cascading is not always necessary or beneficial. And poorly planned cascading can derail the change program.


8 June 2009 • 11:36 am

The Motivating Power of Measurement

Practitioners and fans of the balanced scorecard concept understand that measurement has the power to motivate behavior. The great challenge in driving change in any organization isn't just to change the culture, but to change the behavior of individuals and groups inside the organization. Performance measurement doesn't just tell us how well we're doing at achieving a desired outcome; the very process of measuring, and of communicating the results, actually changes behavior.