"What gets measured gets done."
During the first years of my brief career as a performance improvement technologist, that particular phrase was drilled deeply into my head. If an individual, shop, business, or organization wanted to know whether it was succeeding at its prime mission, it was important to find a way to tie nice, firm numbers to what it did. In the case of a school, that could be how long it took to take a learner from the beginning of the learning experience to the point of being ready and able to go out into the world; the number of tasks a learner could accomplish at the end of schooling compared to the beginning; the increase in accuracy; perhaps even the cost of the schooling. Decreased time or cost and increased skill or accuracy are, naturally, considered good results.
Measures which have deep meaning to the individual, the business, or the organization, especially when an improved measure benefits their cause, lend themselves to devices or plans for gauging accurately what is going on. I worked on a project with an organization that measured performance of a particular task by the number of times the task was performed during a particular period of time. More, as you might suspect, does not always lead to better. You can end up with either too much of a good thing or too much of a less-than-good thing, neither of which is a good thing.
I encounter a good example of a wrong unit of measurement quite often. If I had a dollar for every time I heard a race participant complain that a course was “thirty seconds too long,” I could buy a few nice things. Lesson number one: with the possible exception of light-years, units of time are not a valid measure of linear distance. A tool that was never meant for the purpose to which its operator has put it will frustrate the performance improvement consultant.
I now work in the instructional systems design world; we use a model which asks us first to plan for training, then to analyze what needs to be trained, and then to design and develop the instruction. Lastly, we implement and evaluate the training’s end result. The planning document we use answers, like most newspaper articles, the who, what, where, when, why, and how questions…as well as the “how much” and “how many.” The head of my organization, however, wants to use the planning document for a time frame which goes far beyond the lifespan of the planning period. Why do well-intentioned people persist in measuring valued qualities with the wrong tool? The psychologist Abraham Maslow said it best: “if the only tool you have is a hammer, every problem is a nail.”
Runners, cyclists, swimmers, and multisport athletes, not to mention fitness enthusiasts and folks like me who work to drop a few pounds, have at our disposal a myriad of on-line and off-line tools to track nearly every unit of measure that means something to us. I play with several of them on a regular basis so I can answer the important “which one is best for my need” question. As with every other piece of technology, I am frequently torn between frustration and fidelity when “1.0 transforms into 2.0,” and so on. It’s bad enough to have to re-think how to measure what’s important when hardware dies on you and you move up the technological food chain. But when the maker of your device does a complete scrub of your on-line measurement software, your computer’s software, and the partnerships to which you’ve become accustomed…well, let’s just say I’m about to stop trying to use my hammer as a monkey wrench.
Keep your measures of performance simple and you're less likely to be frustrated.