Temper your Optimisations with Slack

One of the ways I see many fall into error is by misunderstanding the nature and value of optimisation. At best, what they typically mean by it is making something somewhat better, which should properly be called amelioration; to do actual optimisation you need a well-defined notion of as good as possible (i.e. optimal, not merely better). Several problems are relevant here. Among the most prominent and common are:

 * measuring the wrong thing, or at best a poor proxy for what actually matters;
 * persisting in pushing on a goal after you've improved in regard to it enough that other things matter more; and
 * supposing that any single measure can capture the full story of what is best, when there are usually many things that matter.

For a good rough outline of these problems, and a welcome call to overthrow the tyranny of misguided optimisation, see Against Optimization by Brian Klaas. For a kindred take on the gamification of everyday life, see The Score by Professor C Thi Nguyen.

For an example of sticking with a goal after you've improved in regard to it enough that other things matter, imagine someone who lives near the equator, finds the heat unbearable, and decides to move somewhere cooler. They might mistake it for logical to seek out the coldest place on Earth, somewhere near one of the poles: after all, that's as far from the problem as it's possible to get. I hope the reader can see how that would be overdoing things; merely moving to temperate latitudes will cure the surfeit of heat without replacing it with other problems. The more sensible approach is to move in the direction of cooler climates until other issues intrude more than excessive heat does. This also reveals why optimising for just one thing is typically a mistake: once you get out of the domain in which that one thing dominates your problems, you enter a domain where several concerns compete, and improving your situation calls for a compromise among them. That is the key to my last point, above: there are usually many things that matter, and no single measure (not even some carefully chosen function of the many) is ever adequate to capture the full story of what is really best – if, indeed, anything ever is really best. Sometimes pretty good will do, after all.
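As a minimal sketch of that trade-off – in Python, with both cost curves entirely invented for illustration – minimising the single measure of heat drives the mover all the way to a pole, while weighing heat against the other concerns settles in the temperate zone:

    # Toy model: both cost curves are made up purely for illustration.

    def heat_discomfort(latitude):
        """Worst at the equator (latitude 0), falling away towards the poles."""
        return (90 - abs(latitude)) ** 2 / 100

    def other_costs(latitude):
        """Cold, darkness, isolation, expense: these grow towards the poles."""
        return abs(latitude) ** 1.5 / 10

    latitudes = range(91)

    # Optimising the one measure you started with goes all the way to a pole...
    heat_only = min(latitudes, key=heat_discomfort)

    # ...while a compromise among all the concerns stops in temperate latitudes.
    overall = min(latitudes, key=lambda lat: heat_discomfort(lat) + other_costs(lat))

    print("minimising heat alone: latitude", heat_only)        # 90, a pole
    print("minimising overall discomfort: latitude", overall)  # about 42

Exactly where the compromise lands depends, of course, on the made-up curves; the point is only that it lands well short of the pole.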

None the less, the biggest source of failures of optimisation that I have seen is the error of measuring the wrong thing, which I thus put first in my list. Commonly this arises from finding something that is tolerably easy to measure and seems tolerably well correlated with good outcomes, then piling in on pushing that measure forward. However – as has oft been remarked – correlation is not causation: something you could measure, that was (back then) well correlated with outcomes you liked, may all too easily cease being correlated with those good outcomes (or even become harder to measure) once you start favouring activities that push its measure in some specific direction. This is often summed up as Goodhart's law: when a measure becomes a target, it ceases to be a good measure.

There are some classic examples of this. Management notices that the software teams whose products please customers best are also those that fix more bugs than others; so management decides to reward closing bug reports. But then teams subtly sneak (easy-to-fix) bugs into the system so that, when these get discovered, the team can fix them and thereby earn a bonus. (They need not do this consciously: they may simply be so busy pleasing customers with fixes to earlier bugs, or with new features, that they don't try particularly hard to find bugs before these reach the customer; then they fix the bugs once the customers report them. Since that pays better than catching the bugs before the customer noticed them, it is what their incentive system encourages them to do, whether they notice or not.) Either way, the high bug-fix rate loses its correlation with producing good quality software.
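To see how the correlation breaks, here is a toy simulation – every number in it is invented – in which customer satisfaction tracks how promptly teams fix reported bugs, until a per-closure bonus tempts teams to let bugs slip out to the customers in the first place:

    import random
    from statistics import correlation  # Pearson's r; needs Python >= 3.10

    random.seed(42)

    def closures_vs_satisfaction(closure_bonus, n_teams=200):
        """Correlation between bug reports closed and customer satisfaction."""
        closed_counts, satisfactions = [], []
        for _ in range(n_teams):
            # How promptly this team fixes reported bugs; varies by team.
            responsiveness = random.uniform(0.2, 1.0)
            reports = 30   # baseline bug reports per release
            slipped = 0
            if closure_bonus:
                # Paid per closure, teams stop catching bugs before release,
                # so extra bugs slip out to customers and come back as reports.
                slipped = random.randint(0, 60)
                reports += slipped
            closed_counts.append(round(responsiveness * reports))
            # Arbitrary score: customers like prompt fixes but dislike
            # running into bugs in the first place even more.
            satisfactions.append(40 + 60 * responsiveness - 1.5 * slipped)
        return correlation(closed_counts, satisfactions)

    print("no bonus:      corr = %+.2f" % closures_vs_satisfaction(False))  # near +1
    print("closure bonus: corr = %+.2f" % closures_vs_satisfaction(True))   # negative

In the first case closures simply track responsiveness, which customers reward; in the second, a high closure count mostly signals bugs that should never have shipped, so the metric now points the wrong way.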


Written by Eddy.