In pretty much any industry, whether a product is
good enough is
crucial to its success in its market. Of course, the meaning of
enough tends to have some flexibility: consumers who haven't been exposed to
anything better are apt to accept as
good enough a product that a
consumer used to better goods would simply reject; and a higher priced product
needs to be sufficiently better than its cheaper competitor to merely be good
enough to justify costing more. Success in the market thus depends on balancing
the costs of making a better (aside from price) product against the benefits of
gaining more customers. Improvements which don't affect customer buying choices
are not worth investing in, while even a large saving in production costs is not
worth making if customers won't buy the product that results.
Ideally, a business would run itself in such a way that it always delivers good enough while spending (not more than) only as much as this needs. However, if what a business delivers isn't always good enough, those customers for whom it isn't good enough are more likely to turn – and advise others to turn – to competitors in future, quite apart from any refund they may compel the business to pay them. Each insufficiently good delivery harms the business's reputation; even after the business has corrected its error, so that subsequent deliveries are better, it continues to suffer harm from a poor reputation earned before the correction. If a business knows for certain that what it delivers always is good enough, it can make do with simply doing its job: but, if it has any cause to be unsure, the added cost of testing that it is doing well enough (and recording anything unexpected) will pay for itself in the long run, by saving the harm that can flow from a bad reputation.
Testing can reveal how good the product is: this is quality measurement. It
is, however, not sufficient to know how good or bad one's products are: the
activities needed to actually ensure
good enough quality can reasonably
be described as
quality management. To actually approach the ideal of
only delivering good enough, one must not only test but also record the results of testing, analyse those records, and adjust one's processes in the light of what the analysis reveals.
Each of these processes can yield insights that can improve how a business does its job and what else may be worth testing.
What I here term
quality measurement and
management correspond roughly to what are elsewhere termed
quality control and
quality assurance (respectively); however, I have never
been able to remember which of the latter terms means which activity and the
available sources on the internet were not helpful about clearing up the
ambiguity (my father, fortunately, is more reliable). So it seems, to me,
constructive to introduce new names, whose meanings are hopefully
clearer. There are, in any case, subtle differences between what orthodoxy
means by its terms and the usages I'm introducing.
A business which hasn't been managing quality is apt to be ignorant even of
what errors it makes and of some details of its production and delivery
processes. Until it learns those things, its quality management effectively
consists of hoping that customers are happy and enduring the unseen and
unmanageable costs of harm to reputation from its errors. These unseen costs
might be less than the visible costs of embarking on a quality management
program: but they can all too easily be greater – and, if they are less, a
quality management program is the only means to discover it. You can always
scale back a quality management process if you discover you don't need it:
whereas, if you do need it, but don't detect this, you may be doomed to go out
of business before you can do anything about it. If there are errors being made
that previously went un-noticed, the costs of quality management are apt to
initially increase as you learn (by experiment, much of it wrong) which things
you need to change about your processes and which things you really need to be
measuring. Once those lessons are learned, competent quality management can
help a business adjust the way it does things so that it more reliably delivers
good enough: and the cost of managing quality actually goes down,
once it's learned the basic lessons and adopted sound basic processes.
Of course, each customer is effectively testing each product purchased. If a business makes sure customers endure negligible inconvenience (let alone harm) in reporting any problems they find and obtaining a good enough replacement, this can be sufficient testing. However, a business cannot always rely on customers to report problems directly to it, so it is usually necessary to allocate some resources to doing some testing of its own. The more expensive testing is, the less of it one can afford for any given expenditure of resources; and, naturally, the amount of resources worth investing in testing needs to be proportionate to the risk of errors and the costs likely to flow from them; but you have to do some testing to discover what errors actually happen, or you won't be able to realistically assess the damage they may cause.
For some products, a fair amount of the testing that can be worth doing can be done to every delivery; but some kinds of tests render the tested product undeliverable. If a bottle of wine is defective, you won't know about it until it is opened; but the customer can only trust that the bottle contains what it says it does if it arrives at the table unopened, so the only way for a restaurant to test the wine it sells is to let the customer do the testing. Of course, if the restaurant has a large turn-over of some make of wine, it can do some basic testing of its own, selecting one bottle in sixty (say) and having the staff test it (e.g. by way of a treat at the end of a day's work); this way, they have some chance of catching a deficiency not reported by customers.
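The arithmetic behind such spot checks is worth sketching. Assuming (purely for illustration) that defects strike bottles independently at some rate p, the chance that sampling n bottles catches at least one defect is 1 - (1 - p)^n:

```python
# Chance that testing a sample of n items catches at least one defect,
# assuming independent defects at a hypothetical per-item rate p.
def chance_of_catching(p, n):
    return 1 - (1 - p) ** n

# Illustrative only: a 3% defect rate is an assumption, not wine-trade data.
for n in (1, 10, 60):
    print(n, round(chance_of_catching(0.03, n), 3))
```

The same formula shows why "one bottle in sixty" only helps a restaurant with a high turn-over of that wine: at low defect rates, a small sample is far more likely to miss the problem than to find it.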
When testing does discover a deficiency, the business must decide what to do about it. In a bakery, one might find that the few defective loaves are plainly visible as such before they reach the customer; in such a case, the baker can discard these loaves and be safe from the ill repute that would flow from selling defective loaves. Of course, that still involves the cost of some wasted ingredients; if it happens at all often, it'll be worth looking into what's actually causing the defects. Once you know the cause of your quality issues, you have some chance of working out a strategy for solving them – and for assessing whether the costs of such a strategy are affordable, given the benefits the strategy promises.
The biggest error businesses make with quality management is to hire in a
bunch of consultants to write a set of processes (hint: the processes they
deliver shall sound better to management than to those who actually have to put
them into practice), then make staff career prospects conditional on rigid
adherence to these processes, coupled to bureaucratic milestones for arbitrarily
set improvements in numbers extracted from the quality measurement
system. (Those numbers shall rapidly cease meaning what they used to, as a
result.) The consultants might even be able to get the business an ISO
9000-family certification for such processes: but it's not the road to good
quality management or a happy and productive workforce. Instead it leads to
cynical manipulation of quality metrics and disrespect for the process: staff
shall (often correctly) blame their problems on the processes, which they'll
only follow in so far as violating them might be noticed – they'll pay lip
service to the rules, or follow the letter rather than the spirit, when
management is around. In such an environment, the staff who actually enjoy
being productive (hint: these are the ones you want to keep) get frustrated
by all that red tape and leave;
while jobsworths shall
prosper and fill the organization, until it achieves nothing of value.
Some of the more famous ways to get quality management wrong revolve around
naïve attempts to create the right
incentives. As Dilbert has
pointed out, paying computer programmers by the number of bugs they fix is
tantamount to paying them to write bugs (so that they can then get paid for
fixing them). Another statistic much loved by small-minded middle managers is
the number of lines of code the programmer writes (or changes, if they're awake
enough to have noticed that some of what they employ programmers for isn't to
write new code, but to fix the sprawling mess of what's already there): tying
salary increases and promotions to any such simplistic statistic gives staff an
incentive to pursue patterns of work that get higher scores on the statistic,
regardless of how (un)productive those patterns of work may be. Managers who
are not competent to review the work their staff do, and assess whether it is
good or otherwise, are not qualified to decide how those staff should be rewarded.
Rigid rules are another excellent way to wreck a quality management process: if those doing a job need to deviate from the rules, or think they do, there must be a way that they can realistically address that (apparent) need. Otherwise, they'll break the rules anyway and simply not tell you about it: and, while a deviation from your processes may be bad, it is far worse to not know that it happened or, if you ever find out that it happened, not know why it happened. Your processes should include a process for deviating from your processes ! That process should include documenting what problem made the deviation necessary – you need to look into how that problem arose and what you can do to avoid similar problems in future. Your process should document how, exactly, you deviated from your normal process: it may, in fact, be necessary to adjust your rules so that this would no longer be a deviation. If you don't know what deviations are happening, you can't make informed decisions about how to change the rules and you can't discover what other rules may be needed to avoid problems in future. Your process for deviation should also involve getting others, particularly more experienced staff, to look at the problem and see what else could be done about it.
On a related note, re-education is better than punishment. If something goes wrong and, in the course of dealing with it, you discover that someone has been deviating from the proper process, listen carefully to why they did what they did and get their more experienced peers to teach them how to do their job better in future. Punishing them shall just teach them to look for ways to avoid the blame in future – and, if punishing is the standard response, this greatly increases the chances that you're blaming the wrong person. Treat your staff with respect: the culprit probably had reasons for deviating and when you understand those, they may reveal other problems you need to look into.
Note, also, that managing your processes also means managing your management processes; at the most self-referential extreme, be aware of the costs of your quality management processes – try to ensure they don't intrude excessively in the actual day-to-day business of getting the job done. You also need to manage how management interferes in the process: while those who have to do the job need a process by which to deviate from process, this should not be treated as an excuse for drive-by managers to disrupt someone's work by diverting them from what your processes say they should be doing, onto some pet task of the drive-by manager. Make sure chains of authority are clear and clearly followed: even the most junior staff should be able to tell even the mightiest of managers to go through the proper channels if they need a job done. You need to be realistic about the work-loads you impose on staff: if they are being diverted from what they're supposed to be working on, things are going to slip; at the very least, any such diversion must be recorded and reported correctly and the diverted staff should not be held to account for any resulting delay in their primary work – lay that blame on the drive-by manager. To make that work, you also need a process by which (even the most junior) staff can record that they have been asked to deviate from process (when, by whom, etc.) and declined to do so, so that any attempt at retaliatory action by someone in power can be thwarted by exposing their motives: those in the chain of command must stand up, for those under them, to those above them. If your quality management process doesn't extend to keeping managers from abusing authority, it is in vain.
Remember, also, the difference between industry best practice and industry common practice. No rule of nature says that they coincide: market forces, in the presence of healthy competition, should tend to push the latter towards the former, but the gulf can be shockingly large. The fact that all your competitors do something a certain way does not mean that's the way to do it – although, if those of your competitors who're gaining market share fastest are doing things a particular way, that's a fairly good hint that it's worth looking into. Your aim is to do things right, not be fashionable; what you want is good processes that help you to succeed – any fashionable buzz-word compliance that happens to come with them is incidental.
The general idea of quality management is to change the process by which the job gets done in ways that improve the end results. Often there are obvious ways to do that: however, sometimes changes seem like they should obviously lead to improvement, yet fail to do so, or even achieve the opposite. Without systematic testing, recording of test results and changes to process, you can't know whether a particular change has actually led to improvement, let alone whether the resulting improvements are actually worth any extra costs that came out of the change. So quality measurement is a prerequisite of quality management, for all that there's a lot more to it.
Unfortunately, it isn't always easy to measure all aspects of quality; and the things one can measure, even when they have some relationship with quality, aren't necessarily simply correlated with it. So it is important to remember that what you are measuring isn't quality: no matter what you measure, it is no substitute for actually enquiring into whether what you deliver is good enough. That this is subjective and not concretely quantifiable does limit its usefulness when it comes to determining whether some change in process has led to an improvement: but only by bearing such subjective perceptions in mind (and listening to the subjective perceptions of staff and customers) can one notice when the numbers coming out of the quality measurement system aren't telling the full story. When discrepancies show up, look for what else you can measure, that might make your numbers serve as a better indicator of quality.
For example, imagine a software business with a simple bug-tracking system; after the developers fixed several bugs, the number of bugs in the system went up ! That might be due to the developers breaking other things in the course of fixing the bugs, but it might equally be because: those bugs were preventing users from doing more with the product, hence preventing them from discovering other bugs; or the other bugs might be less severe so did not seem worth reporting until the more severe bugs were fixed; or fixing those first bugs made the product good enough that it simply got more users, hence more bug reports. Getting developers to report how long the error in code had been present before the bug came to light might reveal one (and it's useful in its own way, if only to help you estimate how many undiscovered bugs you likely have); assessing the severity of bugs might reveal another (and shall help you to prioritize work on fixing bugs in future); and the last can be investigated by studying how your bug-count varies with your (available estimates of) how many users the program has.
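The last of those investigations can be sketched in a few lines. The monthly figures below are entirely invented, as is the record layout; the point is only that dividing report counts by an estimate of the user base can show quality improving even while the raw bug count rises:

```python
# Entirely invented monthly figures: (month, bugs reported, estimated users).
history = [
    ("2024-01", 12, 100),
    ("2024-02", 30, 400),
    ("2024-03", 45, 900),
]

# The raw bug count rises each month, but reports per user fall:
# the rise may simply track growth of the user base, not a drop in quality.
for month, bugs, users in history:
    print(month, bugs, users, round(bugs / users, 3))
```

Of course, the estimate of user numbers is itself a measurement with its own errors; the point is to compare trends, not to treat either number as exact.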
The end result of all that measurement is a bunch of statistics,
systematically measured and recorded over time. Contrary to the popular saying
that "you can prove anything with statistics", the data you collect
prove nothing: yet they contain the clues to what you can do to make your
business more effective at meeting the needs of its target market. Do not leap
too readily to conclusions – most especially, be wary of reading the data as
proving some pet theory you already suspected of being true
– but look for the possibilities that may lie behind the data. Carefully
investigate what you can measure that can distinguish those
possibilities; and, for each possibility, consider what you can do (both
individually and institutionally) that would, if that possibility is correct,
lead to improvements. Try out the ones that look reasonable and won't lead to
harm if the given possibility is wrong and see what effect they actually
have. The common abuse of statistics as a tool for deceiving people stems not
from any property of statistics but from the fact that people are too willing to
jump to conclusions and, for the most part, insufficiently educated about how to
interpret statistics. Used with care, and with a clear understanding of the
limitations on what can actually be inferred from them, statistics are a
powerful tool for studying a system, including a business.
… it seemed that every time we were beginning to form up into teams we would be reorganized. … I was to learn later in life that, perhaps because we are so good at organizing, we tend as a nation to meet any new situation by reorganizing; and a wonderful method it can be for creating the illusion of progress while producing confusion, inefficiency, and demoralization.
– Charlton Ogburn, 1957
So your managers have heard your sales guys telling your customers your product is great; and your managers have echoed this propaganda to your staff, in those little motivational speeches managers like to give, so your staff (especially the ones who get to deal with dissatisfied customers) suspect that management won't like being told the harsh truth and – well, they saw the mauled carcass of the last messenger, so they didn't make that mistake; but they haven't told you that, either – so when you finally get the numbers from your new quality measurement reports you're a bit surprised to find things aren't as rosy as you'd heard. What're you going to do about it ?
Leave the organizational chart alone. Don't turn to the poisonous panacea of delegating everything to outside consultants. You aren't going to fix the problem without trusting your own staff and being seen to trust them; you might find some advice from an ISO 9000 consultant helpful, but if they're any good the first thing they'll tell you is that you have to work this out with your own staff; they can help you do that, but they can't do it for you. If your managers or consultants think they can write processes without consulting the folk who do the actual work, they're wrong.
The first thing you need is a description, by your staff, of how they do their job. Your ultimate goal is to establish best practice but your first task is to discover what common practice is – and how wide the gulf is between best and common. Make sure that your staff understand that what you want is knowledge of how things really work; this may be easier for them if they can supply the information anonymously. Get your more skilled and experienced staff to go over how the job is being done and to come up with recommendations for how to do things in the immediate future. If they can also suggest best practices to aspire to in the long term, that's great too: but what you need immediately is guidance that all your staff (even the lowliest new trainee) can start following tomorrow. Do that across all work areas in your business: those who make the product, those who persuade people to buy it, those who help customers cope with its failings, those who hire and manage all of the foregoing. Now you've got the raw material for a set of documents describing how your organization should be working.
Don't think of that document as Rules: think of it as Advice. Describe it as such to your staff and make sure they understand that it's going to evolve and the principal driving force behind its evolution is going to be their input. Do your best to motivate them to follow the advice. Make sure that they have a channel via which they can communicate any problems that prevent them from following the advice or that arise from following the advice; make sure that they know that you want that feed-back, shall take it seriously and can cope with harsh truths. Schedule some time, every few months (frequently to begin with, less frequently as things stabilise), for your most sensible staff to review the advice and update it to take account of the feed-back. Those staff need to be responsible for, and know that they have ultimate authority over, the documents that say how they, and the colleagues whose work they understand better than their managers do, do that work. The experts who own the documents for each work area need to also review the matching documents for other work areas and look out for anything that's apt to cause conflicts; they'll also doubtless get feed-back from their peers about problems caused by such conflicts – which you'll need relevant groups to work out together, so that your organization's left hand not only does know what the right hand is doing, but they actually co-ordinate their actions to work towards a common goal. If the sales team's process is to always promise whatever they think the potential customer most wants, your production teams are apt to want them to revise that process !
Once you've got to the point where your most competent staff and the managers closest to the work are satisfied that all staff know the advice and that following it closely won't adversely affect getting the job done, start introducing the first two rules: every deviation should be discussed with relevant colleagues, particularly those who may have to clean up the mess if it goes wrong; and every deviation should be documented. The documentation should cover why the deviation is being made, exactly how you'll be deviating from the (current) advice and why you're confident this is sensible (or, at least, more so than the alternatives). This documentation should be systematically collected, saved for future reference and periodically reviewed: it is the raw material that's going to guide the evolution of your processes.
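As a sketch of what one such deviation record might hold (every field name and the example's details are invented, not any standard schema), the documentation described above maps naturally onto a small structure:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Deviation:
    """One documented deviation from the current advice.

    The field names are illustrative, not a standard schema.
    """
    when: date
    who: str
    advice_section: str       # which part of the advice was deviated from
    why: str                  # the problem that made the deviation necessary
    how: str                  # exactly how practice differed from the advice
    rationale: str            # why this seemed more sensible than the alternatives
    discussed_with: list[str] = field(default_factory=list)
    reviewed: bool = False    # set once a periodic review has considered it

# A made-up example record:
record = Deviation(
    when=date(2025, 3, 7),
    who="A. Baker",
    advice_section="ovens: pre-heat procedure",
    why="thermostat failed overnight",
    how="baked the first batch in the spare, uncalibrated oven",
    rationale="a slightly uneven batch beats having no bread to sell",
    discussed_with=["shift lead"],
)
```

Collecting such records in one place gives the periodic review concrete material to work from, rather than anecdote.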
After a few years, your staff shall be treating the advice as rules: not because it is set in stone that they must, not because someone's threatening them with disciplinary action if they don't, not because they'll be paid more if they do – indeed, make sure none of those things apply, or they'll get in the way of getting your staff on board – but because following the process shall be the quickest and least stressful way to get their job done. It'll also be safer, cheaper and more productive than the way you used to do it, it'll produce better results and your staff shall like their jobs better (as long as you paid attention to what your HR department and managers learned from applying the process to their activities).
This path can lead to an ISO 9000-family certification, or some similar independently audited seal of approval of your quality processes: that may even make it easier to win some customers, but it should not be your primary goal. What matters is making your business better at doing its job.