A "six sigma quality" process is said to exhibit a process capability ratio Cp of 2.0 or greater and a Ppk of 1.50 or greater. As I travel around the world delivering Lean Six Sigma and quality training and consulting, I have observed several instances where an organization proudly reports Cpk and Ppk levels greater than 2.0 - only to learn that it is still experiencing frequent customer complaints and elevated DPPM (defective parts per million) counts. My initial reaction is suspicion about how the capability index was calculated, followed closely by the question of what the organization plans to do about it.
If your process capability is truly approaching 2.0 or greater and you are not experiencing customer complaints, it tells me that you are quite possibly leaving money on the table with respect to optimizing your operational efficiency. Certainly, there are industries, products, and services that require Six Sigma quality performance or greater, but they are the exception rather than the rule. The goal of Lean Six Sigma should never be chasing higher and higher capability indices, but rather optimizing customer value and securing competitive advantage. Capability indices are a contrived metric that gives management a snapshot view of overall process performance. A myopic focus on driving process capability improvement for the sake of increasing the process's Ppk value does not necessarily optimize value for the customer. Often, such a practice leads to over-engineered products - needlessly adding cost without achieving higher sales or greater market share. If you are already better than the competition, and the customer cannot perceive greater value - and is not willing to pay more or buy more - allocate your resources to other, more important improvement opportunities. Improving quality for quality's sake is a major reason why TQM (Total Quality Management) efforts failed so miserably in the 1980s.
My general rule-of-thumb recommendation is to target Ppk = 1.33. I also remind organizations that in practice there are often conflicting and competing customer requirements. It may not be possible or desirable to drive every customer CTQ (critical to quality) characteristic to Ppk >= 1.33 due to trade-offs in product design. The criticality of the defect should also be considered when setting an improvement goal. For example, processes whose defects could cause serious injury - or worse - should demonstrate much greater short-term and long-term capability than processes with less critical defects.
Far too often, however, reported Ppk values of 2.0 or greater are not representative of the true process performance. Organizations continue to experience elevated complaint levels and/or high DPPM values, raising serious questions as to how the capability indices are calculated. How might this happen? I have summarized several scenarios below:
Cpk, Ppk values are calculated only on sorted, shipped product -
This incorrect calculation of Ppk is often justified because "it represents the quality level of product going to the customer". WRONG. Inspection and sorting are never 100% accurate. One is lulled into false security, believing all bad product has been sorted out - especially when the process is not stable, when 100% inspection is not possible, and when test methods are destructive in nature (true test error cannot be determined). Capability indices must be calculated on all output of the process - good and bad, scrapped and shipped.
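The inflation is easy to demonstrate with a quick simulation. The spec limits, process mean, and sigma below are made up purely for illustration - the point is that computing Ppk only on the in-spec (shipped) units truncates the distribution, shrinks the estimated standard deviation, and overstates capability:

```python
import numpy as np

# Hypothetical process: LSL = 90, USL = 110, mean 101, sigma 4 (illustrative values).
rng = np.random.default_rng(42)
lsl, usl = 90.0, 110.0
all_units = rng.normal(loc=101.0, scale=4.0, size=10_000)

def ppk(x, lsl, usl):
    """Ppk = min(USL - mean, mean - LSL) / (3 * overall standard deviation)."""
    mu, sigma = x.mean(), x.std(ddof=1)
    return min(usl - mu, mu - lsl) / (3 * sigma)

# Correct: capability computed on ALL process output, good and bad.
ppk_all = ppk(all_units, lsl, usl)

# Incorrect: capability computed only on the sorted, shipped (in-spec) units.
shipped = all_units[(all_units >= lsl) & (all_units <= usl)]
ppk_shipped = ppk(shipped, lsl, usl)

print(f"Ppk on all output:   {ppk_all:.2f}")
print(f"Ppk on shipped only: {ppk_shipped:.2f}")  # inflated by truncation
```

Sorting out the tails makes the shipped population look tighter than the process that produced it, which is exactly the false security described above.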
Cpk, Ppk values are estimated on subgroup averages -
Capability indices should almost always be calculated from individuals data. Your customer experiences variation between individual units, not averages. Variability of subgroup averages will always be smaller than the variation among individuals, resulting in a smaller standard deviation, thereby inflating your process capability metric. For more information refer to the statistical concept of the Central Limit Theorem.
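A short sketch makes the averaging effect concrete. The subgroup size, specs, and process parameters here are invented for illustration; by the Central Limit Theorem, the standard deviation of subgroup means shrinks by roughly the square root of the subgroup size, inflating the index by about the same factor:

```python
import numpy as np

# Hypothetical stable process: mean 100, sigma 3, subgroups of n = 5 (illustrative).
rng = np.random.default_rng(7)
lsl, usl = 90.0, 110.0
n_subgroups, n = 500, 5
data = rng.normal(loc=100.0, scale=3.0, size=(n_subgroups, n))

def ppk(x, lsl, usl):
    mu, sigma = x.mean(), x.std(ddof=1)
    return min(usl - mu, mu - lsl) / (3 * sigma)

# Correct: individuals data - the variation the customer actually experiences.
ppk_individuals = ppk(data.ravel(), lsl, usl)

# Incorrect: subgroup averages - std shrinks by ~sqrt(n), inflating Ppk.
ppk_averages = ppk(data.mean(axis=1), lsl, usl)

print(f"Ppk from individuals:    {ppk_individuals:.2f}")
print(f"Ppk from subgroup means: {ppk_averages:.2f}")  # inflated ~sqrt(5)-fold
```

With a subgroup size of 5, a process that is genuinely around Ppk = 1.1 can be misreported as better than 2.0 - "six sigma quality" on paper only.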
Cpk, Ppk are estimated on an improperly selected sample -
Related to the first two points is the issue of proper sample selection. How assured are you that your testing frequency and sample selection is truly representative of the product reaching the customer? When was the last time you verified that current sampling plans represent all of the variation in the "lot"? Is your test sampling plan based on tribal knowledge (this is the way we have always done it) or is your sampling based on statistically-validated Components of Variance studies? Have such studies been performed following process or product modifications (Management of Change)?
Your specifications are very wide -
An unusually large spread between your specification limits can also result in a large Ppk or Cpk value. The issue here is whether your customers can accept this amount of potential variability in your product performance. Unless your organizational culture is committed to a "Run to Target" mindset and you have genuinely working (i.e. effective) process controls, wide specification limits only provide temptation to release product that deviates from the norm as long as it is in spec. Capability indices are point estimates; they vary over time depending on the sample collected, and they do not drive daily production decisions. If the customer desperately needs the product, it is only human nature to find a way to ship "suspect" product - retest, slit and salvage, etc. How often have you heard the phrase, or something similar: "When in doubt, ship it out"? Generally the outcome is: if the customer wants it bad, they will get it... bad.
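The arithmetic is straightforward: Ppk is just the distance from the mean to the nearest spec limit divided by three overall standard deviations, so widening the limits raises the index without changing the process at all. The numbers below are invented for illustration:

```python
# Same hypothetical process (mean 100, overall sigma 3) judged against two
# different specification windows (all values illustrative).
def ppk(mu, sigma, lsl, usl):
    return min(usl - mu, mu - lsl) / (3 * sigma)

mu, sigma = 100.0, 3.0

# Tight, customer-driven specs: capability looks mediocre.
ppk_tight = ppk(mu, sigma, lsl=94.0, usl=106.0)   # 6 / 9, about 0.67

# Very wide specs around the identical process: capability looks "six sigma".
ppk_wide = ppk(mu, sigma, lsl=80.0, usl=120.0)    # 20 / 9, about 2.22

print(f"Tight specs: Ppk = {ppk_tight:.2f}")
print(f"Wide specs:  Ppk = {ppk_wide:.2f}")
```

Nothing about the process improved between those two lines - only the goalposts moved, which is why the index alone says little about what the customer experiences.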
Your test methods do not predict fitness for use -
If your capability indices are large but you continue to receive frequent complaints and experience high DPPM, it might also be the result of inadequate test methods. Customer requirements are constantly changing. When was the last time you validated your customer requirements? This practice of validating and re-validating the Voice of the Customer influences how specification limits are set (see the point above), but it also helps confirm whether your manufacturing-friendly tests are adequate and relevant. Test method MSA studies (e.g. Gage R&R) to evaluate and improve test method capability are definitely important for reducing overall process variability, but very repeatable and reproducible tests are meaningless if the test method itself does not predict fitness for use. Large Cpk or Ppk of a process, as measured by an irrelevant test method, cannot assure customer satisfaction.
Your process is not stable; a trend exists -
In cases where short-term variation is much smaller than long-term variation, the Cp metric (based on short-term, within-subgroup variation) can be unusually large in comparison to Ppk (based on overall, long-term variation). For example, the variability between successive measurements (individuals) may be relatively small while the overall process trends upward or downward.
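This Cp-versus-Ppk gap can be simulated directly. In the sketch below - all parameters invented for illustration - short-term sigma is estimated from the average moving range divided by the d2 constant (1.128 for n = 2), as on an individuals control chart, while long-term sigma is the overall standard deviation including the trend:

```python
import numpy as np

# Hypothetical trending process: specs 90-110, slow upward drift plus noise.
rng = np.random.default_rng(1)
lsl, usl = 90.0, 110.0
t = np.arange(500)
x = 95.0 + 0.02 * t + rng.normal(scale=1.0, size=t.size)  # drift + noise

# Short-term sigma from the moving range (MR-bar / d2, d2 = 1.128 for n = 2).
sigma_short = np.abs(np.diff(x)).mean() / 1.128
# Long-term sigma: overall standard deviation, trend included.
sigma_long = x.std(ddof=1)
mu = x.mean()

cp = (usl - lsl) / (6 * sigma_short)
ppk = min(usl - mu, mu - lsl) / (3 * sigma_long)

print(f"Cp  (short-term sigma): {cp:.2f}")
print(f"Ppk (long-term sigma):  {ppk:.2f}")  # much smaller - the trend dominates
```

A large gap between Cp and Ppk like this is itself a diagnostic: it signals that the process is not stable, and that chasing the Cp number will not reflect what the customer sees over time.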
These six examples highlight the more common reasons for disconnects between internal measures of quality and customer perceptions of quality. Both sets of measures should be monitored and tracked over time in a balanced scorecard.