It has been reported that the National Commission on Fiscal Responsibility and Reform (NCFRR) has suggested eliminating the Baldrige Performance Excellence Program as a cost-cutting move to reduce the US national debt. An illustrative example from the NCFRR states that the Baldrige Award Program, along with support for the Hollings Manufacturing Extension Partnership, costs the US taxpayer approximately $120 million. Compare that figure to the $16 billion spent on earmarks, or the $20 billion wasted purchasing military hardware that the US military does not even want.
The Malcolm Baldrige National Quality Program was established in 1987 to recognize performance excellence in public and private U.S. organizations, thereby promoting U.S. competitiveness. A network of state, regional, and local Baldrige-based award programs provides potential award applicants and examiners, promotes the use of the Criteria, and disseminates information about the Award process. The National Institute of Standards and Technology (NIST), an agency of the U.S. Department of Commerce, manages the Baldrige National Quality Program, and the American Society for Quality (ASQ) assists in administering the Award Program. Many enterprises around the globe now follow the Baldrige Criteria.
I am a 29-year Quality veteran, an ISO 9001 Lead Auditor, and a Baldrige Examiner for the State of Minnesota. I consider myself socially liberal but fiscally conservative, generally favoring a smaller central government. Without getting into a deep philosophical discussion of my personal political views, I believe a critical role of the U.S. Government is to protect the Republic, uphold the Constitution and Bill of Rights, support human rights and protect civil liberties. Consider the federal Departments of Transportation and Commerce: just as a modern, efficient transportation system is critical to the flow of goods and services for economic growth and national security, the Baldrige Criteria are critical to assuring the long-term viability of organizations, thereby protecting the competitiveness of the country in an increasingly global economy.
How ironic, then, that the NCFRR has identified the Baldrige National Quality Program as wasteful, when it is the Baldrige Criteria that offer a long-term solution to waste reduction and performance improvement. Rather than cutting costs by freezing wages, eliminating jobs and reducing services, we need government to focus on eliminating waste and non-value-added activities to improve its productivity, cost effectiveness and operational excellence. It's all about leadership, strategic planning, taxpayer and constituent focus, measurement and analysis, employee engagement, process management, and results. Sound familiar?
Quality is not an expense, it is an investment.
Wednesday, December 08, 2010
Friday, November 05, 2010
Raising the Voice of Quality
In his October 26, 2010 post to ASQ’s new blog, “A View from the Q,” Executive Director Paul Borawski announced that ASQ was embarking on ASQ 2015, an initiative to evolve ASQ’s role in the world to “Raise the Voice of Quality.” Paul notes that in November 2010 ASQ joins the world’s quality organizations in observing World Quality Month, stating, “We join the world in its efforts to bring attention to the impact quality is having in every corner of the globe. Better quality in products and services, better healthcare, better education, better government, better nonprofit organizations, better communities - individually and collectively making the world a better place.”
Paul Borawski asks what it would take to get the world’s attention to focus on quality; to have the world realize the full potential of quality?
I can’t describe it but I’ll know it when I see/feel it...
The first challenge is to have a common language around Quality. Joseph M. Juran’s Quality Control Handbook is the standard reference work on quality control and established Juran as an authority on quality. In his book Juran defines quality as fitness for use described by:
• features that meet customer needs, and
• freedom from deficiencies (errors, waste, defects, etc.).
Juran is widely credited with adding the human dimension to quality management. He pushed for the education and training of managers. He also developed the "Juran trilogy," an approach composed of three managerial processes: quality planning, quality control, and quality improvement.
W. Edwards Deming's conceptualization of quality suggests that quality must meet both explicit and latent needs. Deming believed that quality should be the underlying philosophy of a business rather than simply a component of its strategic plan.
Philip Crosby, in his 1979 book Quality Is Free, defined quality as conformance to requirements. My difficulty with this limited definition lies in the following key areas:
1. It reinforces a goal-post mentality and behavior where everything inside the specification limits is treated as equally good
2. It assumes that specifications were soundly established in the first place, and continuously validated over time to keep pace with changing customer needs
3. It assumes the test methods for which the specifications were originally developed are relevant to customer use. Furthermore, are the test methods robust to uncontrollable noise and other effects? Are they stable and capable?
4. It assumes the sample being tested is representative of the lot. Has the product presented for inspection been sampled properly?
David A. Garvin ("Competing on the Eight Dimensions of Quality", Harvard Business Review, November-December 1987) proposes eight critical dimensions or categories of [product] quality that serve as a framework for strategic analysis: Performance, features, reliability, conformance, durability, serviceability, aesthetics, and perceived quality.
Genichi Taguchi defines the lack of quality as a loss to society. The societal-loss perspective suggests that “quality is the loss a product causes to society after being shipped, other than losses caused by its intrinsic functions”. Losses to society result from off-target performance, variability in performance, or harmful side effects. Losses due to harmful side effects are referred to by economists as "negative externalities" or external diseconomies of production or consumption. Diseconomies of production occur when a producer's actions result in an uncompensated loss to others. This societal-loss perspective of quality is the foundation of today’s “Green” and “Sustainability” initiatives.
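Taguchi's idea can be made concrete with his quadratic loss function, L(y) = k(y - T)^2, where T is the target and k is a cost constant. The sketch below uses made-up numbers purely for illustration:

```python
# Sketch of Taguchi's quadratic loss function, L(y) = k * (y - T)^2,
# where T is the target and k is a cost constant. All numbers below
# are illustrative assumptions, not data from any real process.

def taguchi_loss(y, target, k):
    """Monetary loss attributed to a unit measured at y."""
    return k * (y - target) ** 2

# Suppose a $4.00 loss is incurred at the spec limit, 2 units from target:
# k = loss_at_limit / (distance ** 2)
k = 4.00 / (2.0 ** 2)

# A unit exactly on target incurs no loss; loss grows with deviation,
# even while the unit is still "in spec" under goal-post thinking.
print(taguchi_loss(10.0, 10.0, k))  # 0.0
print(taguchi_loss(11.0, 10.0, k))  # 1.0
print(taguchi_loss(12.0, 10.0, k))  # 4.0  (at the spec limit)
```

Note how this contrasts with the goal-post view: under conformance-to-requirements thinking, all three units above are equally "good."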
Have a strong customer focus to improve key value streams
Regardless of how the producer/provider might define quality, the customer is the ultimate judge of quality. A “quality” product, service or transaction must deliver value to the customer – as perceived by the customer. It has been my experience that the single most impactful driver of quality is a strong customer focus (followed closely by process and system improvement, and total involvement). Customer focus is the greatest enabler of employee engagement – and employee engagement is critical to the long-term success of an enterprise. The notion of “Loss to Society” is a powerful driver to the continuous improvement of systems, processes, products and services because each of us wants to leave a positive legacy for our children and grandchildren.
Paul Borawski has said that “vision represents the end state, strategy represents the starting point...” (Quality Progress, June 2007). I further suggest that the organization’s values and principles establish the boundaries of acceptable norms and behaviors among its people as they deploy and execute the strategic, operational and tactical plans towards accomplishing its mission.
Customer focus must be internalized and structured as a key organizational and personal value. Per the Baldrige Criteria for Performance Excellence, Leaders must
• identify and innovate product offerings to meet the requirements and exceed the expectations of customer groups and market segments, and
• create an organizational culture that ensures a consistently positive customer experience and contributes to customer engagement and workforce performance.
“Quality at the Source” - the practice of shifting responsibility and ownership for quality to the production operator - recognizes that quality cannot be inspected into the product; rather, quality is designed and built into the product at each step of the value stream. Quality by Design and Quality at the Source are good first steps, but customer focus requires more. All of the various customer touchpoints in an organization - marketing, field sales, customer service, technical service, quality engineering, complaint analysis, etc. - must be aligned and coordinated into a cohesive organizational strategy.
Leaders must talk the talk to clearly and consistently communicate their expectations, and walk the walk to personally model the desired behaviors. Every individual must take personal accountability to “own” a customer issue when it presents itself, and personally see the issue through to its successful resolution. Voice of Customer (VOC) should not only be an integral component of new product development; customer feedback and VOC validation must also be integrated into the Management of Change process. Customer requirements are constantly changing. What was once an exciting feature eventually becomes an expected or even basic need (see the Kano model).
Apply statistical thinking everywhere...
Quality performance beyond conformance to requirements is often seen as a cost rather than an investment. A focus on the customer (and an eye on the competition) will help assure the organization will not be satisfied with the status quo, but will promote organizational learning and continuous improvement. The ASQ Statistics Division has been promoting statistical thinking for 30 years as a philosophy of learning and action to understand and manage variation for performance excellence. Statistical thinking can be applied to improve strategic, managerial and operational processes everywhere.
Friday, July 02, 2010
Are Your Capability Indices Too Low for Your Process?
My prior blog post was about the incorrect calculation of Cp, Cpk, Pp, Ppk that results in unusually high capability indices given the customer perceptions of your product quality (e.g. the capability indices are > 1.0 but customer complaints for the same or equivalent property or failure mode are frequent).
This post is about the opposite problem - estimates of process capability that are too low. The most common cause (pun intended) of artificially low (small) capability indices is the selection of incorrect limits as the specification limits.
Recall: Cp = (USL - LSL) / 6σ.
"How can that be - how could we choose the wrong limits?" you ask. Simply put, your release limits for purposes of dispositioning product may not be the true customer tolerance limits. Given that nearly all product testing is performed on a small sample of a production lot, judgment about the quality of that lot is based on the decision limit for the sample. However, the Decision Limit (DL) of a sample is NOT the Individual Specification Limit (ISL).
Let's take an example from in-process QC testing: Say we typically perform a QC test on three "individual" samples on every 3rd output of production. We apply the "Spec Limits" of 18 - 36 units to each individual. Each individual (sample) must pass this specification (i.e. n=1, c=0 sampling plan); otherwise, the "Lot" of 3 outputs produced since the last known good test result are placed on Quality Hold.
Question: Are these (18-36) the specification limits you are using to calculate your process capability? If so, you may be under-estimating your Cp, Cpk, Pp and Ppk. Why? Because Decision Limits are not Specification Limits. The true specification limits may be wider, resulting in larger Cp and Cpk values. Fortunately, the ISL can be calculated from knowledge of the process variation, the decision limit, and the 'k' factor.
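As a rough sketch of the arithmetic, the snippet below computes Cp and Cpk against the 18-36 decision limits and then against hypothetical ISLs widened by an assumed k factor. The data, the k value, and the DL-to-ISL widening convention shown here are all illustrative assumptions, not a prescribed method:

```python
# Sketch: Cp/Cpk computed against decision limits vs. against wider
# (hypothetical) individual specification limits. The data, the k
# factor, and the widening convention are assumed for illustration only.
import statistics

def cp_cpk(data, lsl, usl):
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)          # sample standard deviation
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

# Assumed process data, roughly centered at 27
data = [25.1, 27.3, 28.0, 26.5, 29.2, 24.8, 27.9, 26.1, 28.6, 27.5]

# Capability against the release/decision limits (DL) of 18-36
cp_dl, cpk_dl = cp_cpk(data, 18.0, 36.0)

# One possible (assumed) convention: widen the DL by k*sigma on each
# side to recover illustrative individual specification limits
k = 2.0                                     # assumed k factor
sigma = statistics.stdev(data)
isl_lo, isl_hi = 18.0 - k * sigma, 36.0 + k * sigma
cp_isl, cpk_isl = cp_cpk(data, isl_lo, isl_hi)

print(f"Cp against DL:  {cp_dl:.2f}")
print(f"Cp against ISL: {cp_isl:.2f}")      # wider limits -> larger Cp
```

The direction of the effect is the point: any widening of the limits, with the process variation unchanged, increases the computed indices.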
Friday, April 16, 2010
Are Your Capability Indices Too Good for Your Process?
A "six sigma quality" process is said to exhibit a process capability ratio Cp of 2.0 or greater and a Ppk of 1.50 or greater. As I travel around the world delivering Lean Six Sigma and Quality training and consulting, I have observed several instances where an organization proudly reports Cpk and Ppk levels greater than 2.0 - only to learn that it is still experiencing frequent customer complaints and elevated DPPM (defective parts per million) counts. My initial reaction is suspicion about how the capability index was calculated, followed closely by the question of what the organization plans to do about it.
If your process capability is truly approaching 2.0 or greater, and you are not experiencing customer complaints, it tells me that you are quite possibly leaving money on the table with respect to optimizing your operational efficiency. Certainly, there are industries, products and services that require Six Sigma quality performance or greater, but they are the exception rather than the rule. The goal of Lean Six Sigma should never be to chase higher and higher capability indices, but rather to optimize customer value and secure competitive advantage. Capability indices are a contrived metric to give management a snapshot view of overall process performance. A myopic focus on driving process capability improvement for the sake of increasing the process' Ppk value does not necessarily optimize value for the customer. Often, such a practice leads to over-engineered products - needlessly adding cost without achieving higher sales or greater market share. If you are already better than the competition, and the customer cannot perceive greater value - and is not willing to pay more or buy more - allocate your resources to other, more important improvement opportunities. Improving quality for quality's sake is a major reason why TQM (Total Quality Management) efforts failed so miserably in the 1980s.
My general rule of thumb recommendation is to target for Ppk = 1.33. I also remind organizations that in practice there are often conflicting and competing customer requirements. It may not be possible or desirable to drive every customer CTQ (critical to quality) characteristic to Ppk >= 1.33 due to trade-offs in product design. The criticality of the defect should also be considered when setting an improvement goal. For example, processes with defects potentially causing serious injury - or worse - should demonstrate much greater short-term and long-term capability than less critical defects.
Far too often, however, reported Ppk values of 2.0 or greater are not representative of the true process performance. Organizations continue to experience elevated complaint levels and/or high DPPM values, raising serious questions about how the capability indices were calculated. How might this happen? I have summarized several scenarios below:
Cpk, Ppk values are calculated only on sorted, shipped product -
This incorrect calculation of Ppk is often justified because "it represents the quality level of product going to the customer". WRONG. Inspection and sorting are never 100% accurate. One is lulled into a false sense of security, believing all bad product has been sorted out - especially where the process is not stable, where 100% inspection is impossible, and where test methods are destructive in nature (so true test error cannot be determined). Capability indices must be calculated on all output of the process - good and bad, scrapped and shipped.
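A quick simulation illustrates the effect (all numbers are illustrative, not from a real process): computing Ppk only on the in-spec, shipped units censors the distribution's tails, shrinks the estimated standard deviation, and inflates the index:

```python
# Sketch: computing Ppk only on sorted (shipped) product censors the
# tails and understates sigma, inflating the index. Purely simulated.
import random
import statistics

random.seed(1)

LSL, USL = 9.0, 11.0
all_units = [random.gauss(10.0, 0.6) for _ in range(5000)]  # full process output
shipped = [x for x in all_units if LSL <= x <= USL]         # after 100% sort

def ppk(data, lsl, usl):
    mu, s = statistics.mean(data), statistics.stdev(data)
    return min(usl - mu, mu - lsl) / (3 * s)

print(f"Ppk, all output:   {ppk(all_units, LSL, USL):.2f}")
print(f"Ppk, shipped only: {ppk(shipped, LSL, USL):.2f}")   # optimistically high
```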
Cpk, Ppk values are estimated on subgroup averages -
Capability indices should almost always be calculated from individuals data. Your customer experiences variation between individual units, not averages. Variability of subgroup averages will always be smaller than the variation among individuals, resulting in a smaller standard deviation, thereby inflating your process capability metric. For more information refer to the statistical concept of the Central Limit Theorem.
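A short simulation demonstrates the Central Limit Theorem effect described above (simulated, illustrative data only):

```python
# Sketch of why capability computed from subgroup averages is inflated:
# the standard deviation of means of subgroups of size n is sigma/sqrt(n)
# (Central Limit Theorem). Simulated, illustrative data only.
import random
import statistics

random.seed(2)

n = 5                                             # subgroup size
individuals = [random.gauss(50.0, 2.0) for _ in range(2000)]
subgroup_means = [statistics.mean(individuals[i:i + n])
                  for i in range(0, len(individuals), n)]

s_ind = statistics.stdev(individuals)
s_avg = statistics.stdev(subgroup_means)

print(f"stdev of individuals:    {s_ind:.2f}")    # ~2.0
print(f"stdev of subgroup means: {s_avg:.2f}")    # ~2.0/sqrt(5), i.e. ~0.9
# Using s_avg in a Cp/Cpk formula overstates capability by roughly sqrt(n).
```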
Cpk, Ppk are estimated on an improperly selected sample -
Related to the first two points is the issue of proper sample selection. How assured are you that your testing frequency and sample selection is truly representative of the product reaching the customer? When was the last time you verified that current sampling plans represent all of the variation in the "lot"? Is your test sampling plan based on tribal knowledge (this is the way we have always done it) or is your sampling based on statistically-validated Components of Variance studies? Have such studies been performed following process or product modifications (Management of Change)?
Your specifications are very wide -
An unusually large spread between your specification limits can also result in large Ppk and Cpk values. The issue here is whether your customers can accept this amount of potential variability in your product performance. Unless your organizational culture is committed to a "Run to Target" mindset and you have honest-to-goodness working (i.e. effective) process controls, wide specification limits only provide temptation to release product that deviates from the norm as long as it is in spec. Capability indices are point estimates; they vary over time depending on the sample collected, and they do not drive daily production decisions. If the customer desperately needs the product, it is only human nature to find a way to ship "suspect" product - retest, slit and salvage, etc. How often have you heard the phrase, or something similar: "When in doubt, ship it out"? Generally the outcome is: if the customer wants it bad, they will get it... bad.
Your test methods do not predict fitness for use -
If your capability indices are large but you continue to receive frequent complaints and experience high DPPM, the cause might also be inadequate test methods. Customer requirements are constantly changing. When was the last time you validated your customer requirements? This work of validating and re-validating the Voice of Customer influences how specification limits are set (see the point above), and also helps confirm whether your manufacturing-friendly tests are adequate and relevant. Test method MSA studies (e.g. Gage R&R) to evaluate and improve test method capability are certainly important for reducing overall process variability, but highly repeatable and reproducible tests are meaningless if the test method itself does not predict fitness for use. Large Cpk and Ppk values for a process measured by an irrelevant test method cannot assure customer satisfaction.
Your process is not stable; a trend exists -
In cases where short-term variation is much smaller than long-term variation, the Cp metric can be unusually large in comparison to the Ppk. For example, the variability between successive measurements (individuals) is relatively small, but the overall process is trending upward or downward.
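A simulated drifting process makes this concrete. The numbers are illustrative; short-term sigma is estimated here from the average moving range using the d2 = 1.128 constant, one common convention:

```python
# Sketch: a drifting process has small short-term (within) variation but
# large overall variation, so Cp (short-term sigma, estimated from the
# average moving range) far exceeds Ppk. Simulated, illustrative data.
import random
import statistics

random.seed(3)

LSL, USL = 90.0, 110.0
# Slow upward trend plus small short-term noise
data = [96.0 + 0.008 * i + random.gauss(0, 0.5) for i in range(1000)]

mr = [abs(b - a) for a, b in zip(data, data[1:])]   # moving ranges
sigma_short = statistics.mean(mr) / 1.128           # d2 constant for n=2
sigma_long = statistics.stdev(data)                 # overall variation

mu = statistics.mean(data)
cp = (USL - LSL) / (6 * sigma_short)
ppk = min(USL - mu, mu - LSL) / (3 * sigma_long)

print(f"Cp  (short-term sigma): {cp:.2f}")          # large
print(f"Ppk (long-term sigma):  {ppk:.2f}")         # much smaller
```

The gap between the two indices is itself a useful diagnostic: a Cp far above Ppk is a signal to look for drift or other instability before celebrating the capability number.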
These six examples highlight the more common reasons for disconnects between internal measures of quality and customer perceptions of quality. These distinctions should be monitored and tracked over time in a balanced scorecard.