Analyzing and Improving Measurement Systems: A Key to Effective Decision‐making
Author(s) -
Montgomery, Douglas C.;
Brombacher, Aarnout C.
Publication year - 2006
Publication title -
Quality and Reliability Engineering International
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.913
H-Index - 62
eISSN - 1099-1638
pISSN - 0748-8017
DOI - 10.1002/qre.797
Many papers in this journal have addressed the analysis and improvement of quality systems using a wide range of methods and tools. Unfortunately, only a minority of these papers address the application of those methods and tools in the field. This editorial deals with an often neglected but extremely important part of any operational quality system: measurements.

Measurements are a significant component of any quality system. Practitioners familiar with Six Sigma know that measurement is an integral component of the DMAIC problem-solving process, but it is even more important than that. An ineffective measurement system can have a dramatic impact on business performance because it leads to uninformed (and usually bad) decision-making. Most engineers and quality professionals are familiar with the two Rs of measurement systems capability: repeatability (do we get the same observed value if we measure the same unit several times under identical conditions?) and reproducibility (how much difference in observed values do we experience when units are measured under different conditions, such as different operators, time periods, and so forth?). Many publications describe how to perform measurement systems capability or 'gauge R&R' studies and how to interpret the outcome. Inadequate repeatability or reproducibility can usually be traced to problems with instrument calibration or condition, operator training or experience, environmental conditions, poorly written or implemented standard procedures, or measurement system resolution.

Some of the criteria for evaluating measurement capability include the very widely used precision-to-tolerance (P/T) ratio, the signal-to-noise ratio, and the discrimination ratio. The problem with these approaches to evaluating measurement systems capability is that they answer only indirectly (if at all) the really fundamental question: is the system able to distinguish between good and bad units?
That is, what is the probability that a good unit is judged to be defective and, conversely, that a bad unit is passed along to the customer as good? These misclassification probabilities are fairly easy to calculate from the results of a standard measurement systems capability study, and they give reliable, useful, and easy-to-understand information about measurement system performance. It turns out that there is not a strong correlation between the standard measures of measurement systems capability (such as P/T ratios) and these misclassification probabilities. It is also not too difficult to construct confidence intervals on these misclassification probabilities. This suggests that some of the standard practices for reporting the outcome of typical measurement systems capability studies need to change, as does the statistical software that supports these studies.

In addition to the well-known repeatability and reproducibility, there are other important, often ignored, aspects of measurement systems capability. The linearity of a measurement system reflects the differences in observed accuracy and/or precision experienced over the range of measurements made by the system; a simple linear regression model is often used to describe this feature. Problems with linearity are often the result of calibration and maintenance issues. Stability, or different levels of variability in different operating regimes, can result from warm-up effects, environmental factors, inconsistent operator performance, and inadequate standard operating procedures. Bias reflects the difference between observed measurements and a 'true' value obtained from a master or 'gold' standard, or from a different measurement technique known to produce accurate values. It is very difficult to monitor, control, improve, or effectively manage a process with an inadequate measurement system.
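As a sketch of the standard criteria mentioned above, the P/T, signal-to-noise, and discrimination ratios can all be computed from the two variance components that a gauge R&R study estimates: part-to-part variance and total gauge (R&R) variance. The function below is illustrative, not from the editorial; the variance values in the usage note are invented for the example, and the multiplier k = 6 (some references use 5.15) is a conventional choice.

```python
import math

def gauge_metrics(sigma2_part, sigma2_gauge, lsl, usl, k=6.0):
    """Summary ratios from gauge R&R variance components.

    sigma2_part  : estimated part-to-part variance
    sigma2_gauge : estimated total gauge (repeatability + reproducibility) variance
    lsl, usl     : lower and upper specification limits
    k            : spread multiplier (6.0 covers +/-3 sigma; 5.15 is also common)
    """
    sigma2_total = sigma2_part + sigma2_gauge
    rho_p = sigma2_part / sigma2_total                    # fraction of variance due to parts
    pt_ratio = k * math.sqrt(sigma2_gauge) / (usl - lsl)  # precision-to-tolerance ratio
    snr = math.sqrt(2.0 * rho_p / (1.0 - rho_p))          # signal-to-noise ratio
    dr = (1.0 + rho_p) / (1.0 - rho_p)                    # discrimination ratio
    return pt_ratio, snr, dr
```

For example, with a part variance of 4.0, a gauge variance of 1.0, and specifications of 0 to 20, this gives a P/T ratio of 0.30, a signal-to-noise ratio of about 2.83, and a discrimination ratio of 9. Note that none of these numbers directly answers how often a unit will be misclassified, which is the editorial's point.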
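The two misclassification probabilities discussed above can be estimated directly once the variance components are known. The sketch below assumes a simple model (not specified in the editorial): true part values are normally distributed and measurement error is unbiased and normal, so an observed value is the true value plus noise. A Monte Carlo estimate then counts good units that fail inspection (false rejects) and bad units that pass (false accepts); all parameter values here are illustrative.

```python
import random

def misclassification_rates(mu, sigma_part, sigma_meas, lsl, usl,
                            n=200_000, seed=1):
    """Monte Carlo estimate of the two misclassification probabilities.

    Assumes true part values X ~ N(mu, sigma_part^2) and unbiased
    measurement error E ~ N(0, sigma_meas^2), so the observed value
    is Y = X + E.  Returns (P(false reject), P(false accept)).
    """
    rng = random.Random(seed)
    false_reject = 0  # good unit judged defective
    false_accept = 0  # bad unit passed along as good
    for _ in range(n):
        x = rng.gauss(mu, sigma_part)       # true value of the unit
        y = x + rng.gauss(0.0, sigma_meas)  # observed (measured) value
        good = lsl <= x <= usl
        passed = lsl <= y <= usl
        if good and not passed:
            false_reject += 1
        elif not good and passed:
            false_accept += 1
    return false_reject / n, false_accept / n
```

Running this with increasing measurement noise (say, sigma_meas of 0.1 versus 0.6 against a part sigma of 1.0 and specifications at +/-3) shows both probabilities growing with gauge variability, which is exactly the decision-making cost that ratio-based summaries report only indirectly.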
It is somewhat analogous to navigating a ship through fog without radar: eventually you are going to hit the iceberg! Even if no catastrophe occurs, you are always going to be wasting time and money looking for problems when none exist and dealing with unhappy customers who have received a defective product. As excessive measurement variability becomes part of overall product variability, it also has