Measurement System Analysis (MSA and Gage R&R)

A Measurement System Analysis, abbreviated MSA, is a specially designed experiment that seeks to identify the components of variation in the measurement. Just as processes that produce a product vary, the process of taking measurements and collecting data also has variation and can produce defects. A Measurement Systems Analysis evaluates the inspection method, the gages, and the entire process of obtaining the measurements. This provides assurance of the integrity of the data used for analysis and helps us understand the implications of measurement error for decisions made about a product or process. MSA is an important element of Six Sigma methodology and of other quality management systems.

MSA analyzes the equipment, operations, procedures, software, and gage users that affect the assignment of a number to a measurement characteristic. A Measurement Systems Analysis considers the following: selecting the correct measurement and method, assessing the measuring device, assessing procedures and operators, assessing any measurement interactions, and calculating the measurement uncertainty of individual measurement devices and/or measurement systems. Measurement uncertainty is also called gage error.

Common tools and techniques of Measurement Systems Analysis include calibration studies, fixed effect ANOVA, components of variance, Attribute Gage Study, Gage R&R, ANOVA Gage R&R, Destructive Testing Analysis, and others. The technique selected is determined by the characteristics of the measurement system itself.

Process Variation v. Specification Tolerance Range

Measurement uncertainty, or gage error, is determined as a percentage of either the tolerance range for a specific product characteristic or the total process variation (tpv or pv). Many companies insist that gage error be determined using the tolerance range, but it is also informative to calculate gage error using process variation. By using the tpv/pv in our calculations as well, we can determine how much variation our gage system is contributing to the overall process variation. If that contribution is 30% or greater, we need to make improvements to the measurement system.
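As a rough illustration of the comparison just described, the Python sketch below expresses a gage's spread as a percentage of both the specification tolerance range and the total process variation. The standard deviations, specification limits, and the six-standard-deviation spread convention are assumptions made up for this example, not values from any particular study.

```python
# Hypothetical inputs -- replace with estimates from your own gage study.
gage_std_dev = 0.012      # estimated standard deviation of the measurement system
process_std_dev = 0.050   # estimated total process standard deviation (tpv/pv)
usl, lsl = 10.15, 9.85    # specification limits for the characteristic

tolerance_range = usl - lsl

# One common convention spreads each source of variation over 6 standard deviations.
gage_spread = 6 * gage_std_dev
process_spread = 6 * process_std_dev

pct_of_tolerance = 100 * gage_spread / tolerance_range
pct_of_process_variation = 100 * gage_spread / process_spread

print(f"Gage error as % of tolerance:         {pct_of_tolerance:.1f}%")
print(f"Gage error as % of process variation: {pct_of_process_variation:.1f}%")

# 30% rule of thumb, judged against whichever basis is the more demanding:
if max(pct_of_tolerance, pct_of_process_variation) >= 30:
    print("The measurement system needs improvement.")
else:
    print("The measurement system passes the 30% rule of thumb.")
```

Reporting both percentages shows at a glance whether the gage is adequate both for judging conformance to the tolerance and for seeing the process variation itself.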
Measurement System Analysis Components

A measurement system has five components:

Bias - also referred to as accuracy, bias is a measure of the distance between the average value of the measurements and the "true" or "actual" value of the sample or part.

Linearity - linearity is the bias throughout the complete operating range of the gage. All gages have bias, and we want that bias to be linear. This allows us to know what the bias is at any point in the gage's range. For example, if a car's speedometer is off by 0.5 MPH at 35 MPH and off by 1.0 MPH at 65 MPH, the speedometer's bias is NOT linear.

Stability - stability is how long the gage can maintain its acceptable operating status under normal conditions of use. The stability factor gives us a yardstick for determining how often a gage needs to be checked and possibly calibrated.

Repeatability - repeatability assesses whether the same appraiser can measure the same part/sample multiple times with the same measurement device and get the same value. Repeatability is the amount of variation the gage (equipment) itself contributes to the overall error.

Reproducibility - reproducibility assesses whether different appraisers can measure the same part/sample with the same measurement device and get the same value. Reproducibility is the amount of variation the appraisers (gage operators) contribute to the overall error. Reproducibility, by default, will also contain some error from repeatability because the gage users are still taking the measurements. This repeatability can be removed statistically, resulting in a more accurate reproducibility factor.

Accuracy and Precision

Although there are definitions and illustrations for the terms accuracy and precision, it is preferable not to use them to describe the condition of a gage. The terms often mean different things to different people, and the gage's condition could be misinterpreted. Stick with the five components above and there should be no problems.

How Much Gage Error Is Acceptable?

As a rule of thumb, a total Gage R&R error of 30% or less is acceptable. Base the percentage on whichever of the two bases, total process variation or specification tolerance range, is the lesser. However, this is only a rule of thumb, and the acceptable error can differ substantially depending on the specific product or characteristic being measured. For example, 30% may be very acceptable for the characteristics of a common construction framing nail, but totally unacceptable for the valves of an artificial heart. A worked sketch of the Gage R&R calculation follows at the end of this section.

If the measurement system is not capable (error above 30%), the effect of the error can be reduced by taking multiple measurements and averaging the results. This can be time consuming and expensive, but it can provide reliable measurement data while the measurement system is undergoing improvement.
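To show how the repeatability and reproducibility components described above combine into a single Gage R&R figure, here is a minimal sketch of a crossed study evaluated with the average-and-range method. The 10-part, 3-appraiser, 3-trial layout, the randomly generated measurements, and the K1/K2/K3 constants (the commonly tabulated values for that study size) are assumptions for illustration only; a real study would use actual measurements and the constants matching its own layout.

```python
import numpy as np

# Hypothetical crossed study: 10 parts x 3 appraisers x 3 trials.
# The data are generated randomly here just so the sketch runs end to end;
# in practice this array holds the actual recorded measurements.
rng = np.random.default_rng(0)
parts, appraisers, trials = 10, 3, 3
true_part_values = rng.normal(10.0, 0.05, parts)
data = (true_part_values[:, None, None]                       # part-to-part variation
        + rng.normal(0, 0.004, (1, appraisers, 1))            # appraiser (reproducibility) effect
        + rng.normal(0, 0.006, (parts, appraisers, trials)))  # repeatability noise

# Average-and-range method constants for 3 trials, 3 appraisers, 10 parts
# (commonly tabulated values; look up the constants for other study sizes).
K1, K2, K3 = 0.5908, 0.5231, 0.3146

r_bar = np.mean(np.ptp(data, axis=2))      # average range of repeat trials per part/appraiser
x_diff = np.ptp(data.mean(axis=(0, 2)))    # range of the appraiser averages
r_p = np.ptp(data.mean(axis=(1, 2)))       # range of the part averages

ev = r_bar * K1                            # repeatability (equipment variation)
av = np.sqrt(max((x_diff * K2) ** 2 - ev ** 2 / (parts * trials), 0.0))  # reproducibility
grr = np.hypot(ev, av)                     # combined gage R&R
pv = r_p * K3                              # part-to-part variation
tv = np.hypot(grr, pv)                     # total variation

print(f"%EV  = {100 * ev / tv:5.1f}%   %AV = {100 * av / tv:5.1f}%")
print(f"%GRR = {100 * grr / tv:5.1f}%  (rule of thumb: 30% or less is acceptable)")
```

Note how the reproducibility estimate subtracts out a share of the repeatability, which is the statistical removal mentioned in the Reproducibility component above.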
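Finally, a small sketch of the averaging idea in the last paragraph, again with made-up numbers: averaging n independent repeat readings shrinks the repeatability portion of gage error by roughly the square root of n (it does not reduce bias), so a system that fails the 30% rule on single readings may pass it on averaged readings while improvements are underway.

```python
import math

# Hypothetical figures for a system that fails the 30% rule on single readings.
gage_std_dev = 0.020
tolerance_range = 0.30
target_pct = 30.0   # the rule-of-thumb ceiling discussed above

# Find the smallest number of repeat readings whose average meets the target,
# assuming the repeatability error is independent from reading to reading.
n = 1
while 100 * 6 * (gage_std_dev / math.sqrt(n)) / tolerance_range > target_pct:
    n += 1

pct_single = 100 * 6 * gage_std_dev / tolerance_range
pct_avg = 100 * 6 * (gage_std_dev / math.sqrt(n)) / tolerance_range
print(f"Single reading:        {pct_single:.0f}% of tolerance")
print(f"Average of {n} readings: {pct_avg:.0f}% of tolerance")
```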