Quantifying Variability Using Contributions from Taguchi

By Forrest W. Breyfogle III, from Tool and Manufacturing Engineers Handbook, Volume 7: Continuous Improvement. Society of Manufacturing Engineers, 1993, pages 10-14 through 10-19. Copyright 1994 Smarter Solutions, Austin, Texas.

Why, in the 1980s, did the doors of Japanese automobiles often sound better when they were closed than the doors of automobiles manufactured in America? Upon examination we might find that the two types of doors have a similar basic design; however, there are noted differences when several doors of each type are examined. The component parts of the better sounding door typically measure closer to the nominal specification requirements and exhibit less part-to-part variability. A higher quality sound is then possible because the clearance between mating parts is smaller, giving a tighter fit when the door is closed. This basic "reduction in variability" and "striving for the best dimension" strategy is consistent with the philosophy of Genichi Taguchi, which differs from traditional practices often found in American industry.

The experimentation procedures proposed by Taguchi have brought both acclaim and criticism. Some claim that the procedures are easier to use than classical statistical techniques, while statisticians have noted problems that can lead to erroneous conclusions. However, most statisticians and engineers would probably agree with Taguchi that, in the past, too little emphasis was given to reducing process variability within the product design and manufacturing processes.

Many articles have been written on the positive and negative aspects of the mechanics used by Taguchi. Rather than dwell upon all the specific mechanics proposed by Taguchi, an overview of a strategy is given here that illustrates the application of Taguchi philosophy with classical statistical tools, at times noting differences between the mechanics.

To illustrate the strategy, consider a specification of 75 ± 3 (i.e., a measurement between 72 and 78 is within specification), where a sample of parts measured the following, relative to this specification:

75.7 75.9 76.1 76.3 76.7 76.8 76.9 77.1 77.4 77.7

A traditional American industry evaluation would be to judge each component part from this type of data as either passing or failing the specification. With this line of thinking there often persists either the naive expectation that "all" parts from the population will be within specification or the assurance that "out of specification" parts will be discarded or reworked after inspection.

This type of thinking does not typically lead to considering what could be done to improve the process so that it better meets the needs and desires of the customer. Instead of evaluating the measurements as pass/fail, the process is better understood if the actual measurement values are examined; an industry using this alternative line of reasoning would typically reject fewer components and often spend less money on inspection. One way of looking at the data pictorially is to present the information in the histogram form shown in Figure 1, with an estimate for the shape of the probability density function (e.g., the bell-shaped PDF shown). From this type of plot we see that the data are skewed toward the upper limit and that a noticeable portion of the PDF curve extends beyond the upper specification limit.

[Figure 1: Histogram of the sample data with an estimated probability density function overlaid.]

The percentage of the total area under the PDF curve beyond the specification limits is an estimate of the percentage of parts expected to fall outside these limits. By examining the data from this point of view, we can change our previous "no defect" statement to a failure rate estimate of approximately 3%. However, the accuracy of this estimate is questionable. First, the data may not be normally distributed, an assumption that can dramatically affect an estimated PDF. Second, eyeballing this area relative to the total area under the curve is subject to considerable error and inconsistency between observers. A better approach is to make a probability plot of the data.
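Before turning to the probability plot, note that this tail area can also be computed rather than judged by eye. The following minimal sketch, written in Python (it is not part of the original article), assumes the sample is random and approximately normal; it fits a normal distribution to the ten measurements and estimates the fraction of the population outside the 72-78 limits:

    # Sketch: estimate the out-of-specification fraction from a fitted
    # normal distribution (assumes a random, approximately normal sample).
    import numpy as np
    from scipy import stats

    data = np.array([75.7, 75.9, 76.1, 76.3, 76.7,
                     76.8, 76.9, 77.1, 77.4, 77.7])
    lsl, usl = 72.0, 78.0          # specification limits for 75 +/- 3

    mean = data.mean()
    sd = data.std(ddof=1)          # sample standard deviation

    # Area of the fitted PDF below the lower limit plus the area above
    # the upper limit estimates the fraction of parts out of specification.
    frac_out = stats.norm.cdf(lsl, mean, sd) + stats.norm.sf(usl, mean, sd)
    print(f"mean = {mean:.2f}, sd = {sd:.3f}")
    print(f"estimated fraction out of specification = {frac_out:.1%}")

For these data the computed estimate comes out near 2%, which anticipates the probability plot result discussed next.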

If data plotted on normal probability paper follow a straight line, then the data are presumed to be from a normal distribution, and a percentage-of-population estimate is obtainable directly from the probability paper. Because of this more direct approach, the inaccuracies from the two previously discussed problems are reduced, yielding a more precise estimate. Some computer programs can create a normal probability plot for the previous data, such as the one shown in Figure 2. This type of plot can also be created manually using special probability paper (Breyfogle 1992), where the ranked data values are matched with plot positions determined from tables or equations.

[Figure 2: Normal probability plot of the sample data.]

The normal probability plot shown in Figure 2 for the above set of data indicates (if the samples are random) that approximately 2% (i.e., 100 - 98 = 2) of the population would be outside the upper specification limit of 78. We also note that the median, or mean, is offset by approximately 1.7 from the nominal specification value (i.e., 76.7 - 75.0 = 1.7).
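When probability paper is not at hand, a normal probability plot can be approximated in software. The sketch below is not from the article; it uses numpy, scipy, and matplotlib with one common plotting-position approximation, (i - 0.3)/(n + 0.4). The intercept and slope of a line fitted to the plot estimate the median and standard deviation, from which the percentage beyond the upper limit of 78 can be read:

    # Sketch: manual normal probability plot with median plotting positions.
    import numpy as np
    import matplotlib.pyplot as plt
    from scipy import stats

    data = np.sort([75.7, 75.9, 76.1, 76.3, 76.7,
                    76.8, 76.9, 77.1, 77.4, 77.7])
    n = len(data)

    p = (np.arange(1, n + 1) - 0.3) / (n + 0.4)   # plotting positions
    z = stats.norm.ppf(p)                         # standard normal quantiles

    # If the data are normal, the points fall roughly on a straight line
    # whose intercept estimates the median and whose slope estimates sigma.
    slope, intercept = np.polyfit(z, data, 1)
    plt.plot(z, data, "o")
    plt.plot(z, slope * z + intercept)
    plt.xlabel("standard normal quantile")
    plt.ylabel("measurement")
    plt.show()

    print(f"estimated P(x > 78) = {stats.norm.sf((78 - intercept) / slope):.1%}")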

By looking at the actual data values instead of making pass/fail component judgments, we are now able to assess, with a relatively small sample size, that the process mean should be adjusted lower to decrease the number of unsatisfactory production parts. We also note the amount of variability that the process is producing relative to the specification (i.e., 98% of the part-to-part variability consumes approximately 50% of the specification range). In some situations this amount of variability could lead to a noticeable amount of inconsistency in part-to-part performance, even though all components may be within specification. The Taguchi philosophy stresses the importance of reducing variability; however, management is often more interested in explanations that relate to monetary units than in part tolerances and data variability. To make this translation, Taguchi suggests using a loss function.

[Figure 3: Traditional view of component loss: no loss within the specification limits, full scrap loss beyond them.]

To illustrate the Taguchi loss function, consider first Figure 3, which shows how component loss is typically viewed by American industries. From this figure, it seems reasonable to question the logic of assigning no loss to a part that is exactly at a specification limit while assigning a loss equal to the full scrap value to a part that is barely outside the limit. An alternative is to consider a "loss to society," as expressed in Taguchi's loss function. Figure 4 shows a quadratic loss function whose scrap loss at the specification limits equates to that shown in Figure 3. With this strategy, however, a component part incurs an increasing amount of loss as it deviates from its nominal specification value, even while it remains within its specification limits. Unlike Figure 3, the curve in Figure 4 has no illogical, dramatic shift in loss between a part that exactly meets specification and one that is slightly beyond the specification limits.

[Figure 4: Taguchi quadratic loss function, with loss increasing as a part deviates from its nominal value.]
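In equation form, the quadratic loss of Figure 4 is L(y) = k(y - m)^2, where m is the nominal value and k is chosen so that the loss equals the scrap cost A at a specification limit, i.e., k = A/(half-tolerance)^2. The Python sketch below applies this to the sample data; the $50 scrap cost is a hypothetical value chosen only for illustration:

    # Sketch: Taguchi quadratic loss for the sample data. The $50 scrap
    # cost A is hypothetical, not a figure from the article.
    import numpy as np

    data = np.array([75.7, 75.9, 76.1, 76.3, 76.7,
                     76.8, 76.9, 77.1, 77.4, 77.7])
    m, half_tol, A = 75.0, 3.0, 50.0   # nominal, half-tolerance, scrap cost
    k = A / half_tol**2                # loss equals A at the spec limits

    per_part_loss = k * (data - m) ** 2        # L(y) = k * (y - m)^2
    # Average loss decomposes into a variability term and an off-nominal
    # term: k * (sigma^2 + (mean - m)^2), estimated here from the sample.
    mu, sigma = data.mean(), data.std(ddof=1)
    avg_loss = k * (sigma**2 + (mu - m) ** 2)
    print(f"k = {k:.2f} dollars per unit^2")
    print(f"estimated average loss = ${avg_loss:.2f} per part")

The decomposition k(sigma^2 + (mean - m)^2) makes the Taguchi message explicit in monetary terms: average loss falls both when the mean is centered on the nominal value and when variability is reduced.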

A Taguchi loss function strategy emphasizes reducing variability and striving for a process mean that equals the nominal specification. Companies that examine key specifications and strive for a mean of measurements equal to the nominal specification values, along with a reduction in data variability, can expect to produce products that customers perceive as having higher, more consistent quality. In the television industry, for example, the quality of the picture would then be expected to be consistently good from television to television (as opposed to a picture that is merely "good enough" on sets that barely meet specification).

Manufacturers often need to break the paradigm of simply assessing whether component parts are "good enough" relative to specification limits. More effort should be given to identifying continuous response specifications that are an important part of meeting the needs and desires of customers. These responses should then be monitored to assess product performance relative to nominal specification criteria. If a probability plot or Taguchi loss function analysis indicates that a response parameter needs improvement, statistical tools are often useful for gaining insight into the effects of various process input parameters. Design of Experiments (DOE), for example, can be used to efficiently identify the most beneficial process parameters to change.
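As one concrete illustration of the DOE idea, the sketch below (not from the article) builds a two-level full factorial design for three hypothetical process inputs and estimates their main effects; the factor names and response values are invented purely to show the mechanics:

    # Sketch: 2^3 full factorial design with hypothetical factors and
    # made-up responses, used only to show how main effects are estimated.
    import itertools
    import numpy as np

    factors = ["temperature", "pressure", "speed"]        # hypothetical
    design = np.array(list(itertools.product([-1, 1], repeat=3)))

    # Hypothetical measured responses, one per run of the eight-run design.
    y = np.array([76.9, 77.4, 76.2, 76.8, 75.2, 75.6, 74.9, 75.3])

    # Main effect of a factor: mean response at its high level minus the
    # mean response at its low level.
    for j, name in enumerate(factors):
        effect = y[design[:, j] == 1].mean() - y[design[:, j] == -1].mean()
        print(f"{name:12s} main effect: {effect:+.2f}")

A factor with a large estimated effect would be a leading candidate for adjustment when trying to move the mean toward nominal or to reduce variability.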

Management should consider the questions they are asking of their subordinates. The wording of their current questions might be leading employees to an inefficient 100% go/no-go evaluation instead of an efficient sampling plan using continuous variables. By simply rewording their requests, management could get more information with fewer resources.