

OVERVIEW OF STATISTICAL METHODS
Enumerative versus analytic statistical methods

How would you respond to the following question?

A sample of 100 bottles taken from a filling process has an average of 11.95 ounces and a standard deviation of 0.1 ounce. The specifications are 11.9-12.1 ounces. Based on these results, should you

a. Do nothing?
b. Adjust the average to precisely 12 ounces?




c. Compute a confidence interval about the mean and adjust the process if the nominal fill level is not within the confidence interval?

d. None of the above?

The correct answer is d, none of the above. The other choices all make the mistake of applying enumerative statistical concepts to an analytic statistical situation. In short, the questions themselves are wrong! For example, based on the data, there is no way to determine if doing nothing is appropriate. "Doing something" implies that there is a known cause and effect mechanism which can be employed to reach a known objective. There is nothing to suggest that this situation exists. Thus, we can't simply adjust the process average to the nominal value of 12 ounces, even though the process appears to be 5 standard errors below this value. This might have happened because the first 50 were 10 standard errors below the nominal and the last 50 were exactly equal to the nominal (or any of a nearly infinite number of other possible scenarios). The confidence interval calculation fails for the same reason. Figure 9.6 illustrates some processes that could produce the statistics provided above.
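As a quick check of where the "5 standard errors" figure comes from, a minimal sketch in Python using only the numbers quoted in the example above:

    n, xbar, s = 100, 11.95, 0.1
    nominal = 12.0
    std_error = s / n ** 0.5               # 0.1 / 10 = 0.01 ounce
    print((nominal - xbar) / std_error)    # about 5 standard errors below nominal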

Some appropriate analytic statistics questions might be:
. Is the process central tendency stable over time?
. Is the process dispersion stable over time?
. Does the process distribution change over time?
If any of the above are answered "no," then what is the cause of the instability? To help answer this question, ask "what is the nature of the variation as revealed by the patterns?" when plotted in time-sequence and stratified in various ways.

If none of the above are answered "no," then, and only then, we can ask such questions as

. Is the process meeting the requirements?

. Can the process meet the requirements?

. Can the process be improved by recentering it?

. How can we reduce variation in the process?

WHAT ARE ENUMERATIVE AND ANALYTIC STUDIES?

Deming (1975) defines enumerative and analytic studies as follows:
Enumerative study: a study in which action will be taken on the universe.
Analytic study: a study in which action will be taken on a process to improve performance in the future.



The term "universe" is defined in the usual way: the entire group of interest, e.g., people, material, units of product, which possess certain properties of interest. An example of an enumerative study would be sampling an isolated lot to determine the quality of the lot.

In an analytic study the focus is on a process and how to improve it. The focus is the future. Thus, unlike enumerative studies which make inferences about the universe actually studied, analytic studies are interested in a universe which has yet to be produced. Table 9.2 compares analytic studies with enumerative studies (Provost, 1988).

Deming (1986) points out that "Analysis of variance, t-tests, confidence intervals, and other statistical techniques taught in the books, however interesting, are inappropriate because they provide no basis for prediction and because they bury the information contained in the order of production."


Figure 9.6. Possible processes with similar means and sigmas.


These traditional statistical methods have their place, but they are widely abused in the real world. When this is the case, statistics do more to cloud the issue than to enlighten.

Analytic study methods provide information for inductive thinking, rather than the largely deductive approach of enumerative statistics. Analytic methods are primarily graphical devices such as run charts, control charts, histograms, interrelationship digraphs, etc. Analytic statistics provide operational guidelines, rather than precise calculations of probability. Thus, such statements as "There is a 0.13% probability of a Type I error when acting on a point outside a three-sigma control limit" are false (the author admits to having made this error in the past). The future cannot be predicted with a known level of confidence. Instead, based on knowledge obtained from every source, including analytic studies, one can state that one has a certain degree of belief (e.g., high, low) that such and such will result from such and such action on a process.
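For context, the 0.13% figure is simply the one-sided tail area beyond a three-sigma limit under a normal model, an enumerative calculation; a minimal check using only the Python standard library:

    import math

    # One-sided tail area beyond three sigma for a normal distribution.
    p = 0.5 * math.erfc(3 / math.sqrt(2))
    print(f"{p:.5f}")   # about 0.00135, i.e., roughly 0.13%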

Another difference between the two types of studies is that enumerative statistics proceed from predetermined hypotheses while analytic studies try to help the analyst generate new hypotheses.


Table 9.2. Important aspects of analytic studies.

ITEM                          ENUMERATIVE STUDY        ANALYTIC STUDY

Aim                           Parameter estimation     Prediction

Focus                         Universe                 Process

Method of access              Counts, statistics       Models of the process (e.g., flow
                                                       charts, cause and effects,
                                                       mathematical models)

Major source of uncertainty   Sampling variation       Extrapolation into the future

Uncertainty quantifiable?     Yes                      No

Environment for the study     Static                   Dynamic


In the past, this extremely worthwhile approach has been criticized by some statisticians as "fishing" or "rationalizing." However, this author believes that using data to develop plausible explanations retrospectively is a perfectly legitimate way of creating new theories to be tested. To refuse to explore possibilities suggested by data is to take a very limited view of the scope of statistics in quality improvement and control.

Enumerative statistical methods
This section discusses the basic concept of statistical inference. The reader should also consult the Glossary in the Appendix for additional information. Inferential statistics belong to the enumerative class of statistical methods.

The term inference is defined as 1) the act or process of deriving logical conclusions from premises known or assumed to be true, or 2) the act of reasoning from factual knowledge or evidence. Inferential statistics provide information that is used in the process of inference. As can be seen from the definitions, inference involves two domains: the premises and the evidence or factual knowledge. Additionally, there are two conceptual frameworks for addressing premises questions in inference: the design-based approach and the model-based approach.

As discussed by Koch and Gillings (1983), a statistical analysis whose only assumptions are random selection of units or random allocation of units to experimental conditions results in design-based inferences; or, equivalently, randomization-based inferences. The objective is to structure sampling such that the sampled population has the same characteristics as the target population. If this is accomplished then inferences from the sample are said to have internal validity. A limitation on design-based inferences for experimental studies is that formal conclusions are restricted to the finite population of subjects that actually received treatment, that is, they lack external validity. However, if sites and subjects are selected at random from larger eligible sets, then models with random effects provide one possible way of addressing both internal and external validity considerations. One important consideration for external validity is that the sample coverage includes all relevant subpopulations; another is that treatment differences be homogeneous across subpopulations. A common application of design-based inference is the survey.

Alternatively, if assumptions external to the study design are required to extend inferences to the target population, then statistical analyses based on postulated probability distributional forms (e.g., binomial, normal, etc.) or other stochastic processes yield model-based inferences. A focus of distinction between design-based and model-based studies is the population to which the results are generalized rather than the nature of the statistical methods applied.



When using a model-based approach, external validity requires substantive justification for the model's assumptions, as well as statistical evaluation of the assumptions.

Statistical inference is used to provide probabilistic statements regarding a scientific inference. Science attempts to provide answers to basic questions, such as can this machine meet our requirements? Is the quality of this lot within the terms of our contract? Does the new method of processing produce better results than the old? These questions are answered by conducting an experiment, which produces data. If the data vary, then statistical inference is necessary to interpret the answers to the questions posed. A statistical model is developed to describe the probabilistic structure relating the observed data to the quantity of interest (the parameters), i.e., a scientific hypothesis is formulated. Rules are applied to the data and the scientific hypothesis is either rejected or not. In formal tests of a hypothesis, there are usually two mutually exclusive and exhaustive hypotheses formulated: a null hypothesis and an alternate hypothesis. Formal hypothesis testing is discussed later in this chapter.

DISCRETE AND CONTINUOUS DATA
Data are said to be discrete when they take on only a finite number of points that can be represented by the non-negative integers. An example of discrete data is the number of defects in a sample. Data are said to be continuous when they exist on an interval, or on several intervals. An example of continuous data is the measurement of pH. Quality methods exist based on probability functions for both discrete and continuous data.

METHODS OF ENUMERATION
Enumeration involves counting techniques for very large numbers of possible outcomes. This occurs for even surprisingly small sample sizes. In Six Sigma, these methods are commonly used in a wide variety of statistical procedures.
The basis for all of the enumerative methods described here is the multiplication principle. The multiplication principle states that the number of possible outcomes of a series of experiments is equal to the product of the number of outcomes of each experiment. For example, consider flipping a coin twice. On the first flip there are two possible outcomes (heads/tails) and on the second flip there are also two, so the series of two flips has 2 × 2 = 4 possible outcomes.
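A minimal illustration of the coin-flip example, using only the Python standard library to enumerate the outcomes the multiplication principle counts:

    from itertools import product

    outcomes = list(product(["heads", "tails"], repeat=2))
    print(len(outcomes))   # 2 * 2 = 4 possible outcomes for two flips
    print(outcomes)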



Operational definitions
An operational definition is defined as a requirement that includes a means of measurement. "High quality solder" is a requirement that must be operationalized by a clear definition of what "high quality solder" means. This might include verbal descriptions, magnification power, photographs, physical comparison specimens, and many more criteria.

EXAMPLES OF OPERATIONAL DEFINITIONS

1. Operational definition of the Ozone Transport Assessment Group's (OTAG) goal
Goal: To identify and recommend reductions in transported ozone and its precursors which, in combination with other measures, will enable attainment and maintenance of the ozone standard in the OTAG region.


Table 10.7 (cont.)

Measurement Concept: Stability
Interpretation for Attribute Data: The variability between attribute R&R studies at different times.
Suggested Metrics and Comments:
    Metric             Stability Measure for Metric
    Repeatability      Standard deviation of repeatabilities
    Reproducibility    Standard deviation of reproducibilities
    Accuracy           Standard deviation of accuracies
    Bias               Average bias

Measurement Concept: "Linearity"
Interpretation for Attribute Data: When an inspector evaluates items covering the full set of categories, her classifications are consistent across the categories.
Suggested Metrics and Comments: Range of inaccuracy and bias across all categories. Requires knowledge of the "true" value.
Note: Because there is no natural ordering for nominal data, the concept of linearity doesn't really have a precise analog for attribute data on this scale. However, the suggested metrics will highlight interactions between inspectors and specific categories.


Suggested operational definition of the goal:
1. A general modeled reduction in ozone and ozone precursors aloft throughout the OTAG region; and
2. A reduction of ozone and ozone precursors both aloft and at ground level at the boundaries of non-attainment area modeling domains in the OTAG region; and
3. A minimization of increases in peak ground level ozone concentrations in the OTAG region. (This component of the operational definition is in review.)

2. Wellesley College Child Care Policy Research Partnership operational definition of unmet need
1. Standard of comparison to judge the adequacy of neighborhood services: the median availability of services in the larger region (Hampden County).
2. Thus, our definition of unmet need: The difference between the care available in the neighborhood and the median level of care in the surrounding region (stated in terms of child care slots indexed to the age-appropriate child population, i.e., "slots-per-tots").

3. Operational definitions of acids and bases
1. An acid is any substance that increases the concentration of the H+ ion when it dissolves in water.
2. A base is any substance that increases the concentration of the OH- ion when it dissolves in water.

4. Operational definition of "intelligence"
1. Administer the Stanford-Binet IQ test to a person and score the result. The person's intelligence is the score on the test.

5. Operational definition of "dark blue carpet"
A carpet will be deemed to be dark blue if
1. Judged by an inspector medically certified as having passed the U.S. Air Force test for color-blindness
1.1. It matches the PANTONE color card 7462 C when both carpet and card are illuminated by GE "cool white" fluorescent tubes;
1.2. Card and carpet are viewed at a distance between 16 inches and 24 inches.

HOW TO CONDUCT ATTRIBUTE INSPECTION STUDIES
Some commonly used approaches to attribute inspection analysis are shown in Table 10.8.



assumptions, and it is simple. Resampling doesn't impose as much baggage between the engineering problem and the statistical result as conventional methods. It can also be used for more advanced problems, such as modeling, design of experiments, etc.

For a discussion of the theory behind resampling, see Efron (1982). For a presentation of numerous examples using a resampling computer program see Simon (1992).

PRINCIPLES OF STATISTICAL PROCESS CONTROL
Terms and concepts
DISTRIBUTIONS
A central concept in statistical process control (SPC) is that every measurable phenomenon is a statistical distribution. In other words, an observed set of data constitutes a sample of the effects of unknown common causes. It follows that, after we have done everything to eliminate special causes of variation, there will still remain a certain amount of variability exhibiting the state of control. Figure 9.25 illustrates the relationships between common causes, special causes, and distributions.

There are three basic properties of a distribution: location, spread, and shape. The location refers to the typical value of the distribution, such as the mean. The spread of the distribution is the amount by which smaller values differ from larger ones. The standard deviation and variance are measures of distribution spread. The shape of a distribution is its pattern: peakedness, symmetry, etc. A given phenomenon may have any one of a number of distribution shapes, e.g., the distribution may be bell-shaped, rectangular-shaped, etc.
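A minimal sketch of these three properties computed from sample data (Python standard library only; the data values and the moment-based skewness formula are illustrative choices, not taken from the text):

    import statistics

    data = [11.93, 11.95, 11.97, 11.94, 12.01, 11.96, 11.95, 11.98]

    location = statistics.mean(data)        # typical value
    spread = statistics.stdev(data)         # sample standard deviation
    # A simple moment-based shape measure: skewness near 0 suggests symmetry.
    m, sd = location, statistics.pstdev(data)
    shape = sum((x - m) ** 3 for x in data) / (len(data) * sd ** 3)

    print(location, spread, shape)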


Figure 9.25. Distributions.
From Continuing Process Control and Process Capability Improvement, p. 4a. Copyright © 1983 by Ford Motor Company. Used by permission of the publisher.


CENTRAL LIMIT THEOREM
The central limit theorem can be stated as follows:

Irrespective of the shape of the distribution of the population or universe, the distribution of average values of samples drawn from that universe will tend toward a normal distribution as the sample size grows without bound.

It can also be shown that the average of sample averages will equal the average of the universe and that the standard deviation of the averages equals the standard deviation of the universe divided by the square root of the sample size. Shewhart performed experiments that showed that small sample sizes were needed to get approximately normal distributions from even wildly non-normal universes. Figure 9.26 was created by Shewhart using samples of four measurements.
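A minimal simulation of this effect, assuming NumPy is available (the uniform parent distribution, sample count, and seed are illustrative choices, not from the text):

    import numpy as np

    rng = np.random.default_rng(0)

    # Draw many samples of size 4 from a decidedly non-normal (uniform) universe.
    samples = rng.uniform(0, 1, size=(10_000, 4))
    averages = samples.mean(axis=1)

    # The averages center on the universe mean (0.5), and their standard
    # deviation is close to the universe sigma divided by sqrt(4).
    print(averages.mean(), averages.std(ddof=1), (1 / 12) ** 0.5 / 2)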


Figure 9.26. Illustration of the central limit theorem.
From Economic Control of Quality of Manufactured Product, figure 59. Copyright © 1931, 1980 by ASQC Quality Press. Used by permission of the publisher.


The practical implications of the central limit theorem are immense. Consider that without the central limit theorem effects, we would have to develop a separate statistical model for every non-normal distribution encountered in practice. This would be the only way to determine if the system were exhibiting chance variation. Because of the central limit theorem we can use averages of small samples to evaluate any process using the normal distribution. The central limit theorem is the basis for the most powerful of statistical process control tools, Shewhart control charts.

Objectives and benefits
Without SPC, the basis for decisions regarding quality improvement is intuition, after-the-fact product inspection, or seat-of-the-pants "data analysis." SPC provides a scientific basis for decisions regarding process improvement.

PREVENTION VERSUS DETECTION
A process control system is essentially a feedback system that links process outcomes with process inputs. There are four main elements involved: the process itself, information about the process, action taken on the process, and action taken on the output from the process. The way these elements fit together is shown in Figure 9.27.


Figure 9.27. A process control system.


By the process, we mean the whole combination of people, equipment, input materials, methods, and environment that work together to produce output. The performance information is obtained, in part, from evaluation of the process output. The output of a process includes more than product; it also includes information about the operating state of the process such as temperature, cycle times, etc. Action taken on a process is future-oriented in the sense that it will affect output yet to come. Action on the output is past-oriented because it involves detecting out-of-specification output that has already been produced.

There has been a tendency in the past to concentrate attention on the detection-oriented strategy of product inspection. With this approach, we wait until an output has been produced, then the output is inspected and either accepted or rejected. SPC takes you in a completely different direction: improvement in the future. A key concept is the smaller the variation around the target, the better. Thus, under this school of thought, it is not enough to merely meet the requirements; continuous improvement is called for even if the requirements are already being met. The concept of never-ending, continuous improvement is at the heart of SPC and Six Sigma.

Common and special causes of variation
Shewhart (1931, 1980) defined control as follows:

A phenomenon will be said to be controlled when, through the use of past experience, we can predict, at least within limits, how the phenomenon may be expected to vary in the future. Here it is understood that prediction within limits means that we can state, at least approximately, the probability that the observed phenomenon will fall within the given limits.

The critical point in this definition is that control is not defined as the complete absence of variation. Control is simply a state where all variation is predictable variation. A controlled process isn't necessarily a sign of good management, nor is an out-of-control process necessarily producing non-conforming product.

In all forms of prediction there is an element of risk. For our purposes, we will call any unknown random cause of variation a chance cause or a common cause; the terms are synonymous and will be used as such. If the influence of any particular chance cause is very small, and if the number of chance causes of variation are very large and relatively constant, we have a situation where the variation is predictable within limits. You can see from the definition above that a system such as this qualifies as a controlled system. Where Dr. Shewhart used the term chance cause, Dr. W. Edwards Deming coined the term common cause to describe the same phenomenon. Both terms are encountered in practice.




Figure 9.28. Should these variations be left to chance?
From Economic Control of Quality of Manufactured Product, p. 13. Copyright © 1931, 1980 by ASQC Quality Press. Used by permission of the publisher.

Figure 9.29. Types of variation.


Needless to say, not all phenomena arise from constant systems of common causes. At times, the variation is caused by a source of variation that is not part of the constant system. These sources of variation were called assignable causes by Shewhart, special causes of variation by Deming. Experience indicates that special causes of variation can usually be found without undue difficulty, leading to a process that is less variable.


Figure 9.30. Charts from Figure 9.28 with control limits shown.
From Economic Control of Quality of Manufactured Product, p. 13. Copyright © 1931, 1980 by ASQC Quality Press. Used by permission of the publisher.


Statistical tools are needed to help us effectively separate the effects of special causes of variation from chance cause variation. This leads us to another definition:

Statistical process control: the use of valid analytical statistical methods to identify the existence of special causes of variation in a process.

The basic rule of statistical process control is:

Variation from common-cause systems should be left to chance, but special causes of variation should be identified and eliminated.

This is Shewhart's original rule. However, the rule should not be misinterpreted as meaning that variation from common causes should be ignored. Rather, common-cause variation is explored "off-line." That is, we look for long-term process improvements to address common-cause variation.

Figure 9.28 illustrates the need for statistical methods to determine the category of variation.

The answer to the question "should these variations be left to chance?" can only be obtained through the use of statistical methods. Figure 9.29 illustrates the basic concept.

In short, variation between the two "control limits" designated by the dashed lines will be deemed as variation from the common-cause system. Any variability beyond these fixed limits will be assumed to have come from special causes of variation. We will call any system exhibiting only common-cause variation "statistically controlled." It must be noted that the control limits are not simply pulled out of the air; they are calculated from actual process data using valid statistical methods. Figure 9.28 is shown below as Figure 9.30, only with the control limits drawn on it; notice that process (a) is exhibiting variations from special causes, while process (b) is not. This implies that the type of action needed to reduce the variability in each case is of a different nature. Without statistical guidance there could be endless debate over whether special or common causes were to blame for variability.
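A minimal sketch of how such limits are commonly computed for an individuals (X) chart (the data values are illustrative; the constant 1.128 is the standard d2 factor for moving ranges of two):

    def individuals_control_limits(data):
        # Three-sigma control limits estimated from the process data themselves.
        xbar = sum(data) / len(data)
        moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
        mr_bar = sum(moving_ranges) / len(moving_ranges)
        sigma_est = mr_bar / 1.128              # d2 for subgroups of size 2
        return xbar - 3 * sigma_est, xbar, xbar + 3 * sigma_est

    measurements = [11.96, 11.94, 11.97, 11.93, 11.95, 11.98, 11.96, 11.94]
    lcl, center, ucl = individuals_control_limits(measurements)
    print(lcl, center, ucl)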



TAMPERING EFFECTS AND DIAGNOSIS
Tampering occurs when adjustments are made to a process that is in statistical control. Adjusting a controlled process will always increase process variability, an obviously undesirable result. The best means of diagnosing tampering is to conduct a process capability study (see Chapter 13) and to use a control chart to provide guidelines for adjusting the process.

Perhaps the best analysis of the effects of tampering is from Deming (1986). Deming describes four common types of tampering by drawing the analogy of aiming a funnel to hit a desired target. These "funnel rules" are described by Deming (1986, p. 328):

1. "Leave the funnel fixed, aimed at the target, no adjustment."
2. "At drop k (k = 1, 2, 3, . . .) the marble will come to rest at point zk, measured from the target. (In other words, zk is the error at drop k.) Move the funnel the distance -zk from the last position. Memory 1."
3. "Set the funnel at each drop right over the spot -zk, measured from the target. No memory."
4. "Set the funnel at each drop right over the spot (zk) where it last came to rest. No memory."

Rule #1 is the best rule for stable processes. By following this rule, the process average will remain stable and the variance will be minimized. Rule #2 produces a stable output but one with twice the variance of rule #1. Rule #3 results in a system that "explodes," i.e., a symmetrical pattern will appear with a variance that increases without bound. Rule #4 creates a pattern that steadily moves away from the target, without limit (see Figure 12.20).
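A minimal simulation of the four rules, assuming NumPy is available (the standard normal drop error, drop count, and seed are illustrative choices, not from the text):

    import numpy as np

    def funnel(rule, drops=200, seed=0):
        # Simulate one of Deming's funnel rules; returns marble rest positions.
        rng = np.random.default_rng(seed)
        funnel_pos = 0.0
        results = []
        for _ in range(drops):
            z = funnel_pos + rng.normal()   # rest point, measured from the target
            results.append(z)
            if rule == 2:
                funnel_pos -= z             # compensate from the last funnel position
            elif rule == 3:
                funnel_pos = -z             # compensate relative to the target
            elif rule == 4:
                funnel_pos = z              # set funnel over the last rest point
            # rule 1: leave the funnel alone
        return np.array(results)

    for rule in (1, 2, 3, 4):
        # rules 1 and 2 stay near sigma^2 and 2*sigma^2; rules 3 and 4 blow up
        print(rule, funnel(rule).var())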

At first glance, one might wonder about the relevance of such apparently abstract rules. However, upon more careful consideration, one finds many practical situations where these rules apply.

Rule #1 is the ideal situation and it can be approximated by using control charts to guide decision-making. If process adjustments are made only when special causes are indicated and identified, a pattern similar to that produced by rule #1 will result.

Rule #2 has intuitive appeal for many people. It is commonly encountered in such activities as gage calibration (check the standard once and adjust the gage accordingly) or in some automated equipment (using an automatic gage, check the size of the last feature produced and make a compensating adjustment). Since the system produces a stable result, this situation can go unnoticed indefinitely. However, as shown by Taguchi (1986), increased variance translates to poorer quality and higher cost.

The rationale that leads to rule #3 goes something like this: "A measurement was taken and it was found to be 10 units above the desired target. This happened because the process was set 10 units too high. I want the average to equal the target. To accomplish this I must try to get the next unit to be 10 units too low." This might be used, for example, in preparing a chemical solution. While reasonable on its face, the result of this approach is a wildly oscillating system.

A common example of rule #4 is the "train-the-trainer" method. A master spends a short time training a group of "experts," who then train others, who train others, etc. An example is on-the-job training. Another is creating a setup by using a piece from the last job. Yet another is a gage calibration system where standards are used to create other standards, which are used to create still others, and so on. Just how far the final result will be from the ideal depends on how many levels deep the scheme has progressed.

SHORT RUN STATISTICAL PROCESS CONTROL TECHNIQUES

Short production runs are a way of life with many manufacturing companies. In the future, this will be the case even more often. The trend in manufacturing has been toward smaller production runs with product tailored to the specific needs of individual customers. Henry Ford's days of "the customer can have any color, as long as it's black" have long since passed.

Classical SPC methods, such as X-bar and R charts, were developed in the era of mass production of identical parts. Production runs often lasted for weeks, months, or even years. Many of the "SPC rules of thumb" currently in use assume that environment of long, stable production runs.


Figure 12.20. Funnel rule simulation results.
Drops 1-50: Rule #1; drops 51-100: Rule #2; drops 101-150: Rule #3; drops 151-200: Rule #4.
